DenseRewardRLHF-PPO
Collection
This repository contains the released models for our paper *Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models*.
This is the bandit-reward-based PPO model introduced in the preprint *Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models* (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
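
As a quick start, the sketch below shows one way to load the checkpoint and generate a response with the `transformers` library. The model id is a placeholder, not the actual repository name (use the id of the specific model from this collection), and the snippet assumes the checkpoint ships a chat template, which is typical for RLHF-trained chat models.

```python
# Minimal sketch: load the PPO-trained checkpoint and generate a response.
# The model id below is hypothetical; substitute the real id from this collection.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DenseRewardRLHF-PPO/bandit-ppo-llama-3.1-8b"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # place weights on available GPU(s)/CPU
)

# Assumes the tokenizer defines a chat template for formatting the prompt.
messages = [{"role": "user", "content": "Explain PPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```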
Base model: meta-llama/Llama-3.1-8B