OLMo 2 32B is Allen AI's fully open 32B-parameter language model and the largest in the OLMo 2 family. It was pretrained on 6T tokens from the Dolma dataset and post-trained with the Tülu 3 recipe (SFT, DPO, and RLVR). It is the first fully open model to outperform GPT-3.5 and GPT-4o mini on academic benchmarks.