    Data Science Chair

    LLäMmlein 1B & 120M

    We created two German-only decoder models, LLäMmlein 120M and 1B, from scratch.

    The project involved several key steps: extensive data preprocessing, the creation of a custom tokenizer, and optimization of the training setup to make effective use of the available hardware. Throughout training, we saved and analyzed numerous checkpoints to monitor the models' learning dynamics. Compared to state-of-the-art models on the SuperGLEBer benchmark, both LLäMmlein models performed competitively, consistently matching or surpassing models of similar parameter size. LLäMmlein 1B also achieved results comparable to larger models, with no significant performance difference observed.


    Resources

    Preprint now available

    Download the base models (120M & 1B, including a Bavarian preview!) and the chat-tuned models (1B)!

    We also publish intermediate training checkpoints for our base models. For the 120M model, for example, they are available as separate branches on the model repository (select them via the branch drop-down menu, labeled "main", at the top left).
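As a rough sketch of how the published models and checkpoints could be loaded with the Hugging Face `transformers` library: note that the repository id and the branch-naming helper below are assumptions for illustration, not confirmed by this page — check the actual repositories on the Hub for the real names.

```python
# Hedged sketch: loading a LLäMmlein base model from the Hugging Face Hub.
# The repo id below is an assumption based on the project's research group
# (LSX-UniWue); verify it on the Hub before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_REPO = "LSX-UniWue/LLaMmlein_120M"  # assumed repository id


def checkpoint_revision(step: int) -> str:
    """Hypothetical branch name for an intermediate training checkpoint.

    The actual branch names are listed in the repository's branch
    drop-down menu; this naming scheme is purely illustrative.
    """
    return f"iter-{step:07d}"


if __name__ == "__main__":
    # Load the final base model from the default "main" branch.
    tokenizer = AutoTokenizer.from_pretrained(BASE_REPO)
    model = AutoModelForCausalLM.from_pretrained(BASE_REPO)

    # To load an intermediate checkpoint instead, pass its branch name
    # via the `revision` argument of from_pretrained:
    # model = AutoModelForCausalLM.from_pretrained(
    #     BASE_REPO, revision=checkpoint_revision(100000)
    # )
```

The `revision` argument of `from_pretrained` is the standard `transformers` mechanism for selecting a branch, tag, or commit of a Hub repository, which is how the checkpoint branches mentioned above would be accessed.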

    Code and data coming soon :)

    SuperGLEBer Benchmark Results

    Our LLäMmlein 1B ranks as the best decoder model evaluated overall on our SuperGLEBer benchmark! Both LLäMmlein models consistently match or surpass models of similar parameter size across the benchmark's tasks.