In this project, I fine-tuned a compact language model (Llama-3.2-3B) to translate Fortran code into Rust, enabling efficient deployment on edge devices. Using knowledge distillation from GPT-4 outputs and LoRA adapters, I reduced the number of trainable parameters by 99.8%. The resulting model is deployed on Hugging Face with a Gradio-powered web interface and an API for seamless user interaction. This work demonstrates the potential of small language models for high-accuracy code translation in resource-constrained environments.
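To see where a ~99% reduction comes from, here is a rough back-of-the-envelope calculation (not the project's exact configuration — the rank and target modules below are assumptions for illustration): LoRA freezes the base weights and trains only two small low-rank matrices per adapted projection.

```python
# Illustrative arithmetic: trainable-parameter count when LoRA adapters
# are attached to the attention projections of a ~3B-parameter model.
base_params = 3_000_000_000  # Llama-3.2-3B, rounded

r = 16        # LoRA rank (assumed, a common default)
d = 3072      # hidden size of Llama-3.2-3B
layers = 28   # transformer layers in Llama-3.2-3B
targets = 4   # q/k/v/o projections per layer (assumed; k/v are
              # actually smaller under grouped-query attention)

# Each adapted d x d weight gains an A matrix (d x r) and a B matrix (r x d).
lora_params = layers * targets * (d * r + r * d)
fraction = lora_params / base_params

print(f"LoRA trainable params: {lora_params:,}")   # ~11 million
print(f"Trainable fraction:    {fraction:.3%}")    # well under 1%
```

Under these assumptions only about 11M of 3B parameters are trained, i.e. 99%+ of the model stays frozen, which is what makes fine-tuning feasible on modest hardware.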
GitHub repo: github.com/CodeTranslatorLLM/LinguistLLM/tree/main
Copyright © Vanessa Huang, modified by Caslow Chien, 2024.