Using the Project Gutenberg dataset and the works of Shakespeare, we fine-tuned the LLM until it could reliably reproduce a defined text style.
Oxagile's team chose the Hugging Face infrastructure to train the large language model.
A Parameter-Efficient Fine-Tuning (PEFT) approach let us train only a small subset of the model's parameters while keeping the LLM's original pre-trained weights frozen.
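To make the PEFT idea concrete, here is a minimal LoRA-style sketch in plain NumPy. The layer sizes, rank, and "gradient step" are toy values chosen for illustration, not the project's actual configuration; the point is only that the frozen weight matrix `W` never changes, while the small adapter factors `A` and `B` do.

```python
import numpy as np

# Toy LoRA-style adapter: the pre-trained weight W is frozen, and training
# touches only the low-rank factors A and B (effective weight = W + A @ B).
rng = np.random.default_rng(0)
d, r = 8, 2                       # hidden size and adapter rank (r << d)
W = rng.normal(size=(d, d))      # frozen pre-trained weights
A = np.zeros((d, r))             # trainable low-rank factor (init to zero)
B = rng.normal(size=(r, d)) * 0.01  # trainable low-rank factor

def forward(x):
    # Only A and B would receive gradient updates during fine-tuning.
    return x @ (W + A @ B)

# One illustrative "update" modifies the adapter, never the base weights.
W_before = W.copy()
A += 0.1 * rng.normal(size=A.shape)   # stand-in for a gradient step
assert np.array_equal(W, W_before)    # original weights stay frozen

x = rng.normal(size=(1, d))
print(forward(x).shape)  # (1, 8)
```

Even in this toy setting the adapter holds 2 * d * r = 32 trainable values against d * d = 64 frozen ones; at real LLM scale the ratio is far more dramatic, which is what makes PEFT affordable.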
For evaluation, we used a quantitative method: first encoding the text numerically so that outputs could be compared, and then setting a threshold for what counts as "Shakespearean enough".
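One way such an encode-compare-threshold evaluation can be sketched is with vector similarity. The bag-of-words encoder, the sample texts, and the 0.5 cut-off below are all illustrative stand-ins, not the project's actual encoder or threshold:

```python
import math
from collections import Counter

def encode(text):
    # Illustrative encoder: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

THRESHOLD = 0.5  # hypothetical cut-off for "Shakespearean enough"

reference = encode("shall I compare thee to a summer's day thou art more lovely")
candidate = encode("shall I compare thee to a winter's night thou art more fair")

score = cosine(reference, candidate)
print(score >= THRESHOLD)  # True
```

A production pipeline would likely swap the bag-of-words encoder for learned embeddings, but the decision logic (similarity score versus a calibrated threshold) stays the same.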
First things first, Oxagile figured out which ML algorithms would work best for classifying lung sounds and could serve as the cornerstone of a smart stethoscope solution.
Once the algorithms were sorted, it was up to our team to make sure we could accurately identify different sound classes. We were determined to push the limits of the neural network’s accuracy, so here’s what we did:
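Purely as an illustration of the sound-classification task described above, and not the project's actual model or sound classes, here is a toy nearest-centroid classifier over two crude spectral features. The class names ("normal", "wheeze"), the synthetic signals, and the 4 kHz sample rate are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
SR = 4000  # assumed sample rate in Hz (one-second clips)

def features(signal):
    # Two crude features: RMS energy and the dominant frequency bin.
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([np.sqrt(np.mean(signal ** 2)), float(np.argmax(spectrum))])

def make_tone(freq, amp):
    # Synthetic stand-in for a lung-sound recording: a noisy sine wave.
    t = np.arange(SR) / SR
    return amp * np.sin(2 * np.pi * freq * t) + 0.01 * rng.normal(size=SR)

# Hypothetical classes: low- vs high-frequency sounds.
train = {
    "normal": [make_tone(100, 0.5) for _ in range(5)],
    "wheeze": [make_tone(400, 0.5) for _ in range(5)],
}
centroids = {label: np.mean([features(s) for s in sigs], axis=0)
             for label, sigs in train.items()}

def classify(signal):
    # Assign the label whose feature centroid is nearest.
    f = features(signal)
    return min(centroids, key=lambda lab: np.linalg.norm(centroids[lab] - f))

print(classify(make_tone(390, 0.5)))  # wheeze
```

A real pipeline would use richer features (e.g. mel spectrograms) and a neural network rather than centroids, but the structure (feature extraction, then assignment to a sound class) is the same.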