The Stanford Center for Research on Foundation Models (CRFM) and MosaicML announce the release of PubMed GPT, a purpose-built AI model trained to interpret biomedical language.
(15 Dec 2022) Large language models (LLMs) offer impressive capabilities for general-purpose natural language generation, image generation, speech synthesis, and multi-modal combinations of these applications. But can we do even better when we know a model will be used in an industry-specific setting?
Today we announce the results of a partnership between MosaicML and the Stanford Center for Research on Foundation Models (CRFM) that demonstrates the capabilities of industry-specific large language models, specifically for the field of biomedicine. Using the MosaicML Cloud platform, CRFM trained a 2.7B-parameter GPT on biomedical data from PubMed that achieves state-of-the-art results on medical question-answering text from the US Medical Licensing Exam (USMLE), highlighting the promise of domain-specific language generation models in real-world applications.
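For readers who want to try a model like this themselves, the sketch below shows one way to load a released checkpoint and sample a biomedical completion with the Hugging Face transformers library. The checkpoint identifier and the decoding settings are assumptions for illustration, not the exact configuration used in this work; consult the release page for the official model name and recommended usage.

```python
# Minimal sketch: load a causal GPT-style checkpoint and generate a short
# biomedical completion. The model identifier below is an assumption --
# confirm the exact name on the official release page.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stanford-crfm/BioMedLM"  # assumed identifier for the released 2.7B model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Metformin is a first-line therapy for"
inputs = tokenizer(prompt, return_tensors="pt")

# Decoding parameters here are illustrative, not the authors' settings.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```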
Find out more in the press release.