Stanford

Stanford University's release of the Alpaca model - which showed that a weaker foundation LLM can be fine-tuned to cheaply mimic a stronger model - has major business and scientific implications. "Self-instruct" - the pattern of fine-tuning a model on automatically generated demonstrations/training data - is nothing new, but Stanford showed that it works at LLM scale and costs only about $600. If Stanford's results are valid (and that's a big "if"), here are the major implications.
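
To make the self-instruct pattern concrete, here is a minimal sketch of the core loop: sample a few seed tasks, ask a stronger "teacher" model to invent a new task in the same format, filter malformed generations, and save the pairs for supervised fine-tuning. Everything here is an illustrative assumption, not Stanford's actual pipeline: `query_teacher`, `SEED_TASKS`, and the prompt format are placeholders you would replace with a real teacher-model client and the full seed set.

```python
import json
import random

# ASSUMPTION: placeholder for a call to the stronger teacher model
# (Alpaca used the text-davinci-003 API); wire in a real client here.
def query_teacher(prompt: str) -> str:
    raise NotImplementedError("connect your teacher-model API here")

# ASSUMPTION: tiny stand-in for the human-written seed tasks
# (the original recipe starts from a few hundred of these).
SEED_TASKS = [
    {"instruction": "Explain photosynthesis in one sentence.",
     "output": "Plants convert sunlight, water, and CO2 into sugar and oxygen."},
    {"instruction": "Give an antonym for 'generous'.",
     "output": "Stingy."},
    {"instruction": "Summarize why the sky is blue.",
     "output": "Air molecules scatter short blue wavelengths of sunlight most."},
]

def make_generation_prompt(seeds, k=2):
    """Build a few-shot prompt asking the teacher to invent one new task."""
    examples = random.sample(seeds, k)
    shots = "\n\n".join(
        f"Instruction: {ex['instruction']}\nOutput: {ex['output']}"
        for ex in examples
    )
    return (
        "Below are example tasks. Write one NEW task in the same format.\n\n"
        f"{shots}\n\nInstruction:"
    )

def generate_dataset(n_examples: int):
    """Collect auto-generated (instruction, output) pairs for fine-tuning."""
    data = []
    while len(data) < n_examples:
        completion = query_teacher(make_generation_prompt(SEED_TASKS))
        # Expect "<new instruction>\nOutput: <new output>" back from the teacher.
        if "\nOutput:" not in completion:
            continue  # discard malformed generations
        instruction, output = completion.split("\nOutput:", 1)
        data.append({"instruction": instruction.strip(),
                     "output": output.strip()})
    return data

if __name__ == "__main__":
    dataset = generate_dataset(n_examples=10)
    with open("self_instruct_data.json", "w") as f:
        json.dump(dataset, f, indent=2)
```

The resulting JSON file is the "auto-generated training data": the cheap part of the recipe is that the teacher model, not human annotators, writes the demonstrations, and the student LLM is then fine-tuned on them with ordinary supervised learning.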