
Let’s just do this job

Given the likelihood that AI will increasingly permeate the software and systems we depend on, it’s fair to want these AI models to be open source, but unrealistic to expect it. Vaughan-Nichols blames “the best AI vendors (who) don’t want to commit to open sourcing their programs and datasets,” suggesting that “companies hope to embellish their programs with the positive connotations of open source about transparency, collaboration, and innovation.” Maybe? Or maybe they can’t afford to open source all their code because it turns out to be a really bad deal. I know some people like to lazily gesture at Red Hat as the classic example of what open source business success looks like, but in reality it’s a terrible example compared to Meta, AWS, etc. As Sasha Luccioni of Hugging Face said at the United Nations OSPOs for Good conference, “You can’t expect all companies to be 100% open source, as defined by the open source license. You can’t expect companies to just give up everything they make money on and do it in a way that suits them.”

We might wish reality were different, but after decades of open source and proprietary software coexisting, why would we expect AI to be any different?

As with cloud and on-premises software, most AI software won’t be open source. Now, as then, most developers simply won’t care, because they’re more interested in going to their kids’ soccer games after work than in the existential problems of open source. For years, we’ve focused the conversation about open source on the wrong things, and younger developers have largely tuned it out. But regardless of age, developers care about getting things done. They care about the cost, speed, and performance gains of the latest Mistral model, not so much about its non-open source license. The same goes for OpenAI’s models, Meta’s Llama, and the rest.