Open source technology has been a driving force behind many of the most innovative developments of the digital age, so it should come as no surprise that it has made its way into artificial intelligence as well.
But with AI’s impact on the world still an open question, the prospect of open source tools, libraries, and communities building AI projects in their usual wild-west fashion is generating yet more unease among some observers.
Why worry?
Open source supporters, of course, reject these fears, arguing that there is just as little oversight into the corporate-dominated activities of closed platforms. In fact, open source can be more readily tracked and monitored because it is, well, open for all to see. And this leaves us with the same question that has bedeviled technology advances through the ages: Is it better to let these powerful tools grow and evolve as they will, or should we try to control them? And if so, how and to what extent?
If anything, says Analytics Insight’s Adilin Beatrice, open source has fueled the advance of AI by streamlining the development process. There is no shortage of free, open source platforms capable of implementing even complex forms of AI like machine learning, which expands the scope of AI development in general and allows developers to make maximum use of available data. Tools like Weka, for instance, let coders quickly integrate data mining and other functions into their projects without having to write everything from scratch. Google’s TensorFlow, meanwhile, is one of the most popular end-to-end machine learning platforms on the market.
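To make that concrete, here is a minimal sketch of what “not writing it all from scratch” looks like in practice with TensorFlow’s high-level Keras API. The dataset and hyperparameters here are illustrative choices for the sake of the example, not anything specified by the article:

import tensorflow as tf

# Load a small benchmark dataset that ships with the library
# (MNIST is an illustrative choice).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Define a simple feed-forward classifier with the high-level Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# The training loop, loss computation, and evaluation all come
# with the open source framework.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Nothing here had to be built by hand: the dataset loader, the layer implementations, and the optimizer are all part of the freely available framework, which is precisely the development speedup open source advocates point to.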
And just as we’ve seen in other digital initiatives, like virtualization and the cloud, companies are starting to mix and match various open source solutions to create a broad range of intelligent applications. Neuron7.ai recently unveiled a field service system capable of providing everything from self-help portals to traffic optimization tools. The system leverages multiple open source AI engines, including TensorFlow, allowing it not only to ingest vast amounts of unstructured data from multiple sources, such as CRM and messaging systems, but also to encapsulate the experiences of field techs and customers to improve accuracy and identify additional opportunities for automation.
A blind eye
One would think that, with open source technology playing such a significant role in the development of AI, it would be at the top of the agenda for policymakers. But according to Alex Engler of the Brookings Institution, it is virtually off the radar. While the U.S. government has addressed open source with measures like the Federal Source Code Policy, more recent discussions of possible AI regulations mention it only in passing. In Europe, Engler says, open source regulations lack any clear link to AI policies and strategies, and the most recently proposed updates to those measures do not mention open source at all.
Engler adds that this lack of attention could produce two negative outcomes. First, AI initiatives could fail to capitalize on the strengths that open source software brings to development, including key capabilities like accelerating development itself and reducing bias and other unwanted outcomes. Second, dominance in open source solutions could translate into dominance in AI. Open source tends to create de facto standards in the tech industry, and while top open source releases from Google, Facebook, and others are freely available, the vast majority of projects they support are created within the company that developed the framework, giving that company an advantage in the resulting ecosystem.
This, of course, leads us back to the same dilemma that has plagued emerging technologies from the beginning, says the IEEE’s Ned Potter. Who should draw the roadmap for AI to ensure it has a positive impact on society? Tech companies? The government? Academia? Or should it simply be democratized, with the market left to sort things out? Open source supporters tend to favor a free hand, of course, on the theory that continual scrutiny by the community will organically push bad ideas to the bottom and elevate good ones to the top. But this still does not guarantee a positive outcome, particularly as AI becomes accessible to the broader public.
In the end, of course, there are no guarantees. If we’ve learned anything from the past, it is that mistakes are just as likely to come from private industry as from government regulators or individual operators. But there is a big difference between watching and regulating. At the very least, there should be mechanisms in place to track how open source technologies are influencing AI development, so that someone can sound a warning if things start heading in the wrong direction.