

The terms “artificial intelligence” and “machine learning” are often used interchangeably, but there’s an important difference between the two. AI is an umbrella term for a range of techniques that allow computers to learn and act like humans. Put another way, AI is the computer being smart. Machine learning, however, accounts for how the computer becomes smart.

But there’s a reason the two are often conflated: The vast majority of AI today is based on machine learning. Enterprises across sectors are prioritizing it for various use cases across their organizations, and the subfield tops AI funding globally by a significant margin. In the first quarter of 2019 alone, a whopping $28.5 billion was allocated to machine learning research. Overall, the machine learning market is expected to grow from around $1 billion in 2016 to $8.81 billion by 2022. When VentureBeat collected thoughts from the top minds across the field, they had a variety of predictions to share. But one takeaway was that machine learning is continuing to shape business and society at large.

Rise of machine learning

While AI is ubiquitous today, there were times when the whole field was thought to be a dud. After initial advancements and a lot of hype in the mid-to-late 1950s and 1960s, breakthroughs stalled and expectations went unmet. There wasn’t enough computing power to bring the potential to life, and running such systems cost exorbitant amounts of money. This caused both interest and funding to dry up in what was dubbed the “AI winter.”

The pursuit later picked up again in the 1980s, thanks to a boost in research funds and expansion of the algorithmic toolkit. But it didn’t last, and there was yet another decade-long AI winter.

Then two major changes occurred that directly enabled AI as we know it today. Artificial intelligence efforts shifted from rule-based systems to machine learning techniques that could use data to learn without being explicitly programmed. And at the same time, the World Wide Web became ubiquitous in the homes (and then hands) of millions (and eventually billions) of people around the world. This created the explosion of data and data sharing on which machine learning relies.

How machine learning works

Machine learning enables a computer to “think” without being explicitly programmed. Instead of hand-coding the steps to accomplish a specific task, as with traditional software, you provide data and describe what you want the program to do.

The computer trains itself with that data, then uses algorithms to carry out your desired task. It also collects more data as it goes, getting “smarter” over time. A key part of how this all works is data labeling. If you want a program to sort photos of ice cream and pepperoni pizza, for example, you first need to label some of the photos to give the algorithm an idea of what ice cream and pepperoni pizza each look like.
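The labeled-training idea above can be sketched in a few lines of code. This is a toy illustration, not a production classifier: it stands in for image recognition with two made-up numeric features (“roundness” and “redness”) and learns by averaging the labeled examples for each class, a simple nearest-centroid approach.

```python
def train(labeled_examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the given features."""
    def sq_distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_distance(centroids[label]))

# Hand-labeled training data: (roundness, redness) -> label.
# These feature values are invented for illustration.
training_data = [
    ([0.9, 0.2], "ice cream"),   # round scoop, pale colors
    ([0.8, 0.1], "ice cream"),
    ([0.3, 0.9], "pizza"),       # flat slice, red sauce
    ([0.2, 0.8], "pizza"),
]

centroids = train(training_data)
print(predict(centroids, [0.85, 0.15]))  # -> ice cream
print(predict(centroids, [0.25, 0.85]))  # -> pizza
```

The point is the workflow, not the algorithm: humans supply labels, the program generalizes from them, and new, unlabeled examples are then sorted automatically.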

This labeling also highlights a key difference between machine learning in general and a popular subset of the field called deep learning. Deep learning requires far less hand-crafted labeling of individual features (though labeled examples are still typically needed), instead relying on neural networks, which are inspired by the human brain in both structure and name. To sort the photos of ice cream and pepperoni pizza using this technique, you instead have to provide a significantly larger set of photos. The computer then puts the photos through several layers of processing, which make up the neural network, to distinguish the ice cream from the pepperoni pizza one step at a time. Earlier layers look at basic properties like lines or edges between light and dark parts of the images, while subsequent layers identify more complex features like shapes or even faces.

Applications

Machine learning and its subsets are useful for a wide range of problems, tasks, and applications. There’s computer vision, which allows computers to “see” and make sense of images and videos. Natural language processing (NLP), a fast-growing area of machine learning, allows computers to extract meaning from unstructured text. And there’s voice and speech recognition, which powers services like Amazon’s Alexa and Apple’s Siri and introduced many consumers to AI for the first time.

Across industries, enterprises are using machine learning in their products as well as internally within their organizations. Machine learning can simplify, streamline, and enhance supply chain operations, for example. It’s also widely used for business analytics, security, sales, and marketing. Machine learning has even been used to help fight COVID-19. Facebook leans on machine learning to take down harmful content. Google uses it to improve search. And American Express recently tapped NLP for its customer service chatbots and to run a predictive search capability inside its app. The list goes on and on.

Limitations and challenges

While machine learning holds promise and is already benefiting enterprises around the globe, there are challenges and issues associated with the field. For example, machine learning is useful for recognizing patterns, but it doesn’t perform well when it comes to generalizing knowledge. For users, there’s also the issue of “algorithm fatigue.”

Some of the issues related to machine learning have significant consequences that are already playing out today. The lack of explainability and interpretability, known as the “black box problem,” is one. Machine learning models arrive at their behaviors and decisions in ways that even their creators can’t fully trace. This makes it difficult to fix errors and to ensure the information a model puts out is accurate and fair. When people noticed Apple’s credit card algorithm was offering women significantly smaller lines of credit than men, for example, the company couldn’t explain why and didn’t know how to fix the issue.

This is related to the most significant issue plaguing the field: data and algorithmic bias. Since the technology’s inception, machine learning models have routinely been built on data that was collected and labeled in biased ways, sometimes for specifically biased purposes. Algorithms have repeatedly been found to be biased against women, Black people, and other marginalized groups. Researchers at Google’s DeepMind, one of the world’s top AI labs, warned the technology poses a threat to individuals who identify as queer.

This issue is widespread and widely known, but there is resistance to taking the significant action many in the field insist is necessary. Google itself fired the co-leads of its ethical AI team, Timnit Gebru and Margaret Mitchell, in what thousands of the company’s employees called a “retaliatory firing,” after Gebru refused to retract research on the risks of deploying large language models. And in a survey of researchers, policy leaders, and activists, the majority said they worry the evolution of AI through 2030 will remain focused primarily on optimizing profit and social control at the expense of ethics. Legislation regarding AI, especially its immediately and obviously harmful uses, like facial recognition for policing, is being debated and adopted across the country. These deliberations will likely continue, and changing data privacy laws will soon affect data collection, and thus machine learning, as well.

