The words “artificial intelligence” (AI) have been used to describe the workings of computers for decades, but the precise meaning has shifted over time. Today, AI describes efforts to teach computers to imitate a human’s ability to solve problems and make connections based on insight, understanding and intuition. 

What is artificial intelligence?

Artificial intelligence usually encompasses the growing body of cutting-edge work that aims to train computers to accurately imitate, or in some cases exceed, the capabilities of humans. 

Older algorithms, when they grow commonplace, tend to be pushed out of the tent. For instance, transcribing human voices into words was once an active area of research for scientists exploring artificial intelligence. Now it is a common feature embedded in phones, cars and appliances and it isn’t described with the term as often. 

Today, AI is often applied to several areas of research:

  • Machine vision: Helping computers understand the position of objects in the world through lights and cameras. 
  • Machine learning (ML): The general problem of teaching computers about the world with a training set of examples (a minimal sketch follows this list). 
  • Natural language processing (NLP): Making sense of knowledge encoded in human languages.
  • Robotics: Designing machines that can work with some degree of independence to assist with tasks, especially work that humans can’t do because it may be repetitive, strenuous or dangerous. 
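
To make the machine learning item above concrete, here is a minimal sketch of the train-then-predict loop using scikit-learn and its bundled iris dataset; the model and dataset are illustrative choices, not tied to any product discussed here.

```python
# A minimal sketch of "learning from a training set of examples":
# fit a classifier on labeled data, then predict labels for unseen inputs.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Labeled examples: flower measurements (features) and species (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Teach" the model with the training set, then check it on held-out examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("accuracy on unseen examples:", accuracy_score(y_test, model.predict(X_test)))
```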

There is a wide range of practical applicability to artificial intelligence work. Some chores are well-understood and the algorithms for solving them are already well-developed and rendered in software. They may be far from perfect, but the application is well-defined. Finding the best route for a trip, for instance, is now widely available via navigation applications in cars and on smartphones. 

Other areas are more philosophical. Science fiction authors have been writing about computers developing human-like attitudes and emotions for decades, and some AI researchers have been exploring this possibility. While machines are increasingly able to work autonomously, general questions of sentience, awareness or self-awareness remain open and without a definite answer. 

[Related: ‘Sentient’ artificial intelligence: Have we reached peak AI hype?]

AI researchers often speak of a hierarchy of capability and awareness. The directed tasks at the bottom are often called “narrow AI” or “reactive AI”. These algorithms can solve well-defined problems, sometimes without much direction from humans. Many of the applied AI packages fall into this category. 

The notion of “general AI” or “self-directed AI” applies to software that could think like a human and initiate plans outside of a well-defined framework. There are no good examples of this level of AI at this time, although some developers sometimes like to suggest that their tools are beginning to exhibit some of this independence. 

Beyond this is the idea of “super AI”, a package that can outperform humans in reasoning and initiative. These are largely discussed hypothetically by advanced researchers and science fiction authors. 

In the last decade, many ideas from the AI laboratory have found homes in commercial products. As the AI industry has emerged, many of the leading technology companies have assembled AI products through a mixture of acquisitions and internal development. These products offer a wide range of solutions, and many businesses are experimenting with using them to solve problems for themselves and their customers. 

How are the biggest companies approaching AI?

Leading companies have invested heavily in AI and developed a wide range of products aimed at both developers and end users. Their product lines are increasingly diverse as the companies experiment with different tiers of solutions to a wide range of applied problems. Some are more polished and aimed at the casual computer user. Others are aimed at other programmers who will integrate the AI into their own software to enhance it. The largest companies all offer dozens of products now and it’s hard to summarize their increasingly varied options. 

IBM has long been one of the leaders in AI research. Its Watson system, built to compete on the TV quiz show Jeopardy, helped ignite the recent wave of interest in AI when it beat human champions in 2011, demonstrating how adept the software could be at handling general questions posed in human language. 

Since then, IBM has built a broad collection of applied AI algorithms under the Watson brand name that can automate decisions in a wide range of business applications like risk management, compliance, business workflow and devops. These solutions rely upon a mixture of natural language processing and machine learning to create models that can either make production decisions or watch for anomalies. In one case study of its applications, for instance, the IBM Safer Payments product prevented $115 million worth of credit card fraud. 
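
IBM does not publish the internals of products like Safer Payments, but the general anomaly-watching technique described above can be sketched with an off-the-shelf model; the simulated transaction amounts and the choice of scikit-learn’s IsolationForest below are purely illustrative.

```python
# Hypothetical illustration of flagging unusual transactions -- not IBM's
# implementation, just the generic anomaly-detection idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=50, scale=15, size=(500, 1))       # typical purchase amounts
new_transactions = np.array([[900.0], [1250.0], [40.0]])   # two outliers, one normal

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(new_transactions))  # -1 flags an anomaly, 1 looks normal
```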

Microsoft’s AI platform, to take another example, offers a wide range of algorithms, both as products and as services available through Azure. The company also targets machine learning and computer vision applications and likes to highlight how its tools surface patterns hidden inside extremely large data sets. Its Megatron-Turing Natural Language Generation model (MT-NLG), for instance, has 530 billion parameters to model the nuances of human communication. Microsoft is also working on helping business processes shift from being automated to becoming autonomous by adding more intelligence to handle decision-making. Its autonomous packages are being applied, for instance, to both the narrow problem of keeping assembly lines running smoothly and the wider challenge of navigating drones. 
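
MT-NLG itself is not a model most developers can download, but querying a text-generation model looks broadly similar at any scale. The sketch below stands in with the small, open GPT-2 model via the Hugging Face transformers library; the prompt is invented for illustration.

```python
# Generate a continuation of a prompt with a small open language model.
# GPT-2 here is only a stand-in for much larger models like MT-NLG.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Autonomous systems on an assembly line can"
result = generator(prompt, max_length=50, num_return_sequences=1)
print(result[0]["generated_text"])
```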

Google has developed a strong collection of machine learning and computer vision algorithms that it uses for internal projects like indexing the web and also resells through its cloud platform. It pioneered some of the most popular open-source machine learning frameworks, such as TensorFlow, and has built custom hardware for speeding up the training of models on large data sets. Google’s Vertex AI product, for instance, automates much of the work of turning a data set into a working model that can then be deployed. The company also offers a number of pretrained models for common tasks like optical character recognition or conversational AI that might be used for an automated customer service agent. 
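
As a rough picture of the dataset-to-deployed-model workflow that tools like Vertex AI aim to automate, here is a minimal hand-written TensorFlow/Keras example; the dataset and network architecture are arbitrary illustrative choices.

```python
# Train a tiny image classifier by hand -- the kind of work AutoML tooling automates.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```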

Amazon likewise uses a collection of AI routines internally on its retail website, while marketing the same backend tools to AWS users. Products like Personalize are optimized for offering customers personalized product recommendations. Rekognition offers predeveloped machine vision algorithms for content moderation, facial recognition, and text detection and conversion. These algorithms also come with a prebuilt collection of models of well-known celebrities, a useful tool for media companies. Developers who want to create and train their own models can turn to products like SageMaker, which automates much of the workload for business analysts and data scientists. 
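
Calling Rekognition from code is essentially a thin API request. The sketch below uses boto3’s detect_labels call; it assumes AWS credentials are already configured and that a local photo.jpg exists, and the thresholds are arbitrary.

```python
# Ask Rekognition to label the contents of a local image.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")
with open("photo.jpg", "rb") as image_file:
    response = client.detect_labels(
        Image={"Bytes": image_file.read()},
        MaxLabels=5,
        MinConfidence=80,
    )

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```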

Facebook also uses artificial intelligence to help manage its endless stream of images and text posts. Algorithms for computer vision classify uploaded images, and text algorithms analyze the words in status updates. While the company maintains a strong research team, it does not actively offer standalone AI products for others to use. It does share a number of open-source projects, such as NeuralProphet, a PyTorch-based framework for time-series forecasting. 

Additionally, Oracle is integrating some of the most popular open-source tools, like PyTorch and TensorFlow, into its data storage hierarchy to make it easier and faster to turn information stored in Oracle databases into working models. It also offers a collection of prebuilt AI tools with models for tackling common challenges like anomaly detection or natural language processing. 

How are startups approaching AI? 

New AI companies tend to focus on one particular task, where applied algorithms and a determined focus can produce something transformative. One wide-reaching current challenge, for instance, is producing self-driving cars. Waymo, Pony AI, Cruise Automation and Argo are four well-funded startups building the software and sensor systems that will allow cars to navigate themselves through the streets. The algorithms involve a mixture of machine learning, computer vision and planning. 

Many startups are applying similar algorithms to more limited or predictable domains like warehouses or industrial plants. Companies like Nuro, Bright Machines and Fetch are just some of the many that want to automate warehouses and industrial spaces. Fetch, for example, wants to apply machine vision and planning algorithms to take on repetitive tasks. 

A substantial number of startups are also targeting jobs that are either dangerous to humans or impossible for them to do. Against this backdrop, Hydromea is building autonomous underwater drones that can track submerged assets like oil rigs or mining tools. Another company, Solinus, makes robots for inspecting narrow pipes. 

Many startups are also working in digital domains, in part because these are a natural habitat for algorithms: the data is already in digital form. There are dozens of companies, for instance, working to simplify and automate routine tasks that are part of companies’ digital workflows. This area, sometimes called robotic process automation (RPA), rarely involves physical robots because it works with digital paperwork such as forms and chits. It is nonetheless a popular way for companies to integrate basic AI routines into their software stacks. Good RPA platforms, for example, often use optical character recognition and natural language processing to make sense of uploaded forms and so simplify the office workload. 
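
The OCR-plus-parsing step that such platforms run on an uploaded form can be sketched in a few lines; the form image, field names and regular expressions below are hypothetical, and production RPA tools use far more robust extraction than this.

```python
# Read the text off a scanned form, then pull out two hypothetical fields.
import re

import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("invoice_form.png"))

invoice = re.search(r"Invoice\s*(?:Number|#)\s*[:=]?\s*(\w+)", text, re.IGNORECASE)
total = re.search(r"Total\s*[:=]?\s*\$?([\d,]+\.\d{2})", text, re.IGNORECASE)

print("invoice number:", invoice.group(1) if invoice else "not found")
print("total:", total.group(1) if total else "not found")
```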

Many companies also depend upon open-source software projects with broad participation. Projects like Tensorflow or PyTorch are used throughout research and development organizations in universities and industrial laboratories. Some projects like DeepDetect, a tool for deep learning and decision-making, are also spawning companies that offer mixtures of support and services. 

There are also hundreds of effective and well-known open-source projects used by AI researchers. OpenCV, for instance, offers a large collection of computer vision algorithms that can be adapted and integrated with other stacks. It is used frequently in robotics, medical projects, security applications and many other tasks that rely upon understanding the world through a camera image or video.
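
As a small example of the kind of routine OpenCV makes straightforward, the snippet below detects faces with one of the library’s bundled Haar cascade models; the image path is a placeholder.

```python
# Find faces in an image and draw boxes around them with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("people.jpg")  # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("people_with_boxes.jpg", image)
print(f"found {len(faces)} face(s)")
```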

Is there anything AI can’t do? 

There are some areas where AI finds more success than others. Statistical classification using machine learning is often quite accurate, but it is limited by the breadth of the training data. These algorithms often fail when they are asked to make decisions in new situations or after the environment has shifted substantially from the training corpus. 

Much of the success or failure depends upon how much precision is demanded. AI tends to be more successful when occasional mistakes are tolerable. If users can filter out misclassifications or incorrect responses, AI algorithms are welcomed. For instance, many photo storage sites offer to apply facial recognition algorithms to sort photos by who appears in them. The results are good but not perfect, and users can tolerate the mistakes. The field is largely a statistical game and succeeds when judged on a percentage basis. 

A number of the most successful applications don’t require especially clever or elaborate algorithms, but depend upon a large and well-curated dataset organized by tools that are now manageable. Such problems once seemed impossible because of their scope, until large enough teams tackled them. Navigation and mapping applications like Waze use simple search algorithms to find the best path, but these apps could not succeed without a large, digitized model of the street layouts. 
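
The “simple search” at the heart of routing can be illustrated with Dijkstra’s algorithm over a toy street graph; the intersections and travel times below are invented, and production routers layer many refinements (live traffic, turn costs) on top of this idea.

```python
# Dijkstra's shortest-path search over a tiny, made-up street graph.
# Edge weights are travel times in minutes.
import heapq

graph = {
    "home":    [("main_st", 4), ("side_rd", 2)],
    "side_rd": [("main_st", 1), ("highway", 7)],
    "main_st": [("highway", 3)],
    "highway": [("office", 5)],
    "office":  [],
}

def shortest_time(start, goal):
    queue = [(0, start)]       # (total minutes so far, node)
    best = {start: 0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return None

print(shortest_time("home", "office"))  # 2 + 1 + 3 + 5 = 11 minutes
```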

Natural language processing is also successful at making generalizations about the sentiment or basic meaning of a sentence, but it is frequently tripped up by neologisms, slang or nuance. As language changes and evolves, the algorithms can adapt, but only with pointed retraining. They also start to fail when the challenges fall outside a large training set. 
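
A quick way to see both the usefulness and the brittleness is to run an off-the-shelf sentiment model; the sketch below uses NLTK’s VADER analyzer, and the sarcastic second sentence may well not score the way a human reader would.

```python
# Score the sentiment of two sentences with NLTK's rule-based VADER model.
import nltk

nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
sentences = [
    "The support team was quick and genuinely helpful.",
    "Oh great, another update that breaks everything.",  # sarcasm is hard to catch
]
for sentence in sentences:
    print(sentence, "->", analyzer.polarity_scores(sentence))
```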

Robotics and autonomous cars can be quite successful in limited areas or controlled spaces, but they run into trouble when new challenges or unexpected obstacles appear. For them, the political costs of failure can be significant, so developers are necessarily cautious about operating outside a well-tested envelope.

Indeed, determining whether an algorithm is a success or a failure often depends upon criteria that are politically determined. If the customers are happy enough with the responses, and if the results are predictable enough to be useful, then the algorithms succeed. As they become taken for granted, they lose the appellation of AI. 

If the term is generally applied to the topics and goals that are just out of reach, if AI is always redefined to exclude the simple, well-understood solutions, then AI will always be moving toward the technological horizon. It may not be 100% successful presently, but when applied in specific cases, it can be tantalizingly close. 

[Read more: The quest for explainable AI]
