Dataiku today expanded its effort to make AI accessible to the average business user with an update that makes it possible to run what-if simulations showing how changes to the underlying data will affect an AI model.
The goal is to make it easier for business analysts to experiment with AI models based on machine learning algorithms they can create with the help of a data scientist team, Dataiku CEO Florian Douetteau said.
As part of that effort, Dataiku 9 adds a Model Assertions tool that enables a subject matter expert to inject a known condition, or sanity check, into a model to prevent a certain outcome or conclusion from ever being reached.
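The announcement doesn't spell out how assertions are expressed, but the idea can be illustrated with a short, hypothetical scikit-learn sketch: a subject matter expert encodes a known condition and measures how often the trained model violates it. The data, labels, and `assert_known_condition` helper below are illustrative assumptions, not Dataiku's API.

```python
# Conceptual sketch of a model assertion (not Dataiku's actual API):
# a subject-matter expert encodes a known condition -- e.g. "applicants
# under 18 must never be approved" -- and the assertion reports how
# often the trained model violates it.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: column 0 is age, column 1 is income (in $1,000s).
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(16, 80, 500), rng.normal(60, 20, 500)])
y = (X[:, 1] > 55).astype(int)  # hypothetical approval label
model = LogisticRegression().fit(X, y)

def assert_known_condition(model, X, condition_mask, expected_label):
    """Check that rows matching a known condition get the expected prediction."""
    if not condition_mask.any():
        return 1.0
    preds = model.predict(X[condition_mask])
    return float(np.mean(preds == expected_label))

# Known condition: applicants under 18 should never be approved (label 0).
minors = X[:, 0] < 18
rate = assert_known_condition(model, X, minors, expected_label=0)
print(f"Assertion pass rate for minors: {rate:.0%}")
```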
There is also now a Visual ML Diagnostics tool that generates error messages when the platform determines a model is likely to fail, along with a Model Fairness Report tool that provides a dashboard through which companies can assess the bias or fairness of an AI model.
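The announcement doesn't detail which fairness metrics the report surfaces, but demographic parity is one common example of the kind of check such a dashboard computes. The snippet below is a hedged illustration of that metric on made-up predictions, not Dataiku's implementation.

```python
# Illustrative fairness check (demographic parity difference), not the
# metric set Dataiku's Model Fairness Report necessarily uses.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions and a binary sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
sensitive = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"Demographic parity difference: "
      f"{demographic_parity_difference(y_pred, sensitive):.2f}")
```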
Finally, there are also now Smart Pattern Builder and Fuzzy Joins capabilities that make it easier to work with more complex or even incomplete datasets without having to write code or manually clean and prep data.
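A fuzzy join matches rows whose keys are similar rather than identical, which is useful when the same entity is spelled inconsistently across datasets. The sketch below illustrates the general idea with Python's standard-library difflib; Dataiku exposes this as a no-code visual feature, and the sample records and threshold here are assumptions for illustration only.

```python
# Conceptual fuzzy join on company names using the standard library;
# Dataiku's Fuzzy Joins feature does this visually, without code.
from difflib import SequenceMatcher

customers = [{"name": "Acme Corp.", "region": "EMEA"},
             {"name": "Globex Inc", "region": "APAC"}]
invoices = [{"company": "ACME Corporation", "amount": 1200},
            {"company": "Globex, Inc.", "amount": 850}]

def similarity(a, b):
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.6  # assumption: tuned per dataset in practice
for cust in customers:
    # Join each customer to the best-matching invoice above the threshold.
    best = max(invoices, key=lambda inv: similarity(cust["name"], inv["company"]))
    if similarity(cust["name"], best["company"]) >= THRESHOLD:
        print(cust["name"], "->", best["company"], "amount:", best["amount"])
```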
Most organizations are investing in AI to make better data-driven decisions, which Douetteau says makes it essential for business analysts to create AI models without having to wait for a data science team to construct them. “Business users should be able to create AI, not just consume it,” he said.
That doesn’t necessarily mean an organization shouldn’t hire a data science team to address more complex challenges, but it does reduce the dependency an organization would otherwise have on a small team of specialists, Douetteau noted.
When it comes to AI, most organizations are employing the approach they typically used to make previous generations of analytics applications available to end users. A team of IT specialists would create a series of dashboards based on data marts that would be exposed to business users via a self-service portal. But that approach is insufficient when it comes to AI because the rate of change in the underlying data has now substantially increased, Douetteau said. Business analysts need to be able to dynamically create AI models that enable organizations to respond faster to rapidly changing business conditions.
Of course, there is no better example of the need for that capability than the COVID-19 pandemic. Most of the AI models organizations had in place prior to the arrival of the pandemic were rendered obsolete almost overnight. It has taken most of those organizations months to construct new AI models, but even now it’s difficult to make assumptions that are not subject to change as new data becomes available.
Unfortunately, the average data science team today is fortunate if it can get an AI model into a production environment within a few months. Businesses clearly need to get data science tools into the hands of end users to enable AI to live up to its full potential, Douetteau said. The challenge is ensuring there are enough checks and balances in place to make sure any AI model that is implemented is properly vetted. After all, the scale at which an AI model typically operates means that any error could represent a considerable risk for the business.
At this point, however, the proverbial AI genie is out of the bottle, regardless of the level of risk. Tumultuous economic times are pushing many business executives to accept higher levels of risk in an effort to either reduce costs or maximize revenues. But whatever outcomes AI eventually enables, the way business processes are constructed and maintained is about to change utterly.