
Artificial Intelligence: Monitoring is critical to successful AI


Companies often identify AI and ML performance issues after the damage has been done

Amit Paka is co-founder and chief product officer at Fiddler Labs, an explainable AI startup that enables enterprises to deploy and scale risk- and bias-free AI applications.


Krishna Gade is co-founder and CEO at Fiddler Labs, an explainable AI startup that enables enterprises to deploy and scale risk- and bias-free AI applications.

As the world becomes more deeply connected through IoT devices and networks, meeting consumer and business needs and expectations will soon only be possible through automation.

Recognizing this, artificial intelligence and machine learning are being rapidly adopted by critical industries such as finance, retail, healthcare, transportation and manufacturing to help them compete in an always-on and on-demand global culture. However, even as AI and ML provide endless benefits — such as increasing productivity while decreasing costs, reducing waste, improving efficiency and fostering innovation in outdated business models — there is tremendous potential for errors that result in unintended, biased results and, worse, abuse by bad actors.

The market for advanced technologies including AI and ML will continue its exponential growth, with market research firm IDC projecting that spending on AI systems will reach $98 billion in 2023, more than two and one-half times the $37.5 billion that was projected to be spent in 2019. Additionally, IDC foresees that retail and banking will drive much of this spending, as the industries invested more than $5 billion in 2019.

These findings underscore how important it is for companies that are leveraging, or plan to deploy, advanced technologies in their business operations to understand how and why those systems make certain decisions. Moreover, a fundamental understanding of how AI and ML operate is even more crucial for conducting proper oversight and minimizing the risk of undesired results.

Companies often discover AI and ML performance issues only after the damage has been done, and some of these failures have made headlines. Instances of AI driving unintentional bias include the Apple Card offering women lower credit limits and Google’s AI algorithm for monitoring hate speech on social media being racially biased against African Americans. There have been far worse examples of AI and ML being used to spread misinformation online through deepfakes, bots and more.

Through real-time monitoring, companies will be given visibility into the “black box” to see exactly how their AI and ML models operate. In other words, explainability will enable data scientists and engineers to know what to look for (a.k.a. transparency) so they can make the right decisions (a.k.a. insight) to improve their models and reduce potential risks (a.k.a. building trust).
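In practice, real-time monitoring often starts with simple distribution checks that flag when production traffic drifts away from the data a model was trained on. As an illustration only (not a description of any particular vendor's product), here is a minimal sketch of the Population Stability Index (PSI), a widely used drift metric; the function name, thresholds and synthetic data are assumptions for the example.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training-time) sample and a live
    (production) sample of one feature. Common rules of thumb:
    PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    # Bin edges come from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins so the log term is defined
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values seen at training time
stable = rng.normal(0.0, 1.0, 10_000)    # production traffic, same distribution
drifted = rng.normal(1.0, 1.0, 10_000)   # production traffic after a shift

print(population_stability_index(baseline, stable))   # small: no alert
print(population_stability_index(baseline, drifted))  # large: raise an alert
```

A monitoring pipeline would compute a score like this per feature on each batch of production data and alert engineers when it crosses a threshold, giving them a reason to inspect the model before the damage is done.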



