Explainable AI can lead to more accurate results

Sid Bhatia, Regional Director – Middle East, Dataiku.

As far back as 2017, analyst firms like Deloitte and PwC's Strategy& were chronicling GCC governments' digital transformation programmes. The region is pinning its economic hopes on artificial intelligence (AI) and associated technologies. If AI is to be our future partner in innovation, we must trust it. And to trust it, we must be candid about its inner workings.

Yet enterprises from the Levant to North Africa are already willing to let advanced algorithms make decisions on their behalf, often without that candour. So-called black-box AI can lead to bad decisions, with little post-mortem capability to help stakeholders determine points of failure. To generate trust in such systems, we must expose the path between data and actionable information, or indeed action itself in fully automated architectures.

Middle East enterprises are subject to growing regulatory burdens. The regional FSI sector, hungry for growth opportunities, could face serious problems if regulators cannot question decisions made by black-box AI. The reasoning behind denied loans, varying credit limits and even fees must be explainable.
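
As a rough illustration of what that explainability can look like in practice, the Python sketch below (using scikit-learn, with entirely hypothetical applicant features and figures) shows a white-box credit model unpacking a single decision into per-feature contributions:

    # A minimal sketch, not a production credit model: a linear classifier
    # whose per-feature contributions make each decision explainable.
    # All feature names and figures here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "missed_payments"]
    X = np.array([[60, 0.2, 0], [35, 0.6, 3], [80, 0.1, 0], [30, 0.7, 4]])
    y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

    model = LogisticRegression().fit(X, y)

    applicant = np.array([32, 0.65, 2])
    verdict = "approved" if model.predict([applicant])[0] else "denied"

    # Each contribution is weight * value: the exact term that feature
    # adds to this applicant's score, so the decision can be unpacked.
    contributions = model.coef_[0] * applicant
    print(verdict)
    for name, c in zip(features, contributions):
        print(f"  {name}: {c:+.2f}")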

Another industry in growth mode, and similarly subject to scrutiny by Gulf regulators, is healthcare. Medical providers across the region have already begun to weave AI into their strategies, and it is not hard to see why transparency will be important in establishing smart tech as a mainstay of medical care. Human analysis of findings is essential for accuracy; indeed, many machine-learning models require human-expert feedback to fine-tune their accuracy and become viable in production environments.
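
One common pattern for that feedback loop, sketched below in Python with assumed names and thresholds, is to route any prediction the model is unsure of to a human expert before it is acted on:

    # An illustrative human-in-the-loop gate, with hypothetical names and
    # thresholds: low-confidence predictions are deferred to an expert
    # instead of being acted on automatically, and the expert's verdict
    # can later be fed back as a new training label.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.0], [0.1], [0.9], [1.0]])   # toy measurements
    y = np.array([0, 0, 1, 1])                   # toy diagnoses
    model = LogisticRegression(C=100).fit(X, y)  # weak regularisation so
                                                 # the toy model can be confident

    REVIEW_THRESHOLD = 0.8  # assumed cut-off, tuned per deployment
    review_queue = []

    def triage(case):
        confidence = model.predict_proba([case])[0].max()
        if confidence < REVIEW_THRESHOLD:
            review_queue.append(case)  # queued for human-expert review
            return "needs human review"
        return int(model.predict([case])[0])

    print(triage([0.5]))   # borderline case, deferred to an expert
    print(triage([0.95]))  # clear-cut case, handled automatically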

Think of the black box’s opposite as explainable AI or white-box AI. If we can answer questions as to how an AI system reached its conclusions, we can drive vital debate on the direction some technologies are taking and how those paths can be redirected towards more positive, trusted outcomes.
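
What answering such questions can look like, in deliberately simple form: the sketch below (Python with scikit-learn, on stand-in data) trains a small decision tree and prints the rules it learned, so every conclusion can be traced to explicit conditions:

    # A white-box model whose complete decision logic can be printed and
    # audited; the data and feature names below are stand-ins.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[25, 1], [40, 0], [35, 1], [50, 0], [23, 0], [45, 1]]
    y = [0, 1, 1, 1, 0, 1]
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # export_text renders the learned rules as nested conditions: the
    # exact path from inputs to each conclusion.
    print(export_text(tree, feature_names=["age", "has_history"]))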

The responsible use of AI leads to good business for private enterprise. It leads to more desirable social impact for governments. The ability to deliver noticeable value, untainted by error, prejudice or other negative elements, is surely the goal of AI.

Given the suspicion that automation faces globally for its potential to supplant human workforces, it is hard to imagine a surge in AI adoption that will not be accompanied by an intensification in regulatory requirements. Under such circumstances, black-box systems will wither on the vine.

Exposing the algorithm as part of the results dashboard is a natural next step for AI solutions. Metrics such as weights, the numeric values applied to each input to denote its relative importance, should be on full display. Mathematical models are improving all the time, and user interfaces continue to evolve to give end-users a broader view of how data is processed.
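
As an illustrative sketch of what such a display might contain (Python, with placeholder feature names), a dashboard could simply rank a linear model's learned weights by magnitude:

    # A sketch of putting weights on full display: rank a linear model's
    # learned weights by magnitude for a results dashboard. The model and
    # feature names are placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    feature_names = ["tenure", "balance", "late_payments"]  # hypothetical
    X = np.array([[24, 5.0, 0], [3, 1.2, 4], [36, 7.5, 1], [6, 0.8, 5]])
    y = np.array([1, 0, 1, 0])

    # Standardise inputs first so weight magnitudes are comparable.
    model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

    ranked = sorted(zip(feature_names, model.coef_[0]),
                    key=lambda item: abs(item[1]), reverse=True)
    for name, weight in ranked:
        print(f"{name:>15}: {weight:+.3f}")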

Collaboration between enterprises will also be vital. Open data platforms that allow the honing of models based on the experience and information-gathering of different contributors will lead to greater democratisation and more accurate results.


Key Takeaways

  • The responsible use of AI leads to good business for private enterprise.
  • Exposing the algorithm as part of the results dashboard is a natural next step for AI solutions.
  • Medical providers across the region have already begun to weave AI into their strategies.
  • The ability to deliver noticeable value, untainted by error, prejudice or other negative elements, is surely the goal of AI.

Sid Bhatia of Dataiku explains why explainable artificial intelligence is crucial for sustainable adoption.