

Posted by Alex Adamis

How Transparency in AI Reporting Builds Long-Term Trust

Any relationship expert will tell you that the secret to a long-lasting relationship is trust. Without it, the two parties simply can’t function effectively. And how does one build trust? Through open and direct communication. From a professional perspective, to earn an employer’s trust, an employee must admit mistakes honestly and learn from them. It is no different when employing an AI to scale human expertise.

If our relationships with our AI tools are to drive better decision making over the long term, we must trust the reasoning behind the insights the AI offers. Without attribution of output to reason, humans have no way to benchmark and validate whether the AI truly understands the subject matter at hand. When an AI produces an incorrect output, the human will not know exactly why the AI got confused. As a result, the human can neither trust the AI’s ability to learn from its mistake nor offer it constructive feedback. Consequently, the human will lose faith in the AI.

This brings us to a fundamental problem inherent in today’s AI tools: a lack of transparency, or what we term the ‘Black Box’ approach. We believe most black box AIs do not provide reproducible research because their classification and predictive accuracy is low. Accuracy tends to be low in such platforms because these tools try to ‘boil the ocean’ and solve very general problems. However, there is no such thing as ‘general artificial intelligence’. Opaquely producing “AI” outputs that cannot classify language according to specific concepts faster than human experts will not engender trust with portfolio managers, bankers, traders, and analysts, nor will it help them make better decisions and generate alpha in the digital age. Rather, such black box tools will continue to fuel the hype around “AI”.

Transparency builds trust. Yet most marketed AI operates behind a set of tinted windows, because transparency would reveal that these tools produce low semantic accuracy, no predictive insights, and no alpha.

Talk to our team and learn how Accrete.AI builds trust using the 'Glass Box'.

Accrete believes in showing, step by step, how outcomes were derived, and has developed a “Glass Box” approach to do exactly that. The Glass Box doesn’t just deliver actionable insights to end users; it also attributes outputs to reason, so that users can semantically validate output and predictive accuracy and build long-term trust as the system grows smarter and more sophisticated. We are confident in sharing the reasoning behind our outputs and in publishing reproducible research because our version of AI yields extraordinarily high semantic accuracy and generates significant alpha.
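To make the idea concrete, here is a minimal sketch, in Python, of what an attributed output could look like as a data structure. The class and field names below are hypothetical illustrations for this post, not Accrete’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Evidence:
    """One piece of the reasoning behind a prediction."""
    snippet: str         # linguistic snippet the model flagged
    timestamp: datetime  # when the chatter was detected
    source_url: str      # where the language appeared

@dataclass
class GlassBoxPrediction:
    """A quantitative output bundled with the evidence that produced it."""
    ticker: str
    probability: float   # e.g. 0.77
    target_low: float    # bottom of the projected price range
    target_high: float   # top of the projected price range
    horizon_days: int    # e.g. 10
    evidence: list = field(default_factory=list)  # Evidence records
```

The point of a structure like this is that the number never travels alone: every prediction carries its own evidence trail, so an analyst can inspect the snippets and sources before acting on it.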

For example, in June 2018, Campbell Soup (CPB) stock soared on news of a takeover by Kraft Heinz (KHC). Six months prior, Accrete’s Rumor Hound had detected initial takeover chatter for $CPB. Rumor Hound continued to pick up increased chatter beginning on May 21, 2018, right up until the takeover was publicly announced. Based on that chatter, Rumor Hound predicted with 77% probability that $CPB’s stock price would rise into a target range within a 10-day span. Those conditions were met, and Rumor Hound’s projection was validated.

Accrete doesn’t simply deliver that quantitative output (a 77% probability of CPB’s stock price moving into a given range); we also provide the relevant rumors, with linguistic snippets, time stamps, and source URLs, that generated the output:

[Screenshot: CPB rumors within the Rumor Hound product]

[Chart: CPB stock price, highlighting when we identified the rumor and when the acquisition news broke]

End users can examine the data themselves and validate the AI’s findings, checking whether the detected rumor language actually pertained to M&A and whether the sources carrying the rumor were credible enough to move the stock. Users can even request a data dump to build their own proprietary predictive models.
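As a rough illustration of that validation step, the sketch below checks whether a projected price range was actually hit within the stated window. The data layout (a date-indexed DataFrame with a 'high' column) is an assumption of this example, not a description of Accrete’s data dump format:

```python
import pandas as pd

def projection_met(prices: pd.DataFrame, start: str,
                   target_low: float, target_high: float,
                   horizon_days: int = 10) -> bool:
    """Check whether the daily high entered [target_low, target_high]
    within `horizon_days` trading days of `start`.

    Assumes `prices` is indexed by date and has a 'high' column;
    both are hypothetical conventions for this sketch.
    """
    window = prices.loc[start:].head(horizon_days)
    in_range = (window["high"] >= target_low) & (window["high"] <= target_high)
    return bool(in_range.any())
```

A check like this is the Glass Box in miniature: the user does not have to take the 77% figure on faith, because the inputs and the resolution criteria are both open to inspection.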

Welcome to The Glass Box.

The power of attribution lies in the fact that a person’s response to a task depends on his or her ability to connect cause and effect within that task. Accrete’s Glass Box approach lets users do exactly that. Our system provides transparency, which in turn engenders trust, establishing the foundation of a long-lasting relationship.

To learn more about The Glass Box, visit us at www.accrete.ai

Nasdaq Interview with Accrete


Topics: Artificial Intelligence