Trust in an AI Interconnected World

Scott David
Jul 22, 2024

Trust is an emotional state and belief held by human beings, built upon a sense of reliability and predictability regarding future interactions. The concept of trust is broadly applied to relationships among people, or between people and organizations. The concept of a legal trust extends and formalizes the reliability of future interactions, creating legally enforceable fiduciary obligations that elevate subjective emotions and beliefs into something trustworthy, objective, reliable, and actionable for future relationships.

The concept of trust is not, however, usually applied to relationships AMONG organizations. It seems naive to assert that one company (or any other purely legal person) trusts another. Between organizations, reliability and predictability are more typically characterized as risk rather than trust. Organizations have developed myriad metrics for assessing risk across business, operating, legal, technical, and social (BOLTS) domains as surrogate signals in the absence of human trust.

Organizations do not have qualia, emotions, or beliefs, and therefore cannot be said to trust anything. The relationship is not symmetric, however: humans can trust organizations, and that trust is the source of brand loyalty (toward companies), patriotism (toward countries), and the like.

Trust is built on consistency of behaviors through time and space, and is encoded in signals associated with those consistent behaviors. With the advent of networked interaction and information systems (the Internet), the signals and behaviors upon which trust is built became mediated by multiple unseen layers, attenuating trust. The caption of a well-known cartoon from the early Internet years, “On the Internet, nobody knows you’re a dog,” speaks to the challenges of trust in such unfamiliar, intangible domains.

With the advent of myriad systems and applications of so-called Artificial Intelligence, the signals, behaviors, and interactions online are rendered even more remote and unfamiliar, which further challenges human trust. How can we trust interactions with AI (and mediated by AI) if we don’t even know what to expect of it? The AI black box problem is not confined to internal AI processing steps; it is also present in online interactions (and the information associated with such interactions) where AI is involved. The advent of “agentic” AI systems, i.e., multi-step and AI-to-AI peer interactions, will create vast interaction complexities that will, in effect, enclose all online interactions in a black box, rendering control stochastic at best and illusory at worst.

In fact, the concept of trust with AI is a trap. AI is a form of computational intelligence that processes human (mostly English) text purely computationally. AI does not, at present, have any sense or understanding of the content or concepts that it is processing. Putting aside the remarkable phenomenon that computational intelligence can produce outputs reflecting content and style familiar to humans, AI merely processes; it does not understand.

In fact, AI’s ability to computationally derive such subtle, textured, and human-like patterns from text alone supports the notion that a significant portion of human cognition (thinking) takes place in language (and material culture) itself, and not in the wet-ware organ of the brain. The brain is just tuned to the mind that actually resides in language. From this perspective, AI is a computational mind reader when it processes human text. That possibility is at once creepy and beautiful, for it suggests a future hybridization of humans and AI systems into virtual chimera, through a process most closely associated with symbiogenesis (from which eukaryotic mitochondria and chloroplasts are derived), but in an intangible form that might be called “sym-info-genesis.”

To put a finer point on this, human survival depended in part on overfitting perceptions of risk vectors in the environment. Individual humans who perceived a lion hiding in the tall grass tended to live longer than humans who did not perceive the lion, even if the lion was not always there. This quality of pareidolia (perceiving meaningful patterns in the environment, even where none exist) is responsible for humans perceiving AI output as presenting readable text, capturing author styles, and the like.

It is frequently said that AI hallucinates. It is, in fact, humans who hallucinate meaning into AI outputs, not just the AI systems themselves. Human perception of meaning and content in AI system outputs is akin to predators (mis)perceiving the eyespots on moth wings as the eyes of a much larger animal. It is hallucination prompted by mimicry, which in turn is prompted by evolution.

In the case of AI, the source of the mimicry is not evolution but computation AND human preference in selecting more useful forms of output. Of course, when humans marvel at the efficacy and beauty of AI outputs, they are also creating a (virtual) fitness landscape and selecting for those systems that are best adapted for survival in that (human interaction/information) landscape. We were not around when fish first crawled onto land, but humans are privileged to be able to watch the evolution of a new (non-physical) living form as AI evolves its way into the human trusted-interaction landscape.

What are the implications for trust in the future internet, where computational intelligence, i.e., AI, can mimic all sorts of content in ways that can benefit or harm interacting parties? At cyberevolution in Frankfurt this December, we will explore the implications of these and related phenomena, with the intention of understanding the dynamics at the crossroads of trust and risk. We hope that you can join us.


Scott is an active member of the Accountability Expert Group of the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (IEEE ECPAIS) and serves on the advisory board of the Active Inference Institute.  Prior to joining the University of Washington, Scott worked as an attorney for 30 years focused on counseling commercial and governmental entities worldwide in the structures and transactions of technology and business networks with an emphasis on online commerce, data security, privacy, digital risk, standard setting, and emerging intangibles value propositions. Scott was a partner at K&L Gates (formerly Preston, Gates & Ellis) from 1992 to 2012. Prior to joining K&L Gates, he was an associate at Simpson Thacher & Bartlett in New York.