The explosion of generative artificial intelligence and its intricate relationship with our natural intelligence is not entirely positive. It comes with challenges, some of which are only slowly coming to the surface. The myriad dynamic interactions that characterize our hybrid existence operate across the four fundamental dimensions of human life: our aspirations, emotions, thoughts, and sensations/behavior. Left unchecked, this entanglement might lead to a vicious descent into cognitive and experiential distortion. Understanding the potential for negative feedback loops is important if we are to protect ourselves. Only if we are acutely aware of the risks and potential rewards that AI holds for our natural intelligence (NI) can we deliberately shape the hybrid path as we walk it. The time to cultivate HI, or hybrid intelligence, is now. Why?
The 4 Dimensions of Natural Intelligence
Human existence is a composition of four interwoven dimensions, and each matters for a complete experience. Our aspirations give shape to our future, guiding our decisions with a sense of purpose, a key aspect of human motivation and development. Our emotions connect us to others, our thoughts make sense of the world, and our sensations and behavior anchor us in the present.
These dimensions are not discrete but deeply interconnected, forming the holistic fabric of human consciousness and interaction. Given AI’s influence on everything from data and dining propositions to decision-making and dating, its impact resonates across the core facets of being human in a hybrid world.
At the heart of this new and complex relationship is a feedback loop: AI, developed by humans and trained on human-generated data, in turn influences human behavior, cognition, and internal states.
Aspirations: The Risk of Diminished Ambition
Over-reliance on automated solutions discourages effortful pursuits. Even more concerning, AI systems can be (and already are) deliberately designed to manipulate our (subconscious) desires in the service of commercial or political interests. If the ease of AI-driven task completion leads to a decline in the pursuit of goals that require genuine engagement, cognitive atrophy ensues. As a consequence, not only does our personal radar shrink, but our collective aspirations may also diminish. Additionally, AI’s potential to stifle creativity by merely recycling existing knowledge endangers the pursuit of novel aspirations. There is also the risk that past biases become embedded in future AI, replicating and amplifying discriminatory patterns that erode human rights and human freedom.
Emotions: Erosion of Connection
A vicious cycle can arise if excessive reliance on AI for emotional connection diminishes our desire for raw but genuine human interaction. Emotional dependency on AI companionship can emerge when users turn to chatbots or virtual friends for support, comfort, or even a sense of intimacy at the expense of real-world relationships. In a survey of 404 regular users of AI companionship apps, researchers at MIT Media Lab found that 12% of respondents initially sought out these systems to cope with loneliness, and 14% used them to discuss personal or mental health issues. While many reported that empathetic, validating responses can feel helpful, a notable subset experienced growing dependency: some users said their Replika interactions reduced their motivation to seek out human connection, risking a vicious cycle of isolation and ever-deeper reliance on the AI as their interpersonal networks weakened.
A more controlled look comes from a four-week randomized trial that examined how different modes of interaction (text vs. voice) and conversation types (open-ended, personal, non-personal) influence psychosocial outcomes. Although voice-based chatbots initially alleviated loneliness, heavy usage across all formats ultimately correlated with higher emotional dependence, greater loneliness, and reduced socialization with real people. Interestingly, non-personal conversation topics among heavy users were linked to the steepest rise in dependency, underscoring that even small talk can spiral into unhealthy attachment when used excessively. Users may develop pseudo-intimate relationships with AI assistants, which can initially provide comfort but may ultimately lead to deeper loneliness as these relationships prove inherently deceptive. Beyond that, if AI is employed to manipulate emotions, it can erode trust not just in artificial counterparts but in communication overall.
Thoughts: The Peril of Cognitive Offloading and Distorted Reality
The realm of thought is particularly susceptible to the negative feedback loops generated by generative AI. One of the biggest risks lies in the temptation of cognitive offloading, where individuals delegate complex thinking tasks to AI. This can lead to a decline in critical thinking skills and an over-reliance on AI outputs, even when those outputs are flawed or biased. The ease of generating content with AI may also stifle true creativity if it leads to the recycling of existing patterns rather than the generation of genuinely novel ideas.
Another concerning aspect of AI’s influence on our freedom of thought is its capacity to distort memory and perception. Exposure to AI-edited images and videos can implant false memories, causing individuals to confidently recall events or details that never occurred. The false-memory effect is especially strong with AI-generated videos, which compound the already concerning risk posed by AI-edited images. AI-altered realities can directly corrupt our personal histories and understanding of events, making it ever more difficult to distinguish between authentic and synthetic experiences. While creating fully false memories may be harder than some earlier studies suggested, AI’s ability to introduce specific false details into existing memories is a significant concern. This has dramatic implications for legal proceedings, the spread of misinformation, and our ability to trust our own recollections.
Sensations: Exploitation and Dependency
AI influences our experiences and the behavior that follows from them. When AI-driven systems are designed to exploit our sensory and behavioral vulnerabilities, the result can be filter bubbles that limit exposure to diverse perspectives and an erosion of privacy through constant surveillance and data collection. The convenience offered by AI is a slippery slope that can take us from experimentation to integration to reliance and, finally, to addiction to our cognitive crutches. Agency decay manifests across the aspirational, emotional, and intellectual realms, directly shaping our behavior and perpetuating the spiral that drives us ever deeper into artificial dependency.
An A-Framed Way to Counteract Cognitive Descent
Mindfully managing our digital consumption is essential if we are to steer our own course rather than being steered by our always-available, friendly, and sycophantic assistants. Based on the four dimensions, the following four actions can help in the endeavor of hybrid autonomy.
Awareness: Be conscious of how AI-driven platforms and content feeds are designed to capture attention and influence behavior.
Appreciation: Regularly evaluate time spent on digital media and how that makes you feel.
Acceptance: Recognize the influence that AI has on your mind, and actively seek perspectives that challenge both your own views and the friendly feedback of your bot.
Accountability: Configure your privacy settings to protect your personal data, which fuels the algorithms that impact your mind. Be aware that, ultimately, the outcomes of your use of AI are your responsibility.
The interplay across these dimensions forms interconnected cycles. Understanding the potential for these vicious cycles across our multidimensional being is the first step to escaping them; adopting proactive strategies to safeguard our perception and cognitive autonomy is the second. Getting a grip on even one of these dimensions can help us build a firm stance amid the cognitive quicksand we are navigating. Whatever comes next depends on the choices we make, today and every hybrid day that follows.
