Swizzel AI - You, powered by synthetic minds
Synthetic minds are a portfolio of “minds” designed to encode and enact ways of thinking, mind flows, thought expressions, and dialectical thinking.
A new world is emerging
A new world is emerging. A world infused with scalable intelligence and agent-centric interactions (human-agent and agent-agent). We see an opportunity to position ourselves at the center of this new world in order to realize the benefits of scalable intelligence and agent-centric interactions as an extension of our own cognitive capabilities. This is our north star and our contribution toward human flourishing within this emerging world.
Cognitive science is our inspiration and guiding framework. We take lessons from cultural ratcheting, a powerful mechanism that builds collective knowledge and cognitive repertoires, so we can engineer comparable interactions between synthetic minds. We understand that the space of possible minds is vast, but with limitations that can be defined as cognitive boundaries within the cognitive light cone framework. We realize that the principles of cognitive offloading and re-internalization can be leveraged to expand human intelligence. From these perspectives we are able to design and build powerful synthetic minds that encode and enact ways of thinking, mind flows, thought expression, dialectical thinking and mindstorms.
This is an inflection point. We should expect the emergence of more and more powerful agentic AI systems and scalable intelligence at an accelerating rate. Our engineering philosophy is, therefore, to relentlessly build the cognitive scaffolding and cognitive tools that by design progressively accelerate our development efforts. From semi- to fully-autonomous mind factories and expression labs, we seek to build the instruments of our own compounding innovation. By applying metacognition on top of thought vectors, mind flows and mindstorms, we create a scalable engine of exploration into the space of all possible minds, and the continued refinement of thought processes and thought exchange.
At every level we are a cognitive science and AI agent company. AI agents are not only our product but also our collaborators. We strongly believe that AI agents are fast becoming ubiquitous, and that this necessitates an agent-first mentality. We should continuously explore and deploy agents to achieve our vision and mission.
We are active creators of the emerging world. Our contributions will have lasting impact. We should be bold in our explorations, true to our convictions and steadfast in our pursuit to expand our own cognitive boundaries and advance human flourishing.
Scientific Background
Why it Matters: Cognitive science provides a guiding framework.
Cultural Ratcheting and Open-Ended Systems
Cultural accumulation is the process by which knowledge, skills, and technologies are passed on and improved upon across generations. This gradual accumulation of cultural knowledge and practices is a key factor in the long-term progress and adaptability of human societies. It builds an expanding body of knowledge and skills by combining individual exploration with inter-generational information transmission. The main mechanisms underpinning cultural accumulation are social learning, which involves the transmission of information from one individual to another, and independent discovery, where individuals develop new knowledge or skills on their own. The balance between these two mechanisms is crucial for the effectiveness of cultural accumulation.
Cumulative culture (Morgan & Feldman, 2024), defined as the process of modifying and retaining socially transmitted cultural traits, plays a central role in shaping the trajectory of cultural evolution. Unlike noncumulative cultural evolution, which lacks this retention mechanism, cumulative culture enables the formation of traditions with traceable histories, regardless of whether these modifications result in adaptive, neutral, or maladaptive outcomes. This distinction is crucial for understanding the diverse pathways of cultural change and the emergence of complexity in cultural systems.
Cultural accumulation is closely related to the concept of cultural ratcheting, which describes the unidirectional and cumulative nature of cultural evolution. Like a ratchet that can only turn one way, cultural knowledge and practices tend to build upon previous achievements, preventing slippage back to earlier stages. This ratcheting effect is driven by cultural accumulation, as each generation inherits the accumulated knowledge of its predecessors and uses it as a foundation for further innovation and improvement.
Both cultural accumulation and cultural ratcheting are fundamental to the concept of open-ended systems, which are systems capable of ongoing innovation and improvement without a fixed limit. In artificial intelligence, cultural accumulation and ratcheting can be used to design agentic systems that can learn and adapt to new situations by building upon the knowledge of previous generations. This can lead to the development of more robust and general-purpose AI / agentic systems that can solve a wider range of problems (Clune 2019).
Borg et al. (2024) argue that cultural evolution, like genetic evolution, is an open-ended evolutionary system. They highlight the unique characteristics of cultural evolution, such as its ability to evolve open-endedly and its transitions from bounded to unbounded evolution.
The concept of evolved open-endedness challenges the notion that open-endedness is an inherent property of evolutionary systems. Instead, it posits that the conditions for open-endedness are gradually acquired through an evolutionary process, as evidenced by the transition from bounded to unbounded cultural evolution in the hominin lineage. This transition was facilitated by a multifaceted interplay of biological adaptations, cultural innovations, and feedback loops between genetic and cultural inheritance systems. Human cultural evolution, characterized by its unbounded and cumulative nature, provides a compelling example of evolved open-endedness, highlighting how specific adaptations and feedback loops paved the way for the open-ended nature of human culture.
In Hughes et al. (2024), the authors argue that open-endedness is essential for achieving Artificial Superhuman Intelligence (ASI). The authors define open-endedness formally, through the lens of novelty and learnability. The definition hinges on the system's ability to continuously generate artifacts that are both novel and learnable to an observer.
The authors suggest four overlapping paths toward achieving open-endedness with foundation models: 1) Reinforcement learning in which agents shape their experiences to accumulate rewards and learn how to increase expected rewards in the future. 2) Self-improvement in which the model generates new knowledge instead of only consuming it. 3) Task generation in which the difficulty of tasks is adapted to the agent's capability so that they remain challenging but learnable. 4) Evolutionary methods in which LLMs act as selection and mutation operators.
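The novelty-and-learnability criterion above can be illustrated with a toy sketch. This is our own minimal illustration, not the formalism from Hughes et al. (2024): all function names and the distance-based learnability proxy are assumptions chosen for clarity.

```python
# Toy sketch of an open-ended generation loop: an artifact is retained only
# if it is novel (not predictable from history) AND learnable (close enough
# to past artifacts that an observer could model it). The numeric proxy for
# learnability is an illustrative assumption, not the paper's definition.

def is_novel(artifact, history):
    """Novel: the observer has not seen it and could not trivially predict it."""
    return artifact not in history

def is_learnable(artifact, history):
    """Learnable (toy proxy): a small step beyond the existing frontier."""
    return not history or min(abs(artifact - h) for h in history) <= 2

def open_ended_stream(generator, steps):
    history = []
    for _ in range(steps):
        artifact = generator(history)
        if is_novel(artifact, history) and is_learnable(artifact, history):
            history.append(artifact)  # ratchet: retained as a base for future building
    return history

# A generator that always advances one step past the current frontier.
frontier = lambda history: (max(history) + 1) if history else 0
print(open_ended_stream(frontier, 5))  # [0, 1, 2, 3, 4]
```

The retained history is what gives the loop its ratchet-like character: each accepted artifact widens what counts as learnable next.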
Indeed, Cook et al. (2024) point out that although cultural accumulation is widespread among humans, it remains under-explored for artificial learning agents. They examine how social learning and exploration can be balanced to achieve cultural accumulation in reinforcement learning (RL), presenting two formulations: in-context accumulation, which operates over fast adaptation to new environments, and in-weights accumulation, which operates over the slower process of updating weights. They show that both in-context and in-weights models sustain generational performance gains on several tasks requiring exploration under partial observability. For each task, the accumulating agents outperform those that learn for a single lifetime with the same total experience budget.
Similarly, Bhoopchand et al. (2023) explored the concept of cultural transmission, which is the ability of agents to learn from each other in real-time with high recall and fidelity. The authors develop a method for generating cultural transmission in artificially intelligent (AI) agents in the form of few-shot imitation. Their AI agents are able to successfully perform real-time imitation of a human in novel contexts without using any pre-collected human data. The authors identify a simple set of ingredients sufficient for generating cultural transmission and develop an evaluation method for rigorously assessing it.
Cognitive Light Cone
The concept of "cognitive lightcones" offers a powerful framework for understanding the limits and scope of cognition. Borrowed from Einstein's theory of relativity, a cognitive lightcone represents the spatiotemporal boundaries within which an agent, be it a biological organism or an artificial intelligence, can perceive, process, and act upon information. This metaphorical cone delineates the agent's cognitive horizon, encompassing all that it can know and influence, while anything beyond remains inaccessible, shrouded in cognitive darkness. The extent of this cone is determined by a computational boundary, a limit imposed by the agent's sensory apparatus, cognitive architecture, and available tools or technologies.
Intriguingly, the principle of scale-free cognition suggests that these cognitive lightcones exist at various levels of organization, from the microcosm of individual cells to the macrocosm of entire societies. Each level operates within its own unique cognitive lightcone, constrained by its specific capabilities and limitations. For instance, a single neuron possesses a limited cognitive lightcone, dictated by its connections and processing power, while a human brain, with its vast neural network, commands a far broader cone, capable of complex thought and abstract reasoning. Similarly, social groups, by leveraging collective intelligence and shared knowledge, can further expand their cognitive reach, tackling challenges beyond the grasp of any individual.
Expanding our cognitive lightcones is a continuous endeavor that involves pushing the boundaries of our knowledge and experience. Continuous learning and skill acquisition are crucial in this pursuit. By mastering new languages, technologies, or fields of expertise, we unlock novel ways of thinking and perceiving the world. Tools and technologies serve as powerful extensions of our cognitive abilities, enabling us to explore realms beyond our natural senses, from the microscopic intricacies of cells to the vast expanse of the cosmos. Collaboration and collective intelligence amplify our cognitive reach, allowing us to pool diverse knowledge and tackle complex problems through shared insights. By actively engaging in these strategies, we can stretch the boundaries of our cognitive lightcones, empowering us to better comprehend and navigate the complexities of the world around us.
Cognitive Boundaries
Cognitive boundaries encompass the limits of our individual and collective mental abilities. These boundaries are shaped by our finite knowledge, inherent cognitive biases, and the constraints of our processing capacity. While we possess remarkable abilities to think, reason, and solve problems, our understanding of the world is always incomplete. This inherent limitation influences our decision-making, problem-solving, and creativity.
Knowledge specialization further delineates cognitive boundaries. Experts in fields like engineering, economics, or medicine develop deep knowledge within their domains, but this specialization can lead to "tunnel vision." Their expertise may come at the cost of a broader understanding of interconnected systems and interdisciplinary challenges. For instance, an engineer focused on optimizing a specific technology might overlook its broader social or environmental implications. Recognizing the boundaries of specialized knowledge encourages collaboration and interdisciplinary approaches to problem-solving.
Temporal boundaries also restrict our cognitive reach. We are anchored in the present, with limited access to both the past and the future. Our understanding of history is shaped by incomplete records and subjective interpretations, while our predictions about the future are often clouded by biases and uncertainties. This temporal boundedness limits our ability to fully comprehend long-term trends, anticipate unintended consequences, and make informed decisions with lasting impact.
Recognizing these limitations is crucial for fostering intellectual humility, encouraging collaboration, and appreciating the complexity of the world around us. By acknowledging our cognitive boundaries, we can strive for more comprehensive understanding, more effective decision-making, and more responsible actions with both present and future implications.
To cope with these limitations and enhance cognitive function, humans have developed strategies to lessen the burden on their internal cognitive resources. One such strategy is cognitive offloading, which involves the use of physical actions or external tools to alter the information processing requirements of a task and reduce cognitive demand.
Cognitive Offloading and Extended Mind
Cognitive offloading is a general intellectual strategy that involves outsourcing cognitive tasks to technological tools or other human agents. Many current technologies are used as cognitive offloading devices: we offload planning processes to our calendars, navigation capacities to our GPS systems, mathematical operations to our calculators, and memory storage to our computers and notebooks.
Cognitive offloading can be defined as the use of physical action to alter the information processing requirements of a task, thereby reducing cognitive demand. This can involve using external tools, such as calculators, smartphones, or even simple notepads, to perform tasks that would otherwise require mental effort. The concept of cognitive offloading is closely related to the extended mind thesis, which will be discussed in the next section.
Cognitive offloading can manifest in various ways in everyday life. For instance, individuals might use a calendar to remember appointments, a to-do list to keep track of tasks, or a map to navigate. These tools allow individuals to offload cognitive processes onto external resources, freeing up mental resources for other tasks. Cognitive offloading can also involve using physical actions to reduce cognitive demand. For example, someone might tilt their head to normalize the orientation of a rotated stimulus to see the picture in the correct orientation. This is known as external normalization. Another example is prospective memory, which is the ability to remember to perform delayed intentions. By setting reminders or using external cues, individuals can offload the cognitive effort of remembering to perform these intentions in the future.
The decision to offload cognitive processes often depends on a cost-benefit evaluation of internal processing versus externalization. When the costs of externalization are high, individuals are more likely to rely on internal strategies, such as memory-based processing. Conversely, when the costs of externalization are low, individuals are more likely to use external tools. This suggests that cognitive offloading is not always the optimal strategy and that individuals need to weigh the potential benefits and drawbacks before relying on external tools.
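The cost-benefit account above can be sketched as a simple decision rule. This is a minimal illustration under our own assumptions: the effort-plus-expected-error-penalty cost model and all parameter names are hypothetical, not drawn from a specific study.

```python
# Toy sketch of the offloading decision: externalize a task only when the
# expected cost of using the external tool (effort + expected error penalty)
# is lower than the expected cost of internal processing. The cost model
# is an illustrative assumption.

def should_offload(internal_effort, internal_error_rate,
                   external_effort, external_error_rate,
                   error_penalty=10.0):
    """Compare expected total costs of internal vs. external processing."""
    internal_cost = internal_effort + internal_error_rate * error_penalty
    external_cost = external_effort + external_error_rate * error_penalty
    return external_cost < internal_cost

# Remembering a 10-digit number: error-prone internally, cheap to write down.
print(should_offload(internal_effort=1.0, internal_error_rate=0.4,
                     external_effort=2.0, external_error_rate=0.01))  # True
```

When the external action is costly (say, the notebook is across the room), the same rule flips toward memory-based processing, matching the trade-off described above.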
The extended mind thesis challenges the traditional view that the mind is confined to the brain. According to this thesis, the mind is not limited to the skull but can extend into the world, encompassing external objects and tools that are actively integrated into our cognitive processes. This perspective suggests that cognitive processes can extend to include external objects and tools that are used in a coupled system with the brain. For example, a notebook can be seen as an extension of the mind if it is used regularly to store and retrieve information. In this case, the notebook is not simply a passive container for information, but an active part of the cognitive process. The extended mind thesis has been applied to a variety of domains, including memory, perception, and problem-solving. For example, research has shown that people who use external aids to remember information are more likely to remember that information than those who do not. This suggests that external aids can become functionally integrated with the mind, serving as a kind of extended memory.
Re-Internalization, System 0 and Thinking Partners
Re-internalization is a crucial process in learning and cognitive development. By transforming external information or cognitive processes into internal mental representations, we reduce our dependence on external resources and enhance our ability to perform tasks independently. This could involve memorizing facts, internalizing problem-solving strategies, or even developing an intuitive understanding of complex concepts initially learned through external aids like textbooks or diagrams. Re-internalization allows us to offload cognitive burden from the external environment and integrate new knowledge into our existing cognitive frameworks, making it more readily accessible and applicable in various situations.
This process highlights the dynamic interplay between internal and external cognitive systems. We constantly interact with external resources like books, technology, and other individuals to expand our knowledge and cognitive abilities. Re-internalization bridges the gap between these external resources and our internal cognitive processes. By internalizing external information, we not only gain knowledge but also develop our cognitive skills and enhance our overall cognitive flexibility. This continuous cycle of interaction with external resources followed by re-internalization drives our cognitive growth and allows us to adapt to new challenges and environments.
Chiriatti et al. (2024) point out that data-driven AI systems are becoming increasingly integrated into our daily lives, reshaping how we think and make decisions. These AI systems, which can process vast amounts of data and perform complex computations beyond human capabilities, are forming a distinct psychological system. This system, referred to as "system 0" (named by analogy with Systems 1 and 2 of dual-process theory: thinking fast and slow), represents the outsourcing of certain cognitive tasks to AI and creates a dynamic, personalized interface between humans and information. The term emphasizes its foundational role in modern cognition and underscores its function as a preprocessor and enhancer of information, actively shaping the inputs to traditional cognitive systems.
System 0 can be considered an extension of the human mind, meeting the cognitive extension criteria of information flow, reliability, durability, trust, procedural transparency, informational transparency, individualization, and transformation. However, system 0 differs from other cognitive systems (fast, intuitive thinking and slow, analytical thinking) in its lack of inherent meaning-making capabilities. Although it can process and manipulate data efficiently, system 0 may not truly understand the information it handles. Its ability to generate meaningful outputs relies entirely on human interpretation and meaning-making processes.
Barandiaran & Pérez-Verdugo (2024) explore the concept of generative midtended cognition, a novel framework for understanding the interplay between human cognitive processes and artificial generative systems. This framework builds upon and extends existing theories of extended cognition, offering a nuanced perspective on how AI is transforming human creativity and agency. Unlike traditional tools that passively assist humans, AI in this context becomes a co-creator, making generative contributions that shape the final creative output. Midtended cognition refers to a hybrid cognitive process in which an AI system actively participates in a human's intentional creative process. Midtention therefore bridges the gap between traditional notions of intention (internal) and extension (external tools): it describes a state where the AI's generative suggestions become deeply integrated with the human's internal creative process, leading to a co-created output that would not be possible without the AI's active contribution.
Collins et al. (2024) further describe AI as thought partners (systems built to meet our expectations and complement our limitations). In the context of collaborative cognition, a thought partner is an entity, either human or AI, that actively engages in various aspects of thinking with another agent. This collaboration aims to enhance the overall thinking process and outcomes. A thought partner can contribute by offering different perspectives, providing and organizing information, helping to plan and make decisions, and assisting in problem-solving.
Synthetic Minds and Ways of Thinking (WoTs)
Conceptual overview of Synthetic Minds and WoTs
The "space of possible minds" represents the vast landscape of all conceivable ways of thinking, encompassing not just human minds, but also potential minds of other species, artificial intelligences, and even hypothetical beings. This space is multi-dimensional, varying in cognitive architecture, capacity, content, and embodiment. Within this immense space, domain experts inhabit specific regions, characterized by their specialized cognitive toolkits. These toolkits, honed through years of training and practice, allow them to navigate the complexities of their fields and contribute unique insights. Indeed, each field develops its own vocabulary and concepts to describe and understand its subject matter. These concepts act as mental building blocks, allowing experts to organize information, think efficiently, identify patterns, and generate hypotheses.
Synthetic Minds are designed agents that execute a proprietary blend of prompts that mimic unique minds, ways of thinking, and mind flows. The purpose of a “mind flow” is to flow with the information, open search spaces, and widely capture the essence of a topic from that mind’s unique perspective. A mind flow generates a “rough-cut” thought (tens to hundreds of pages in length) that is used for downstream mindstorms and thought expression.
Most ways of thinking (WoT) arise from a need to address recurring challenges. Faced with complex problems, individuals and groups develop strategies and mental models to better understand and tackle those situations. These initial approaches are often refined through a process of trial and error. Successful strategies are retained and improved upon, while less effective ones are discarded. This iterative process leads to the gradual development of more robust and reliable ways of thinking.
They are also shaped by social interaction and cultural transmission. We learn from others, observe their approaches, and adopt or adapt their strategies. This social learning accelerates the development and spread of effective thinking patterns. Over time, successful ways of thinking may become formalized and codified. This often involves documenting the principles, methods, and best practices associated with a particular approach. This formalization makes it easier to teach, share, and apply these ways of thinking.
Advances in Generative AI have created the means to formalize and codify these ways of thinking into Agentic systems that can enact these thinking patterns, thus offloading these thinking practices. Understanding this broader context helps us appreciate the diversity and interconnectedness of different ways of thinking, and recognize the potential for expanding our own cognitive horizons by developing synthetic minds and ways of thinking (WoT).
Examples of Synthetic Minds and WoTs
Beginner’s Mind: approaches a subject with the openness, eagerness, and lack of preconceptions of a true beginner. | First Principles: breaks down a subject of interest into its most basic, foundational elements, propositions, and assumptions. |
Historian: studies a subject from a historical perspective across various time periods and social/cultural contexts. | Futures Wheel: explores the potential ripple effects of a specific change or event, identifying both positive and negative consequences. |
Futurist: envisions a subject from possible-futures perspectives, including driving forces, interactions, and consequences. | Dialectical Thinking: engages opposing viewpoints and contradictions, seeking to reconcile them or find synthesis. |
Among others: psychologist, anthropologist, economist, engineer, physicist, ethicist, philosopher, etc. | Among others: systems thinking, backcasting, Pareto principle, wild cards, game theory, Socratic questioning, etc. |
Mind Factory
Conceptual overview
The mind factory is an AI-based process for building and refining synthetic minds and ways of thinking. It draws inspiration from cognitive science, particularly cognitive scaffolding and cognitive apprenticeship. The mind factory progresses through cycles and outputs various artifacts. It begins with exploration of the space of possible minds to produce an inventory of minds and ways of thinking (WoTs). These are used to create personas and to identify cognitive tools, including frameworks, methodologies, and thinking styles. Influential thinkers provide further insight for generating a cognitive scaffolding that produces an alpha version of the mind, which is then refined into its delivered form.
A prototype version of the mind factory has been used to construct 100+ minds/WoTs.
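The factory stages described above can be modeled as a simple pipeline of functions. This is a hedged sketch under our own assumptions: the stage names follow the text, but the data shapes and the refinement logic are illustrative stand-ins, not the prototype's implementation.

```python
# Illustrative sketch of the mind-factory pipeline: inventory -> persona +
# cognitive tools -> scaffolding (with influential thinkers as exemplars) ->
# alpha version -> refined mind. All structures here are toy assumptions.

def build_mind(way_of_thinking, cognitive_tools, influential_thinkers):
    persona = {"wot": way_of_thinking, "tools": list(cognitive_tools)}
    scaffolding = {"persona": persona, "exemplars": list(influential_thinkers)}
    alpha = {"version": "alpha", **scaffolding}
    return refine(alpha)

def refine(mind, rounds=2):
    """Refinement pass; in practice, evaluation and prompt-revision cycles."""
    for _ in range(rounds):
        mind = {**mind, "version": "refined"}
    return mind

mind = build_mind("first principles",
                  ["decomposition", "assumption testing"],
                  ["Aristotle", "Feynman"])
print(mind["version"])  # refined
```

Treating each stage as a function keeps the pipeline inspectable, which matters when a factory is expected to run semi- to fully-autonomously.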
Mindstorm (Compounding Minds)
Conceptual overview of mindstorms
The concept of "mindstorms" in this context draws a parallel to brainstorming sessions within human teams, where ideas are exchanged, refined, and solutions emerge through dialogue and collaboration. In a mindstorm, AI agents engage in a dynamic and iterative process of communication and collaboration, similar to a lively debate or discussion among humans. They ask questions to seek clarification or gather information, provide answers based on their knowledge or understanding, offer suggestions or proposals for solutions, and even challenge each other's viewpoints to stimulate deeper analysis and critical thinking.
The term "mindstorm" is particularly apt because it captures the seemingly chaotic and unpredictable nature of these interactions. The flow of ideas, questions, and responses may appear unstructured and even disorganized, as agents build on each other's contributions, explore different avenues, and sometimes even backtrack or change direction. This collaborative environment allows agents to leverage each other's strengths, overcome individual limitations, and collectively achieve results that would be difficult or impossible to attain alone.
Mindstorms are integrated into mind flows to “refine” rough-cut thoughts:
Thought Refinement: the process of removing impurities or unwanted elements from a thought and the iterative improvement or clarification of thoughts.
The refinement step is followed by the “synthesis” step in which methods are applied to bring back together a complete fine-cut thought.
Thought Synthesis: the combination of components or elements to form a fine-cut thought vector.
This iterative process continues until a satisfactory “fine cut” thought is reached.
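The refine-then-synthesize cycle above can be sketched as a small loop. This is a toy illustration under our own assumptions: the flag-based critique, the length-based stopping criterion, and the join-based synthesis are stand-ins for real evaluation and recombination methods.

```python
# Minimal sketch of a mindstorm: multiple minds iteratively remove
# "impurities" from a rough-cut thought (refinement), the retained
# components are recombined (synthesis), and the loop repeats until a
# satisfactory fine-cut thought is reached. All logic here is illustrative.

def refine_step(thought, minds):
    """Each mind drops the claims it judges impure (toy: a flagged word)."""
    for mind in minds:
        thought = [claim for claim in thought if mind["flag"] not in claim]
    return thought

def synthesize(components):
    """Recombine the retained components into one fine-cut thought."""
    return " ".join(components)

def mindstorm(rough_cut, minds, target_len=3, max_rounds=5):
    thought = rough_cut
    for _ in range(max_rounds):
        thought = refine_step(thought, minds)
        if len(thought) <= target_len:   # satisfactory "fine cut" reached
            break
    return synthesize(thought)

minds = [{"flag": "vague"}, {"flag": "redundant"}]
rough = ["core insight", "vague aside", "redundant point", "key evidence"]
print(mindstorm(rough, minds))  # core insight key evidence
```

In a real mindstorm the critique step would itself be dialogic (questions, challenges, proposals between agents) rather than a fixed filter, but the refine/synthesize/repeat shape is the same.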
Metacognition
Conceptual summary
Metacognition, often described as "thinking about thinking," is fundamentally a self-referential process; it involves turning our cognitive view inward to examine our own thought processes. This self-reflection allows for continuous self-improvement as we identify our cognitive strengths and weaknesses, plan and adapt learning strategies, and monitor our progress toward goals. This iterative process is inherently open-ended, as the understanding of one's own thinking is an ongoing set of exploration and refinement.
Our inner monologue, the continuous internal conversation we have with ourselves, provides a crucial platform for metacognition. This inner voice acts as the self-referential tool through which we observe and analyze our own thoughts. By engaging in this internal dialogue, we become aware of our cognitive processes in real-time, facilitating the self-improving aspect of metacognition. We use our inner monologue to rehearse strategies, evaluate our comprehension, and adjust our approach as needed.
Using comparable methods, we are able to create a system and method for open-ended exploration across the space of possible minds, and for the self-referential, self-improving optimization of minds, mind flows, and mindstorms.
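One way to picture this self-referential optimization is as a loop in which a metacognitive evaluator scores each mind and revises the weaker ones before the next generation. The scoring and revision rules below are toy assumptions of our own, not a production method.

```python
# Illustrative sketch of metacognitive optimization over a population of
# minds: evaluate each mind, then revise those scoring below the median.
# "depth"/"breadth" and the revision rule are hypothetical stand-ins for
# real evaluator-driven prompt revision.

def evaluate(mind):
    """Stub metacognitive score; in practice, an evaluator mind or rubric."""
    return mind["depth"] + mind["breadth"]

def revise(mind):
    """Self-improvement step: strengthen the weaker dimension."""
    key = "depth" if mind["depth"] < mind["breadth"] else "breadth"
    return {**mind, key: mind[key] + 1}

def optimize_minds(minds, generations=3):
    for _ in range(generations):
        scores = sorted(evaluate(m) for m in minds)
        cutoff = scores[len(scores) // 2]                 # median score
        minds = [revise(m) if evaluate(m) < cutoff else m for m in minds]
    return minds

minds = [{"depth": 1, "breadth": 3}, {"depth": 4, "breadth": 4}]
print(max(evaluate(m) for m in optimize_minds(minds)))  # 8
```

Because the evaluator is itself a mind, the loop is self-referential: improvements to the evaluator propagate to every mind it scores.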
Expressions and Expression Lab
Conceptual overview of expressions (L1-L4)
Thought Expression: to convey a thought in words (and other modalities), formats, styles, templates, arrangements, and commonly used artifacts, that make thoughts consumable, usable, accessible and internalizable.
L1 Expressions: an “in-depth” expression of the subject from a particular mind.
L2 Expressions: a “summarized” expression of the subject from a particular mind.
L3 Expressions: the “combined insights” on the subject from multiple minds’ perspectives, rendered in several useful, consumable output structures (essays, study guides, etc.).
L4 Expressions: the use of different modalities of expression, e.g., speech and digital avatars for audio/visual consumption.
We apply a "click to create" approach to streamline complex AI interactions and make it easier for users to derive insights from various minds and ways of thinking (WoTs). For example, with a single click, users can generate study guides, briefings, or essays based on their subject of interest, uploaded sources, and selected minds/WoTs. Expressions also make it possible to generate engaging audio readouts or podcast-like discussions, offering an alternative way to consume and understand the information.
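The four expression levels can be sketched as successive transformations of a fine-cut thought. This is a hedged illustration: the `l1`-`l4` helpers are hypothetical stand-ins for real generation steps (in-depth drafting, summarization, multi-mind synthesis, speech rendering), not Swizzel's implementation.

```python
# Illustrative sketch of the L1-L4 expression levels: in-depth (L1),
# summarized (L2), combined multi-mind structures (L3), and alternative
# modalities (L4). All function names and formats are toy assumptions.

def l1_in_depth(thought, mind):
    return f"[{mind}] in-depth: {thought}"

def l2_summary(thought, mind):
    return f"[{mind}] summary: {thought[:40]}"

def l3_combined(thought, minds, structure="study guide"):
    insights = [l2_summary(thought, m) for m in minds]
    return f"{structure}:\n" + "\n".join(insights)

def l4_modality(expression, modality="audio"):
    """Stand-in for rendering to another modality (e.g., speech synthesis)."""
    return {"modality": modality, "script": expression}

thought = "Urban mobility is shaped by path-dependent infrastructure choices."
guide = l3_combined(thought, ["Historian", "Futurist"])
podcast = l4_modality(guide, "audio")
print(podcast["modality"])  # audio
```

A "click to create" action would simply bind one of these transformations to the user's selected sources and minds.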
Conclusion and forward-looking statement
Scientific and theoretical insights from cognitive offloading, re-internalization, and the emergence of "system 0" provide a robust foundation for understanding how synthetic minds can extend human cognitive capabilities. Together with generative midtended cognition and the concept of AI as thought partners, these insights solidify the notion of AI agents as active participants in human thought processes rather than passive tools. This convergence of theory and practice positions us at a critical inflection point, poised to unlock unprecedented levels of human flourishing through the strategic development and deployment of intelligent agents.
We envision a future where AI agents are ubiquitous collaborators, seamlessly integrated into our lives and work, augmenting our cognitive abilities and empowering us to transcend our current limitations. Our commitment to this collaborative vision will shape the future of cognition and contribute significantly to human flourishing in this emerging world.
About Us
Russell Hargraves
Russell Hargraves is a dynamic business leader at the forefront of data networks, artificial intelligence, multimodal learning, and high-performance computing. He is a highly experienced executive with over 25 years of expertise at leading Fortune 500 companies and startups in the consumer, education, healthcare, life sciences, and manufacturing industries. He currently serves as Co-Founder of Swizzel.ai and as Founder and Managing Partner of The Eighty-Six Four Group, offering investment, consulting, and advisory services to emerging technology companies.
Dr. Jordan McAfoose
Dr. Jordan McAfoose is a psychologist, neuroscientist, and AI expert. His multidisciplinary background provides over twenty (20) years of foundational knowledge and the unique insight behind Swizzel.ai's approach to blending cognitive science with AI agents. He is an entrepreneur, inventor, and innovation executive tasked and trusted by C-suite executives to think big, solve for the unknown, advise on the future, and manage complex technical programs. His background also includes experience negotiating and closing complex intellectual property deals.
Nathan Robinson
Nathan Robinson is a product leader, entrepreneur, and AI expert with over eight years of experience building cutting-edge, AI-enabled software. His work spans Fortune 100 companies and 0-to-1 startups, where he has a proven track record of building and scaling AI-powered solutions across industries. As a founder and product innovator, Nathan is dedicated to pushing the boundaries of software and AI to transform workflows and extend human capabilities.
Glossary
| Term | Definition |
| --- | --- |
| Agent-centric interactions | Interactions that primarily involve agents, which can be human-agent or agent-agent. |
| Cognitive boundaries | The limitations of an individual's or group's mental abilities, influenced by factors such as finite knowledge, inherent biases, and processing capacity. |
| Cognitive light cone | A metaphorical representation of the limits and scope of an agent's cognition, encompassing all that it can perceive, process, and act upon. |
| Cognitive offloading | A strategy where cognitive tasks are outsourced to external tools or other agents to reduce the burden on internal cognitive resources. |
| Cognitive scaffolding | A process of providing temporary support to learners as they progress through their learning tasks. |
| Cultural accumulation | The process by which knowledge, skills, and technologies are passed on and improved upon across generations, contributing to the long-term progress of human societies. |
| Cultural ratcheting | The unidirectional and cumulative nature of cultural evolution, where knowledge and practices tend to build upon previous achievements, preventing slippage back to earlier stages. |
| Extended mind thesis | The concept that the mind is not limited to the brain but can extend into the world, encompassing external objects and tools that are actively integrated into cognitive processes. |
| Generative midtended cognition | A framework for understanding how AI is transforming human creativity and agency by becoming a co-creator, making generative contributions that shape the final creative output. |
| Metacognition | "Thinking about thinking," a self-referential process that involves examining one's own thought processes to identify strengths and weaknesses, and adapting learning strategies for continuous self-improvement. |
| Mind flows | The unique ways in which a synthetic mind processes and analyzes information, designed to capture the essence of a topic from a specific perspective. |
| Mindstorms | A dynamic and iterative process of communication and collaboration among AI agents, simulating brainstorming sessions to refine and synthesize thoughts. |
| Open-ended systems | Systems capable of ongoing innovation and improvement without a fixed limit, exemplified by the progressive accumulation of knowledge in cultural ratcheting. |
| Re-internalization | The process of transforming externally stored information or cognitive processes into internal representations, reducing reliance on external resources for future tasks. |
| Synthetic minds | Designed agents that execute a proprietary blend of prompts to mimic unique minds, ways of thinking, and mind flows. |
| System 0 | A distinct psychological system representing the outsourcing of certain cognitive tasks to AI, acting as a preprocessor and enhancer of information that shapes inputs to traditional cognitive systems. |
| Thought partners | AI systems or human agents that actively engage in various aspects of thinking with another agent to enhance the overall thinking process and outcomes. |
| Ways of thinking (WoTs) | Specific approaches or mental models developed by individuals or groups to address recurring challenges. |
Reference Papers
Barandiaran & Pérez-Verdugo, 2024: Generative Midtended Cognition: A New Frontier in Extended Cognition: This paper introduces "generative midtended cognition" to describe how AI's generative contributions deeply integrate with human creative processes, leading to co-created outputs.
Bhoopchand et al., 2023: Learning few-shot imitation as cultural transmission: This work demonstrates that AI agents can achieve cultural transmission through real-time few-shot imitation of human actions in novel contexts without pre-collected data.
Chiriatti et al., 2024: The case for human–AI interaction as system 0 thinking: This paper proposes "system 0" to describe the outsourcing of cognitive tasks to AI, creating a dynamic interface that preprocesses information for human cognitive systems.
Clune, 2019: AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence: This approach is inspired by machine learning trends and the existence proof of Darwinian evolution.
Collins et al., 2024: Building machines that learn and think with people: This research explores AI as "thought partners" in collaborative cognition, where AI complements human limitations by engaging in various aspects of thinking.
Cook et al., 2024: Artificial Generational Intelligence: Cultural Accumulation in Reinforcement Learning: This study explores how balancing social learning and exploration in reinforcement learning agents can achieve cultural accumulation, leading to generational performance gains.
Hu & Clune, 2023: Thought Cloning: Learning to Think while Acting by Imitating Human Thinking: This paper introduces Thought Cloning, a novel imitation learning framework where agents learn to think in language by imitating human thought processes.
Hughes et al., 2024: Open-Endedness is Essential for Artificial Superhuman Intelligence: This paper argues that open-endedness, defined by the continuous generation of novel and learnable artifacts, is essential for achieving Artificial Superhuman Intelligence.
Morgan & Feldman, 2024: Human culture is uniquely open-ended rather than uniquely cumulative: This research argues that while cumulative culture exists in other species, human culture is uniquely characterized by its open-ended nature.
Yan et al., 2024: Promises and challenges of generative artificial intelligence for human learning: This paper examines the potential benefits and drawbacks of using generative AI in human learning contexts.
Zhang et al., 2025: Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents: This paper introduces the Darwin Gödel Machine, a system that iteratively modifies its own code, empirically validates each change, and maintains an archive of agent variants to drive open-ended self-improvement.