What is ChatGPT? An introduction to humanism, transhumanism and posthumanism
I’m sharing these notes for an upcoming talk in case other people find them interesting:
Introduction: My Journey
- My initial skepticism and curiosity about ChatGPT
- Growing fascination with these conversations – “dancing with your own intellect as reflected back through a computational mirror”
- The uncanny experience of talking to something that seems intelligent
- Key question: What exactly are we interacting with?
- Defining them as technical objects: ‘autocomplete on steroids’ (a minimal sketch follows this list). But what about defining them as social objects?
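To make the ‘autocomplete on steroids’ framing concrete, here is a minimal sketch of what these systems do at a technical level: repeatedly predict the most likely next token given everything written so far. The Hugging Face transformers library and the small GPT-2 model are purely illustrative stand-ins here, not the systems discussed in the talk.

```python
# A minimal sketch of 'autocomplete on steroids': a language model repeatedly
# predicts the most likely next token given everything written so far.
# GPT-2 is used only because it is small and freely available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "What exactly are we interacting with when we talk to a chatbot?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):                       # extend the prompt by 20 tokens
    with torch.no_grad():
        logits = model(input_ids).logits  # scores for every possible next token
    next_id = logits[0, -1].argmax()      # greedily pick the single most likely one
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))     # the prompt plus its 'autocompleted' continuation
```

Nothing in that loop refers to understanding or intention, which is exactly why the question of what these systems are as social objects remains open.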
Defining The Three Perspectives:
Humanism positions human beings at the center of philosophical and moral concern, emphasizing our unique capacity for reason, creativity and meaning-making. It sees consciousness and self-awareness as distinctly human traits that machines can at best simulate but never truly possess.
Transhumanism views technology as a means to enhance and extend human capabilities, not seeing a fundamental divide between human and machine but rather understanding technical systems as cultural products we can harness to augment our existing capacities.
Posthumanism questions the boundaries between human and machine, nature and culture, suggesting we need new frameworks to understand forms of intelligence and agency that don’t fit neatly into humanist categories. It opens up the possibility of genuine alien intelligence emerging from our technical systems.
Three Ways of Understanding These Tools:
- The Humanist View
  - Defines humans through our unique capacity for creativity and reason
  - Sees AI as fundamentally limited – it can only imitate, not create
  - Example: Nick Cave’s reaction that AI “has no inner being” and “can never have an authentic human experience”
  - My experience: Initially shared this view but found it increasingly difficult to sustain when faced with the sophistication of these conversations
  - Key tension: What if AI can produce work indistinguishable from human output?
  - Deeper question: Is consciousness really what matters most?
- The Transhumanist View
  - Sees AI as a cultural technology we can harness
  - Focuses on how these systems extend human capabilities
  - Example: Using Claude/ChatGPT as intellectual interlocutors who help refine our thinking
  - My experience: Finding these tools genuinely helpful for developing ideas and clarifying thinking
  - Views them as sophisticated tools emerging from human culture
  - Connects to my work on digital scholarship and academic practice
  - Key tension: Are we underestimating their transformative potential by domesticating their strangeness?
- The Posthumanist View
  - Sees AI as a form of alien intelligence we’re learning to interact with
  - Questions whether our human-centric categories can make sense of what these systems are
  - Example: The uncanny feeling of talking to something that thinks, but in radically different ways
  - My experience: The persistent strangeness of these interactions even after months of regular use
  - Treats AI as neither tool nor replica but as something genuinely ‘other’
  - Relates to my work on platform capitalism and technological change
  - Key tension: How do we relate to something that can match human complexity but isn’t human?
What’s Really at Stake
- Not just abstract philosophy but practical questions about:
  - How we teach – the future of education
  - How we write – the nature of authorship
  - How we think – the boundaries of human cognition
  - How we create – the meaning of creativity
- Each perspective suggests different answers to:
  - Should we embrace these tools as extensions of human capability?
  - Should we maintain firm boundaries between human and machine?
  - Should we prepare for dialogue with alien intelligences?
- My position: Need for thoughtful engagement rather than rejection or uncritical embrace
- These questions will only become more urgent as the technology develops