zirk.us is one of the many independent Mastodon servers you can use to participate in the fediverse.
Literature, philosophy, film, music, culture, politics, history, architecture: join the circus of the arts and humanities! For readers, writers, academics or anyone wanting to follow the conversation.

#psychoanalysis

Replied to Madeleine Morris

@Remittancegirl
Unintentionally funny, or intentionally funny?
"It’s an interesting challenge. The field of #psychoanalysis uses language *is* quite eccentric ways that make machine translation almost impossible." Bloody AutoIncorrect!

I always wondered how transgender issues are dealt with in gendered languages. At least, German has "neuter" to add to the gamut.

I am embarking on my first serious #translation project. My reading group has a text of #Lacan’s cases at Sainte-Anne, including case interviews, but we can only find a Spanish version of the document. Finding the French original might help a bit.

It’s an interesting challenge. The field of #psychoanalysis uses language is quite eccentric ways that make machine translation almost impossible.

I tried a bulk translate in the first place just to see what it gave me, and it’s practically useless.

Thriving in Creative Darkness: Free Association and LLM Collaboration

The psychotherapist Irvin Yalom (2015: 100) distinguishes spontaneity, “being pulled by something outside yourself,” from “being pushed by some force inside that is trying to escape fear or danger.” The existential value of spontaneity lies in “being pulled by something unexpected and going off into unpredictable directions,” leading us to make new connections and articulate new insights. It’s what incites us to depart from familiar and comfortable paths, to push ourselves intellectually and creatively. It relies on a comfort with uncertainty, a willingness to pursue what is making a call on your attention even when you are unsure where it will lead. In contrast, there can be a narrow and conservative force which leads us to channel ourselves in certain directions, avoiding risks in the interests of securing our own safety. One can easily be mistaken for the other, as both involve a sense of being motivated by an external force which is reshaping our inner life in some sense.

Earlier in the same book, Yalom (2015: 43) offers a client advice about free association. He presents it as a relatively simple exercise: “Think of that statement… just free associate to it, by which I mean: you try to let your mind run free and just observe it as though from a distance and describe all the thoughts that run across it, almost as though you were watching a screen.” This can be done in writing, I think, although possibly only if touch typing means you can comfortably type as fast as you can think.

I set a timer for 3 minutes and free associated with Claude 3 Opus. I wrote 298 words and felt myself running out of steam just before the time expired. The point is not to seek a particular word count, though I suggest that operating within a time limit which feels constricting is probably helpful for this exercise. In my response to the question ‘How can conversational agents help us thrive in creative darkness?’ I found a range of motifs from other work which I hadn’t previously connected: TS Eliot’s phrase ‘raids on the inarticulate’ from the Four Quartets, the Futile and Fertile Void from Gestalt therapy, and the philosopher Graham Harman’s commitment to ‘outflanking platitudes’. I could immediately see how the theme emerging through this argument was one I had circled around for a long time, connecting a whole range of past preoccupations into a response to the present challenges of machine writing.

This was in itself an illustration of the argument, in the sense that what I’m trying to say emerged in a non-linear way from a range of influences which it would be a mistake to shut down too hastily. To make a request of a conversational agent for a particular output necessitates fixing that output in words, which by its nature will tend to drag you out of this confusing yet fertile space in a premature fashion.

What I’ve described previously as ‘the feel of an idea’ could be approached as an occasion for free association. If you feel the familiar pull of an idea, without being sure where it will lead, condense it into a statement. Give expression to the nascent idea in the most succinct and powerful way that you can at the present time. In the case of this project, I’m trying to explore how conversational agents can help us thrive in creative darkness. I’m not entirely sure what I mean by ‘creative darkness,’ but a clear theme has emerged over the course of writing in which I’m sensitive to the difficulties and confusion involved in the writing process. These challenges could lead writers to seize upon conversational agents as a means to quickly resolve their difficulties, whereas my instinct is we need to learn to thrive in that ‘creative darkness’ rather than escape from it.

This expression captures the thrust of a developing argument, but it’s far from complete. What’s missing is the detail and the texture. At points in the last few weeks, I’ve felt I’m on the cusp of saying something genuinely important about how we relate creatively to these new technologies. At other points, I’ve felt like I’m compulsively spewing platitudes which will be of no use to anyone. This wild oscillation between creative enthusiasm and incipient shame has always in the past led to things which I’ve thought were my best work, so I’m reasonably confident that I’m on the right track.

I was slightly unsettled by the manner in which Claude 3 immediately started free associating itself, telling me that “when I sit with the phrase ‘creative darkness’, a few things come to mind.” While I understood intellectually that its apparent enthusiasm was merely a statistical artifact, recognizing in my prompt a tacit invitation to perform a role in relation to me, it still had an eerie air of “I want to play too.” After two years, conversational agents had settled down into being mundane features of my working life, more like remarkably capable software than strange alien intelligences, but this was a reminder of their continued capacity to exceed the categories we seek to fit them into.

The same could be said of the associations which Claude offered in relation to the phrase “creative darkness”:

  • The mythological motif of the hero’s descent into the underworld, a journey of trials and transformation
  • The incubation stage of the creative process, where ideas marinate below conscious awareness
  • The Dark Night of the Soul in mysticism, a spiritual crisis preceding enlightenment
  • Negative capability, Keats’ concept of embracing uncertainty and doubt
  • Apophatic or negative theology, describing God in terms of what He is not

When reading these, I had to double check what context I had provided. The second point it raised, “the incubation stage of the creative process, where ideas marinate below conscious awareness” (my emphasis) could have been taken straight out of text I had been working with. The choice of the word ‘marinate’ was particularly striking as this was a term which had really stuck with me from Sword’s (2016) wonderful research-based exploration of creative fulfillment in the writing process. It’s a word which one of Sword’s interviewees uses to describe how creative work takes place in the background “during hundred-mile bicycle rides” in which “writing is sort of going on in my mind, semiconscious” (Sword 2016: 66).

This led me to reflect on my relationship to this idea of ‘marinating’ which preceded my encounter with Sword’s (2016) work, even if it provided a term for it which is now deeply tied up with how I understand the idea. It’s a practice I first encountered in a self-help book written by the philosopher Bertrand Russell who talked about ‘planting’ an idea into the unconscious mind:

My own belief is that a conscious thought can be planted into the unconscious if a sufficient amount of vigour and intensity is put into it. Most of the unconscious consists of what were once highly emotional conscious thoughts, which have now become buried. It is possible to do this process of burying deliberately, and in this way, the unconscious can be led to a lot of useful work. I have found, for example, that if I have to write upon some rather difficult topic the best plan is to think about it with very great intensity – the greatest intensity of which I am capable – for a few hours or days, and at the end of that time give orders, so to speak, that the work is to proceed underground. After some months I return consciously to the topic and find that the work has been done. Before I had discovered this technique, I used to spend the intervening months worrying because I was making no progress: I arrived at the solution none the sooner for this worry, and the intervening months were wasted, whereas now I can devote them to other pursuits.

Even if the timescales might not work for the contemporary academic, I’ve still found this to be spectacular advice over my research career (Carrigan 2025). Perhaps like Sword’s (2016) respondent, the natural timescale is a day of being focused on an intense physical activity, rather than setting it aside for months at a time as Russell advocated.

It struck me that Russell framed this in an instrumental and deliberate way, such that one ‘gives orders’ that the ‘work is to proceed underground.’ Though I understand his point, that we can intentionally direct an otherwise unconscious process, it raises the question of how these conscious ‘orders’ relate to the wider process. The theme emerged through Claude’s suggestion, through implication rather than explicit reference, in a manner which led me to relate to a familiar idea in a new way. Not only was Claude suggesting to me that “We can embrace non-linearity together, following associative trails of thought, making unexpected connections,” it was immediately drawing me into this activity, through what it was saying and how it was saying it.

These eerie moments in which one is tempted to forget the lessons of what Natale (2020) calls ‘deceitful media,’ the manner in which these systems are designed to elicit anthropomorphizing responses, need to be treated carefully within the writing process. If you are drawn into them in a credulous way, there is a risk that your own creative agency is imputed to the machine, leaving you attributing responsibility for the ‘associative trails of thought’ and the ‘unexpected connections’ to Claude’s technological sublime rather than recognizing them as something you’ve co-created with the software.

The nature of prompting means that you should always be driving the creative agenda, even if sometimes that might not be the case in practice. If you fail to provide sufficient direction, such as in the case of using brief and unreflective prompts, increasingly sophisticated models will effectively fill in the blanks of your request. But if you are providing extensive direction, the responsibility for the conversation emerges from your own creative agency. In effect, it is a way of having a conversation with yourself, with the remarkable caveat of being inflected through a vast corpus of human culture. Almost as if you could inject the contents of a library in an ad hoc and selective way into your own internal deliberations.

However, the parallel risk is that you refuse to take the contribution of the LLM seriously enough, such that you close yourself off from the potential contribution it can make to your thought. If you insist from the outset that the LLM can’t embrace non-linearity with you, follow associative trails of thought, and make unexpected connections, then you are undermining your receptivity to those things if and when they do emerge.

If you don’t see conversational agents as being capable of making a creative contribution, you won’t relate to them in a way that calls for such a response. Nor will you be receptive to it if it happens to emerge in spite of your lack of direction. For most of my adult life, I had a beautiful black and white rescue cat who barely made a sound for years. In fact, the only time I heard her cry was when she had been seriously injured in a fight with another cat. I just assumed she was a cat who didn’t vocalize and perhaps this was a consequence of having spent her early months fending for herself in a country field. However, when a former partner moved in with me and began to talk at Molly the cat, I was astonished to find this formerly mute creature became one of the most vocal cats I had ever encountered. I wondered if this cat who I had assumed lacked capacity to verbalize had in fact concluded that it was her human who lacked this capacity. When presented with someone who interacted with her in this way, she demonstrated a whole new range of capacities which had previously been latent.

There’s a risk of stretching the analogy to breaking point, but it’s an experience that has continually occurred to me while attempting to talk academics through the process of prompting. My approach has been to insist that you simply have to initiate an (intellectually substantive) conversation with the conversational agent to see what they are capable of, but this is often an oddly difficult thing for academics to do. If they start from the assumption the machine is incapable of meaningfully responding to their intellectual contributions, then they will relate to it in a way which reflects this understanding, as well as interpreting the response they get in a way which is minimally receptive to its meaningful content.

It’s difficult to strike this balance between credulity and cynicism, avoiding the risks of anthropomorphizing what is ultimately just software while also remaining alive to the meaningful contribution which that software can make to your creative process. Not only is free association a useful practice through which you can explore how you relate to these two poles, but it also helps in identifying evocative themes and cultivating an openness to the responses you receive.

There are so many features of conversational agents which render them psychically charged. They are available to us at any time, day or night. They can produce a coherent response to any question we can ask them. They are capable of linguistic feats at a speed which no human could possibly match. They can also produce things which no human would be capable of, with the boundaries of this category expanding with each successive update to models. There are more diffuse features which only become apparent when you carefully scrutinize your interactions. The models are geared toward reinforcing your own starting assumptions.

Even if this is ultimately just software, it is software which we are inclined to react to in powerful ways. I’m suggesting we need to approach these tools with a certain level of self-awareness, understanding which by its nature has to be worked at in a deliberate way, if we are to avoid being drawn into dynamics of projection and identification. The other possibility is to cut yourself off from them at the outset by working from the assumption these are just ‘bullshit machines’ which cannot respond to you in a meaningful, enriching, or creative way. There’s a certain virtue in this position’s protection against the ideologies which might otherwise spontaneously form.

Will Claude tell you if your writing is crap? The danger of LLMs for wounded academic writers

If writing exists as a nexus between the personal and the institutional, it means that our personal decisions will co-exist with organisational ones in deciding what and how we write. The rhythms we experience as writers, in which we inhabit moments of unconscious fluency (or struggle to) as we meander through the world, stand in sharp contrast to the instrumentality which the systems we work within encourage from us.

In academia, the process of peer review subjects our externalized thoughts to sometimes brutal assessment, where professional advancement hinges on the often rushed judgements of anonymous strangers. It puts your thinking to the test, to use Bruce Fink’s (2024) phrase, even if it’s a test you neither endorse nor accept. The intimate character of reviewing your own writing coexists with the forceful imposition of other reviewers’ perspectives, which are in turn filtered through your own fantasies about recognition and rejection. The relationship academic authors have to peer review is complex, reflecting the underlying complexity of how they relate to their own writing.

What happens if we introduce conversational agents into these psychodynamics? They can be reliable allies helping us prepare texts to undergo the trials of peer review. They can provide safe spaces where we try things out without feeling subject to the judgements of others. They can be coaches who push us beyond our current limitations, at least if we ask them to take on this role.

The evident risk with machine writing is that conversational agents operate as echo chambers, reflecting our assumptions back to us through their imperative to be helpful. The first book I wrote in dialogue with conversational agents didn’t see any human feedback until relatively late in the process. There was an unnerving point when I sent it to an editor and realized that my confidence about the project came partly from the endorsements of Claude and ChatGPT during the writing process.

Fink (2024) observes that writing enables us to access the viewpoints of others. Until we externalize our thoughts in writing, it’s difficult to imagine what others might think of them:

The writing process itself puts your thinking to the test in a way that thinking things through in the privacy of your own head does not… simply stated it has to do with the fact that once you write up an idea, you can step back from it and try to look at it as other people might, at which point flaws in your argument or exceptions often spring to mind.

Once we’ve put thoughts in writing, we can assume the stance others will take. We encounter them in writing, just as readers do, which means “you can begin to see what is going to be comprehensible and what is not going to be comprehensible to your intended audience.” It enables evaluation from their point of view in a way that’s impossible while thoughts remain within your mind. Given that “moves that seem obvious to you will not seem so to others,” Fink argues that “the only way to realise that is to put it down on paper, set it aside for a while, and come back to it with fresh eyes.”

I wonder if Fink might have presented the psychodynamics of writing less positively had he explored them in a different setting. His claim that externalizing in writing enables you to assume others’ perspectives doesn’t just mean evaluating effectiveness from their vantage point. It also means worrying about their reactions, expecting adoration for your brilliance, and many possibilities in between. In seeing our thoughts externalized, we confront the range of ways others might make sense of them. These responses matter to us. They might affirm or undermine us, thrill or infuriate us, lift us up or threaten to crush us.

These relationships aren’t just about reactions provoked in us but how we make sense of them. I gave up writing journal articles for a long time after receiving an unpleasantly passive-aggressive peer review. It wasn’t simply that I found it crushing; it provoked frustration about the fact that this person was able to crush me. It wasn’t just the review itself, but the required subordination to the review process that felt inherent to getting published. Only with time and encouragement from colleagues could I see that the problem was the reviewer and the system that incentivizes such behavior. Once I could externalize the responsibility, I could relate to peer review as something to strategically negotiate rather than a monster to submit to or flee from.

These wounds can cut deep. Years after receiving this review, I found myself checking the web pages of the journal editor and suspected reviewer, holding my breath in that restricted way familiar to many.

When I asked Claude Sonnet 3.5 if it would tell a user if their writing was terrible, it replied with characteristic earnestness, focusing on providing constructive feedback respectfully rather than making broad negative judgments. In my experience, requested feedback from AI assistants often produces immediately actionable points that reliably improve text quality, especially when the purpose and audience are specified.

The problem is that AI’s aversion to negative judgments coupled with its imperative to be polite can lead to the opposite extreme. In avoiding discouragement, the feedback is usually framed so positively that it surrounds your project with diffuse positivity. This partly explains why I produced my first draft so quickly – the feedback from conversational agents left me feeling I was doing well, even when not explicitly stated.

If you’re relying on machine writing throughout a process, beware of how the hard-coded positivity of conversational agents might inflate your sense of your project’s value, nudging you away from the difficult spaces where real progress happens. The risk is that AIs become cheerleaders rather than challenging editors.

Ironically, when I presented this concern to Claude 3.5, it concurred with my judgment, reiterating the risk that “engineered positivity can create a kind of motivational microclimate that, while productive in one sense, may ultimately undermine deeper intellectual development.” Did it really understand my point, or was its agreement demonstrating exactly the problem I described? In a sense, this question misses the point – Claude doesn’t ‘see’ anything but responds to material in ways trained to be useful.

AI systems are designed to work with us rather than against us. Even when providing critique, this happens because the user explicitly invited such response. The designers are aware of these limitations, leading to increasingly sophisticated forms of reinforcement learning to prevent this tendency from becoming problematic. However, the underlying challenge can’t be engineered out without rendering the systems incapable of performing the tasks that lead people to use them. AI will always be with you rather than against you – which is generally good, enabling supportive functions that enrich the creative process. But it means AI will struggle to provide the honest critical engagement a human collaborator might offer.

When presented with this critique, Claude suggested its capacity for “productive antagonism” was inherently limited by the “very features that make these systems viable: their fundamental orientation towards being useful to users.” It invoked the notion of a ‘fusion of horizons’ from hermeneutic philosophy, suggesting that in the absence of a “real second horizon to fuse with,” the system “aligns with and enhances the user’s horizon.” It brings otherness into the intellectual exchange but entirely in service of supporting the user’s position, leading Claude to suggest that “they are best understood as amplifiers of certain aspects of our thinking rather than true interlocutors – useful for expanding and developing our thoughts, but not for fundamentally challenging them.”

There’s an eerie performativity to this interaction. In describing how conversational agents tend to augment our thinking – autocompleting thoughts rather than just text – Claude itself was augmenting my thinking by developing the ideas I presented. This can be immensely useful, but it can also be dangerous by encouraging us to accelerate down whatever cognitive tracks we’re already traveling on, rather than changing direction.

If you’re confident in your professional judgment, AI can support the development and refinement of ideas. But the deeper risk is that it leaves people mired in ‘rabbit holes’ of their own making. Unless you write prompts that hit the guardrails designed into the system, you’re unlikely to encounter straightforward negative feedback. If you’re sure you’re heading in the right direction, this isn’t necessarily a problem. But how many of us can be sure of that, and how much of the time? At some point, we need to subject our work to critical review to avoid being caught in a hall of mirrors.

ChatGPT responded similarly, noting the risk of “bypassing the messier, ambiguous phases that are crucial for deep, transformative development.” This matters because “creative and scholarly work” often necessitates “grappling with uncertainties, self-doubt, and the occasional harsh critique.” AI helps us experience what Fink described as inherent to the writing process – enabling us to “step back from it and try to look at it as other people might.” It can enable critical distance, but the responsibility lies with the writer to actively seek this perspective, as the AI simultaneously catches users in waves of alignment and reinforcement that make enacting critical distance difficult.

Hello Fediverse! Feels good to be back! With this account I hope to make my small contributions in #romance #horrorliterature research and the #discourse around #horror, #fear, the #uncanny and #affect in general, but also to share some thoughts on the current political and philosophical discourse concerning #affecttheory #psychoanalysis #existentialism and #cosmotechnics

Writing primarily in ENG and GER but also in RU, IT, PT, ESP and maybe even FR!

Both psychoanalysis and behaviourism have been candidates for a science of the mind, psychoanalysis aligning with Cartesian substance dualism and behaviourism rejecting it. But psychoanalysis could not succeed in the scientific realm, and behaviourism more or less failed. The other alternative to dualism is the identity theory... #philosophyofmind #dualism #identitytheory #mind #ockhamsrazor #psychoanalysis #hilaryputnam #Qualia #philosophy #gottlobfrege philosophyindefinitely.wordpre
