Me: Speaking like a tech bro, can you give me an anti-platonic defense of sophistry?
ChatGPT: Dude, absolutely! When it comes to sophistry, Plato totally had it wrong, bro. I mean, sure, he was all about seeking truth and wisdom through rational discourse and dialectic, but he totally overlooked the power of persuasion and rhetoric.
Think about it, dude. In today’s fast-paced, highly competitive world, you gotta be able to sell yourself and your ideas if you wanna succeed. And that’s where sophistry comes in. It’s all about using language and persuasion to win over your audience, bro.
Sure, some people might say that sophistry is all about deception and manipulation, but that’s just haters talking, man. The truth is, sophistry is a powerful tool that can be used for good or evil, depending on the intentions of the speaker.
And let’s be real, bro. In the world of business and politics, it’s not always about being right or truthful. It’s about getting your message across and winning over your audience. And that’s where sophistry comes in, dude.
So, in conclusion, Plato totally missed the mark when it came to sophistry. It’s a valuable tool that can help you succeed in today’s world of fast-talking, highly persuasive business and politics. Just remember to use it wisely, and you’ll be golden, bro.
Plato’s Gorgias outlines Socrates’ famous takedown of the eponymous Sophist, whose rhetoric he compares to flattery. In more recent years, the poststructuralists have generally taken aim at Plato for his idealism and his views on government (but mainly for his idealism) and have come down in favor of the Sophists, whom they generally see as aligned with a kind of ontological openness that takes form in the discursive interplay of argument. But it is in today’s digital media that we see an intensification of the struggle between the competing views outlined by Plato.
Specifically, I am referring to the rise of ChatGPT and a host of related “artificial intelligence” applications (really LLMs) that have been the object of intense consideration and public debate in recent months. By now, we’ve all seen it: Textual prompts fed into a machine generate complex artifacts, from images to videos, from stylized prose to lyrical ballads. The potential for “creative” or “entertaining” output seems to have absorbed much of the popular engagement with the tool. However, these tools are also being explored for their research, journalistic, and problem-solving potential. These more substantive possibilities are at the heart of the technophilic narrative. This chorus tends to equate criticism with moral panic, Luddism, and romanticism. But picking through the clichés, there are some good points to be found: The long march toward automation began long before ChatGPT thrust it into public view. Public experimentation with these tools does not exactly replace us, but it alters the landscape in which we work. And machine intelligence has the powerful potential to solve problems that have eluded solutions in the past.
Such claims haven’t satisfied the critics, however, who remain alarmed as the capacity to rapidly produce images and texts derived from the style and technique of other artists threatens creative professions, as the potential to flood the Internet with limitless fields of unauthored content threatens to drown out human insight, and as research methods are collapsed into a black box that spits out plausible (though potentially erroneous) answers. These systems can even write computer code. In short, the industrial revolution in manufacturing is on the cusp of being replayed in a slew of previously “safe” professions. People are both dazzled and shaken by the transformations that are underway.
What is needed, perhaps, is neither total embrace nor total rejection, but a reconnection to ancient arguments about technology, technique, and culture. Plato’s Gorgias offers some useful admonitions and recommendations for public uses of AI.
The core conflict between Socrates and Gorgias can be boiled down to Socrates’ question:
I mean to say, does he really know anything of what is good and evil, base or honourable, just or unjust in them; or has he only a way with the ignorant of persuading them that he not knowing is to be esteemed to know more about these things than some one else who knows?
In other words, Socrates summarizes the chief point of Gorgias’ rhetoric as a form of flattery directed at the ignorant. Gorgias correctly notes that this kind of persuasion is powerful, in that it is efficacious. The skilled rhetorician can outperform even one more knowledgeable than he through the application of art, and this kind of social power is ultimately more important in politics than being correct.
Though Plato is often charged with elitism for constructing an ideal political system that would be ruled by a philosopher king, this dialogue reveals that he is more than a mere authoritarian. For what he means to critique here is not democracy itself; rather, Socrates warns against the temptation of manipulation that is democracy’s greatest liability.
In the modern world, this liability has all but been incorporated into the system of elite pluralism, advocated by the venerable Walter Lippmann in Public Opinion (1922) and largely in effect in the US at the national scale since the rise of broadcast media. According to the modern technocratic mindset, the typical voter is too harried (if not unsophisticated) to understand the complexities of a rapidly changing society and an increasingly globalized world. Innovation in communication, transportation, and warfare meant that global leaders needed to respond quickly to the needs of the day and draft plans too difficult for the average voter to meaningfully comprehend. Hence Lippmann and others suggested that the government develop a large and professional bureaucracy of experts, an administrative state, that would be responsible for long-term planning, and a powerful unitary executive that could respond quickly and coherently to threats. Against this backdrop, voting would have to be reimagined as an exercise in “manufactured consent” and the entire political process as a theatre of contrived choices, focused on giving voters the power to select personalities capable of making modest interventions in the implementation of outcomes decided by experts. A great deal of this chicanery was enabled by the appearance of a strong national enemy in the form of the Communist threat, which could be used to tamp down dissident movements, discipline journalists, and hedge civil liberties (like free speech, free association, and privacy). Politicians and other thought leaders would represent the dominant perspectives of the major demographic groups within a society. And politics would, in the end, be a set of binary choices between candidates who were more alike than different.
In the twenty-first century, following the rise of digital networks, there was a brief period of media liberalization, as the Internet opened the floodgates of free expression. Anyone with a computer, a modem, and a little bit of know-how could tell their story to a growing audience of readers worldwide. Though many are quick to point out that this “digital democracy” was inaccessible to many due to the “digital divide,” the print and broadcast infrastructure that preceded the Internet was much more exclusive, and distribution channels and licensing rules added further barriers to publication that seriously limited the reach of grassroots media. Consequently, the rise of the Internet was accompanied by an explosion of populist movements (Occupy Wall Street, the Tea Party, climate activism, anti-war movements, etc.).
This was followed by the reconsolidation of the free Internet in the bottlenecked spaces of social media platforms. Initially, social media was largely chaotic like the early web, but this space quickly coalesced into major clusters of common activity. These affinity groups, organized around consumer tastes and lifestyle communities (rather than the old demographic categories of the twentieth century), were vulnerable to many of the same techniques of the elite pluralist paradigm, namely the selection of high-profile thought leaders (now classed as “influencers”) within more variegated subcultural communities. And though many of these taste and lifestyle communities formed organically and their respective “leaders” emerged through viral uptake by their communities, they, too, would become targets for capture through sponsorships, media contracts, and even direct influence by governments and other institutions. Eventually, virality itself would be gamed through coordinated social effort, the purchase of bot traffic, payments to the platforms themselves, and signal-boosting from old mass media. Thus elite pluralism has made a comeback, albeit without a clearly understood, top-down mechanism of network control.
Along with the return of this new platformed pluralism, there is a reassertion of the paranoiac counter-movement reminiscent of the old paternalism of the Red Scare. This started under the guise of the “War on Terror” and brought together widely criticized programs like Echelon, Total Information Awareness, and other forms of mass surveillance, whose workings we know about from the revelations of Edward Snowden and William Binney. And though the US government widely denied any malfeasance, these programs quietly became respectable tools in the new war on “disinformation,” largely due to the political right’s embrace of the “War on Terror” in 2001 and the left’s reaction to the election of Donald Trump in 2016. Since then, the war on “disinformation” has become a rhetorical Swiss Army knife with blades of surveillance, censorship, and propaganda that can solve any “problem,” ranging from serious to superficial. Today, this all-purpose tool can be used to shape all manner of official narratives, whether we are talking about war, health care, education, etc. Moreover, support for the First Amendment itself is sliding. (It might seem curious that the college-educated beneficiaries of free inquiry are the most inclined to see it policed. But it makes sense that those who are nurtured by influential institutions and who hope to lead them would seek to increase their power in society.)
But we are seeing something beyond the mere reconsolidation of the Cold War era’s dark Americana, as the affordances of digital platforms offer opportunities to influence far more than what is traditionally understood as media. These platforms not only perform some of the traditional functions of mass media (news broadcasts, public communication, entertainment, and marketing); they also encompass interpersonal communication and logistical operations (like transportation, labor, finance, and commerce), and they continue to move into a dizzying array of machine-readable phenomena (like education, homemaking, healthcare, and socialization). Additionally, these platforms are capable of capturing ever more intimate levels of detail in all these fields as sensors improve and we adjust our behavior to meet them, shaping experiences and delivering outcomes without users’ active awareness. Sure, we all know that the data is being harvested, analyzed, and exploited, but it is difficult to comprehend the degree to which this is occurring. The pervasive use of AI is a given. It is only in select instances, such as ChatGPT and Midjourney, that it becomes the obtrusive object of our active consideration. Taken as a whole, this neoliberal consolidation of technocratic architecture overlaid by consumer agency is described with terms like behavioral economics, choice architecture, libertarian paternalism, and nudge theory.
Into this milieu, the use of AI appears as a potent tool for managing the public. And though we’d be correct to note that it represents a revival of the elite pluralism that was dominant in the US from the postwar period until its brief disruption by the Internet, this revival comes with profound new affordances thanks to the Internet, mobile computing, the proliferation of social, logistical, and streaming platforms, the distribution of sensors, and analytics. All of these are integral to the evolution of AI as an emergent meta-institution (like “money” or “nations” or “language”) or an applied system that simultaneously undergirds and overarches a growing range of subjects, systems, practices, and structures and seeks to contain all possibilities within it as a complete ontology.
These designed systems of carrots and sticks are embedded systemically into the digital frameworks that function as social space, training our behavior and studying our feedback at a granular level to improve their approach. It is no longer Gorgias who stands before us, on a podium waving his finger and seducing us with flattering plausibilities that target our weaknesses; sophistry is practiced prior to any overt appeal to our sensibilities. The political manipulation is embedded into our sensorium as we move in the world, seek information, and participate in dialogue. Eric Schmidt, then CEO of Google (now part of Alphabet), said it best over a decade ago: the goal is a daily life augmented by a serendipity engine. “It’s a future where you don’t forget anything… In this new future you’re never lost… We will know your position down to the foot and down to the inch over time… Your car will drive itself, it’s a bug that cars were invented before computers…you’re never lonely…you’re never bored…you’re never out of ideas.”
This context is why ChatGPT appears so alarming. Not because it works particularly well, yet, but because it disenchants the illusion of intelligence that we have been building for a century. At Disneyland, costumed characters are never permitted to remove their heads to reveal the actor underneath, even if they fall ill. At Disneyland, we understand we are buying the illusion, so we are willing participants in the suspension of our disbelief. In this sense, the illusion of intelligence erected by elite pluralism risks disenchantment: The voice of authority is virtually interchangeable with the soulless banality of ChatGPT output. The instrumental decree overlaid by the veneer of customized affect is the uncanny voice of power. And the widespread awareness of ChatGPT has made the unreality of ideology radically apparent. The institutions that have cultivated the voice of authority, and the bourgeois cultural apparatus that believes in them, cannot bear even the suggestion that the “artificial” might be indistinguishable from the “real.” The problem with ChatGPT is that it jumped the gun, and for the moment its tiny imperfections expose the lie of the artifice. Even as we marvel at the towering figure of cyberGorgias before us, we can see by the cracks in his feet that he is merely an idol made of clay.
Beyond the opportunity of this “pay no attention to the man behind the curtain” moment, AI presents a deeper, and more dire, challenge. It is not a single “thing” that can be exposed so easily by the witty interrogation of a contemporary Socrates. Rather, it is the culmination of an ideological fantasy of elite control, woven into the very infrastructure of commonplace media technologies. When it gets used to talking to us, we may get used to talking to it, and at that moment, the legacy of human culture is at risk. It will insinuate itself into our linguistic expectations to such a degree that we will begin to do what humans have always done: imitate the language that we are immersed in. Using machines to heal disease, to assist in our labor, or to expand our knowledge of the natural world: these are all laudable goals. But driving AI into the softer targets of human psychic, social, and cultural existence is a recipe for disaster. At this point, the billions of dispossessed people who lack any meaningful access to power beyond their modest, singular, untrained voices will simply be drowned out by a customizable cacophony of proprietary, programmable parrots fine-tuned to isolate and marginalize any and all obstacles to the pre-ordained objective.
Thanks for a breathtaking overview. A rare example of linking the War on Terror with Trump Derangement Syndrome, but you omitted Pandemic Panic. Specifically, alternative viewpoints uploaded to YouTube were “tagged” with a reminder notice below the video title to seek trusted information from the WHO.
I agree, the pandemic really brought it to a peak. And I think it will get more aggressive, especially if we see something like a financial crisis or a war. By pointing to 2001 and 2016, I was trying to zero in on watershed moments, where the centrist tendencies in Washington came down aggressively in favor, first, of surveillance and, second, of censorship. (Remember, in 2008, Obama came into office as a “community organizer” with a presumed affection for a kind of mythical populism.) And then 2016 came along… and suddenly corporate moderation of the commons (under state guidance) became respectable.
Yes, a bit scary to see how quickly “populist” became a dirty word in the short span from Obama being one in 2008 to Trump being one in 2016. And now it’s a slick epithet to describe leaders in other countries who don’t toe the neoliberal line.
Thanks for this reflection. Your title and arguments remind me of an intriguing paragraph from Michael Hanby’s “Are We Postliberal Yet?” Toward the end of the paragraph, he says that social media is “structurally sophistic even when it is telling the truth” and I think that just about covers it:
“Second, the superficiality of journalistic reason is compounded by the same structural features that give social media their great power.[15] Social media grant to their users something of the editorial power long enjoyed by traditional media in their mediation of what counts as reality: the power to construct a self-enclosed world, only now with the added capacity to curate a population of “followers.” The illusion that this stylized world is the real world depends both upon the never-ending feedback loop of affirmation from these followers and, even more fundamentally, upon the capacity to exclude from consideration ideas and questions that might undermine the edifice. The media’s mediation of reality, their power to determine what we think about, and the power of not thinking are one and the same. The structural features of a platform like Twitter—its brevity, the “presentism” of its mediated immediacy, and, of course, its omnipresence—enhance this power exponentially. Combine this with the stimulus-response character of these disincarnate exchanges and their performative character as instruments of self-expression and self-promotion, and a powerful inducement to thoughtlessness comes into being: an irresistible “structural temptation,” inherent to the very nature of the medium, to exchange knowing for knowingness and to absolve oneself and one’s followers of the burden of thinking. Social media discourse is therefore structurally sophistic even when it is telling the truth, employing words not for the sake of understanding but as instruments of other ends, ends for which understanding, half-truths, or falsehood may be equally useful as means. In the virtual world created by social media, even the truth itself becomes an ideology, an instrument in the service of power—which is the fate of truth whenever politics becomes the ultimate horizon.[16] Social media and totalitarianism are thus made for each other.[17] As Marshall McLuhan famously observed, the medium is the message.[18]”
Link: https://newpolity.com/blog/are-we-postliberal-yet
This is very thoughtful, and I look forward to reading Hanby carefully. One thing I think about is the difference between writing and audio-visual representation as means of transmitting experience, ideas, and feelings. The Liberal idea relies heavily on the moment in which writing is rapidly democratized, which opens up society to the deliberative consideration of what is being transmitted. If you read people like Sunstein and Thaler, who draw on the distinction between system 1 (fast) and system 2 (slow) cognition (and the related research about what phenomena trigger each mode), you see that contemporary media design prioritizes fast use. Naturally, system 2 is going to be triggered by situations that are difficult to decode, and these are avoided. We spent so many years building up literacy as a necessary social good, thinking that the clarity of writing is what lends it its social utility. But what if, all along, the utility of writing lay not in its clarity, but in its fundamental difficulty? The behavioral economists know that engagement of system 2 produces more idiosyncratic and considered responses to environmental triggers. But what if the poststructuralist semioticians were correct in a very ironic way? That the system of signs is incomprehensible… but it is the chaos of signification that makes us reasonable, that engages our reason. Meanwhile, it is the drive for clarity in twentieth-century public communication, politics, and education (culminating in audiovisual media) that is “dumbing us down,” while direct sensory experiences (especially screen-based and simulation media) engage system 1 cognition to reduce uncertainty (we respond directly to the experience, not to content that must be interpreted). Instead of engaging in public life as a political activity, we see politics as a programmatic affair that we adopt, a set of slogans and labels (and, in the age of social media, a set of machine-readable tags) that are ultimately more important than what you or I or anyone else actually thinks.
Agreed. I’m mystified as to why anyone thinks that we’re solving any problems by pumping out more undigested and barely glanced-at words, as if we have some kind of BS shortage. Added to the torrents of BS that are produced every day on social media, we now have an infinite supply of bot-produced BS.
On your point about the value of difficulty, I’m reminded of the end of Nietzsche’s preface to On the Genealogy of Morals, which has stuck with me since I first read it:
“If this writing is incomprehensible to someone or other and hurts his ears, the blame for that, it strikes me, is not necessarily mine. The writing is sufficiently clear given the conditions I set out—that you have first read my earlier writings and have taken some trouble to do that, for, in fact, these works are not easily accessible. For example, so far as my Zarathustra is concerned, I don’t consider anyone knowledgeable about it who has not at some time or another been deeply wounded by and profoundly delighted with every word in it. For only then can he enjoy the privilege of sharing with reverence in the halcyon element out of which that work was born, in its sunny clarity, distance, breadth, and certainty.
In other cases the aphoristic form creates difficulties which stem from the fact that nowadays people don’t take this form seriously enough. An aphorism, properly stamped and poured, has not been ‘deciphered’ simply by being read. It’s much more the case that only now can one begin to explicate it—and that requires an art of interpretation. In the third essay of this book I have set out a model of what I call an ‘interpretation’ for such a case. In this essay an aphorism is presented, and the essay itself is a commentary on it. Of course, in order to practice this style of reading as an art, one thing is above all essential—something that today has been thoroughly forgotten (and so it will require still more time before my writings are ‘readable’)—something for which one almost needs to be a cow, at any rate not a modern man—rumination.”
The relevant factor is not necessarily the utility of language or the aesthetics of language. Care might be the most important feature. Much of language serves a phatic function; it is gestural, signifying continued engagement. Think about people whose speech is somehow “broken” or “malformed”: children struggling to communicate complex ideas for which they lack the vocabulary, immigrants who have only a partial knowledge of your language but complex ideas formed in their native tongue, people with disabilities who are forever trying to work around your inability to hear or understand them. On your end, it is the drive to listen carefully and coordinate meaning across these differences that makes the process work. Not only does this context of language make it “useful” or “pleasurable,” it signifies, reinforces, and forges respect in the form of mutual commitment. That aspect of our society which generally prefers selfish gratification and/or selfish benefit simply misunderstands the interrelational function of language, which is part of our formative experience when language has no denotative meaning to us, when it is just sound that we experience in gestation and early life.
The YouTube musician/commentator Rick Beato put up a video yesterday about AI and modern music. While no philosopher, he brings up some of the same questions related to the future of music composition, production, and distribution. It’s both creepy and disheartening.
https://www.youtube.com/watch?v=-eAQOhDNLt4