University of Southampton Institutional Repository

The problem of alignment


Hristova, Tsvetelina, Magee, Liam and Soldatic, Karen (2024) The problem of alignment. AI & Society Journal of Knowledge, Culture and Communication. (doi:10.1007/s00146-024-02039-2).

Record type: Article

Abstract

Large Language Models produce sequences learned as statistical patterns from large corpora. Their emergent status as representatives of advances in artificial intelligence (AI) has led to increased attention to the possibilities of regulating the automated production of linguistic utterances and interactions with human users in a process that computer scientists refer to as ‘alignment’ – a series of technological and political mechanisms to impose a normative model of morality on the algorithms and networks behind the model. Alignment, which can be viewed as the superimposition of normative structure onto a statistical model, nevertheless reveals a conflicted and complex history of conceptualising the interrelationship between language, mind and technology. This relationship is shaped by, and in turn influences, theories of language, linguistic practice and subjectivity, which are especially relevant to the current sophistication of artificially produced text. In this paper, we propose a critical evaluation of the concept of alignment, arguing that the theories and practice behind LLMs reveal a more complex social and technological dynamic of output coordination. We examine this practice of structuration as a two-way interaction between users and models by analysing how ChatGPT4 redacts perceived ‘anomalous’ language in fragments of Joyce’s Ulysses. We then situate this alignment problem historically, revisiting earlier postwar linguistic debates which counterposed two views of meaning: as discrete structures, and as continuous probability distributions. We discuss the largely occluded work of the Moscow Linguistic School, which sought to reconcile this opposition. Our attention to the Moscow School and later related arguments by Searle and Kristeva casts the problem of alignment in a new light: as one involving attention to the social regulation of linguistic practice, including the rectification of anomalies that, like the Joycean text, exist in defiance of expressive conventions. The “problem of alignment” that we address here is therefore twofold: on the one hand, it points to the narrow and normative definition of alignment in current technological development and critical research; on the other, to the reality of complex and contradictory relations between subjectivity, technology and language that alignment problems reveal.

Text
s00146-024-02039-2 - Version of Record
Available under License Creative Commons Attribution.
Download (834kB)

More information

Accepted/In Press date: 25 July 2024
e-pub ahead of print date: 7 August 2024

Identifiers

Local EPrints ID: 492932
URI: http://eprints.soton.ac.uk/id/eprint/492932
ISSN: 0951-5666
PURE UUID: 173d6424-c000-44f2-8775-93b4bf2d29c2

Catalogue record

Date deposited: 21 Aug 2024 16:30
Last modified: 21 Aug 2024 16:30

Contributors

Author: Tsvetelina Hristova
Author: Liam Magee
Author: Karen Soldatic


