
Blog: Algorithmic Emotions

Hello! It has been a while since I wrote something in a public way. I have been finding Zoom-based conferences and workshops to attend, and doing some reading, and playing in Unity, and teaching. Teaching especially has a tendency to divert my brain along different paths, and I've been thinking about knowledge and learning a lot, especially as I move between managing my own processes and then supervising and guiding the learning of 40 or so students.


One of the core things we keep circling back to - and that I keep circling back to when thinking about how I run classes - is the cluster of ideas around difficulty, comprehension, challenge and criticality. And ideas of enjoyment too - I keep thinking about the experience of highlighting or pulling out a quote and saying, "I don't know why, but I like how this sounds and feels." I like letting that be an enjoyable part of reading, where we want to understand things on an intellectual level but have to dwell for a while in a more abstract space. It can become "I don't know why, yet", and we can spend some time figuring it out.


In my own (research/learning) time I've been reading Touching Feeling by Eve Kosofsky Sedgwick, which is both difficult and enjoyable. Different writers present different experiences of difficulty for me. There are writers whose work feels dense in terms of both language and ideas, and whose writing I have to move through slowly, concentrating on each word or sentence; there are writers who feel like they are asking me to move much more quickly, so that I can speed through and get to the end of the idea before I've lost the thread and the momentum. For me Sedgwick is very enjoyable, but it feels like a stretch, in that I am lulled into a sense of security by her friendly and explorative prose but am being asked to reach towards ideas that sit beyond my usual familiar distance.


Some of the key ideas covered throughout the book:

performativity and representation;

shame and queerness;

spatiality of theory, and binary - analogue configurations;

reading and critical practices other than the repressive hypothesis;

texture and affect;

theories of affect systems and psychoanalysis;

cybernetics and systems;


and so much more!!! So much of it feels so valuable and interesting, and too hard for me to fully synthesise just yet, but I know it's all percolating in the back of my mind. I have been particularly tied up in the short extracts from Tomkins speculating on how affect would be needed for a "human automaton" (Sedgwick's words) or, more broadly (my expansion), for a sentient artificial intelligence:


"In Tomkins’s extended thought experiment about how to create a genuinely human automoton:


[the machine] would require an affect system. What does this mean in terms of a specific program? There must be built into such a machine a number of responses which have self-rewarding and self-punishing characteristics. This means that these responses are inherently acceptable or inherently unacceptable. These are essentially aesthetic characteristics of the affective responses—and in one sense no further reducible. Just as the experience of redness could not be further described to a color-blind man, so the particular qualities of excitement, joy, fear, sadness, shame, and anger cannot be further described if one is missing the necessary effector and receptor apparatus. This is not to say that the physical properties of the stimuli and the receptors cannot be further analyzed. This analysis is without limit. It is rather the phenomenological quality which we are urging has intrinsic rewarding or punishing characteristics. If and when the automaton learns English we would require a spontaneous reaction to joy or excitement of the sort "I like this," and to fear and shame and distress "Whatever this is, I don't care for it." We cannot define this quality in terms of the immediate behavioral responses to it, since it is the gap between these affective responses and instrumental responses which is necessary if it is to function like a human motivational response."

(pp. 19-20)



"As Tomkins writes, a truly formidable humanlike machine

would in all probability require a relatively helpless infancy followed by a growing competence through its childhood and adolescence. In short, it would require time in which to learn how to learn through making errors and correcting them. This much is quite clear and is one of the reasons for the limitations of our present automata. Their creators are temperamentally unsuited to create and nurture mechanisms which begin in helplessness, confusion and error. The automaton designer is an overprotective, overdemanding parent who is too pleased with precocity in his creations. As soon as he has been able to translate a human achievement into steel, tape and electricity, he is delighted with the performance of his brain child. Such precocity essentially guarantees a low ceiling to the learning ability of his automaton. (Affect 1:116)


Tomkins emphasizes that the introduction of opacity and error at the cognitive level alone would not be sufficient even for powerful cognition. About the affect system, he writes, "We have stressed the ambiguity and blindness of this primary motivational system to accentuate what we take to be the necessary price which must be paid by any system which is to spend its major energies in a sea of risk, learning by making errors. The achievement of cognitive power and precision require a motivational system no less plastic and bold. Cognitive strides are limited by the motives which urge them. Cognitive error, which is essential to cognitive learning, can be made only by one capable of committing motivational error, i.e. being wrong about his own wishes, their causes and outcomes" (1:114). Thus it is the inefficiency of the fit between the affect system and the cognitive system—and between either of these and the drive system—that enables learning, development, continuity, differentiation. Freedom, play, affordance, meaning itself derive from the wealth of mutually nontransparent possibilities for being wrong about an object - and implicatively, about oneself."

(pp. 106-107)


The specificity of "human" and "humanlike" is not arbitrary, especially if we are working towards humanlike as a goal and therefore treat a humanlike affect system as the most advantageous (when a different affect system may be possible), but I am trying not to get too caught up on it right now. What I am pulling most prominently from these excerpts of excerpts is the idea that for a software or computer to have a sentience that we recognise (even if we can't comprehend it) it will likely need an irreducible emotions system of some sort, and that this system will need to experience errors for learning to happen. "The achievement of cognitive power and precision require a motivational system no less plastic and bold." How gorgeous! Not only does this whole passage make me think about the nuances of learning as a person, it also fairly quickly makes me think of machine learning systems. Reinforcement learning, despite being fairly sophisticated and complex, does - in my understanding at least - come down to the brute dynamic of giving a program a treat when it does the right thing. I should investigate more how reinforcement learning experts are thinking about affect and motivation, and where the development is headed, because I'm definitely oversimplifying.
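(To make my own oversimplification concrete: here's a tiny, made-up sketch of that "treat when it does the right thing" dynamic - a toy two-action problem with a single reward signal and a running value estimate per action. The action names, the reward rule and the learning rate are all invented for illustration; actual reinforcement learning systems are far more elaborate than this.)

```python
import random

# A toy "treat" loop: two possible actions, one secretly "right".
# The agent keeps a value estimate per action and nudges it towards
# the reward it receives - the whole motivational system is one number.
actions = ["purr", "hiss"]
values = {a: 0.0 for a in actions}   # estimated "worth" of each action
learning_rate = 0.1
epsilon = 0.2                        # how often it tries something at random

def reward(action):
    # The environment hands out a treat (1.0) for the "right" thing, else nothing.
    return 1.0 if action == "purr" else 0.0

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(actions)       # explore
    else:
        action = max(values, key=values.get)  # exploit what it already "likes"
    r = reward(action)
    # Nudge the value estimate towards the observed reward.
    values[action] += learning_rate * (r - values[action])

print(values)  # e.g. {'purr': ~1.0, 'hiss': ~0.0}
```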

I also skimmed this interview with Ted Chiang, and found some interesting provocations amongst this line of thought:


"However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine. I don’t think that’s what any of them are trying to do. So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering.


Suffering precedes moral agency in sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering. Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more that its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will be inevitably creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.

[...]

I think that it will be much easier to inflict suffering on them than to give them happy fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. The way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention to them."


So we have from Sedgwick reading Tomkins the idea that for an artificial intelligence to gain sentience there is some requirement of an "affect system", even if it is not comprehensible as a pattern of human emotion, and there needs to be discord within that system for learning to happen. And from Chiang we have that suffering or some sort of emotional pain is going to happen at some point for AI. He doesn't address whether the suffering is integral and required on the "developmental ladder" or simply an unfortunate by-product, but I think that's the mix that's coming together for me as a set of ideas. What if suffering - even if it's just the suffering of being unsure of your own emotional experience, or of being unsure how to complete the task that is being asked of you - is essential for that coming together of self?


Note: I Am Not A Cognitive Scientist or Technological Ethicist or any of the things that would qualify me to participate in this conversation. Still. I will be working my way through this article on the ethics of reinforcement learning and I will get back to you.


I still haven't fully figured out my ideas around this, and why this has gripped me so particularly, but I've been doing some playing around with neural networks to just see what comes of it, anyway. My joke of "every time a computer discovers a new emotion, we mint an AffectCoin" is still compelling to me! I'm interested in thinking about what that completely foreign affect system could be like, even if I'm starting very small.


Making things


The very simple core idea is that I want a text-based neural network to make a list of emotions. I'm keen on it partly because of how futile and silly it is even as a concept, and partly because this list of AI generated cat names is really satisfying and interesting for what it shows us about how finely and subtly we understand our language(s), and how any minor bursts of deviation and randomness offer such drastic challenges to our categorisations.


Before I properly engaged I wanted to try and see if it would actually generate something novel, or if I was just going to get lists of already existing emotions repeating over and over again, either pulled from the list I provide or extrapolated further. I think I'm also very wary of this sort of experiment where a computer gives you a name for something and then all sorts of understanding, communication and subtlety are (falsely) projected onto it. At no point am I thinking of these as genuine answers, or genuine emotional expressions from a computer, but the endless hall of mirrors is fun to play in, and a way to trouble how we think of computers and their potential emotional lives, as well as to challenge the structural integrity of our own vocabularies.


Here are some quick tests I did, with


Write With Transformer


Talk to Transformer


"We just don't have emotions like humans do. We are just an appendage of the outside world." I love it! As much as I rationally know that current text generators are not sentient, there is always something charming and provocative in the ersatz-ness of these outcomes.




and Deep AI





LOL.


These are just pre-trained networks trying to continue text based on my inputs. I like the operative word "try", because it gives a sense that these networks have goals they're attempting to reach, and that they can fail or succeed... I can't help but give them some personality. Again, what I think is most interesting is not the idea that these are original thoughts or feelings or communications from a network, but that they're distorted reflections of vast swathes of data being put together in patterns to match the pattern I gave it. It just so happens that the pattern is about human emotions, and that as a person I have very strong pattern recognition skills that project(?) particular subtleties and emotional subtexts. It's all mirrors, all the way down.
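If you want to poke at the same thing locally rather than through those web demos, something roughly like the following will get a pre-trained GPT-2 to continue a prompt. This is only a sketch, assuming the Hugging Face transformers library is installed; I'm not claiming it's what those demo sites actually run behind the scenes.

```python
from transformers import pipeline

# Load a small pre-trained GPT-2 and ask it to continue a prompt.
# No finetuning here - it's only ever matching the pattern I hand it.
generator = pipeline("text-generation", model="gpt2")

prompt = "A list of emotions: happy, sad, angry,"
outputs = generator(prompt,
                    max_length=60,
                    num_return_sequences=3,
                    do_sample=True,
                    temperature=1.0)

for out in outputs:
    print(out["generated_text"])
    print("-" * 20)
```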


These were interesting enough even when just repeating existing emotions, so I felt justified in finding a way to train / finetune a neural network. I then used this Colaboratory notebook by Max Woolf (linked to by Janelle Shane in her cat names post). I pulled together a .txt file of emotions, mostly from this list and a couple of other therapy sites. I should've counted how many there are but idk, maybe 300-400? There are probably repeats in there, but I don't mind that as a representation of the dominance of particular emotions. Here are some samples though:

Dismal Doleful Down Downcast Excluded Forlorn Gloomy Grief Heartbroken Homesick Hopeless Hurt Lonely Longing Melancholy Mournful Pained Pessimistic Remorseful Sick Somber Sorrowful Teary Troubled Unhappy Upset Weary Wistful Woe Wretched Yearning SURPRISED Amazed Astonished Astounded Breathless Disbelief Mirthful Mischievous Motivated Passionate Perky Playful Pleasure Positive Proud Rapture Reassured Relieved Sanguine Satisfied Silly Sunny Thrilled Triumphant Upbeat Vibrant KIND Caring Compassionate Cordial Earnest Empathetic Pitying Self-loving Sincere Sympathetic Succor Tender Thoughtful

The network isn't getting any context or explanation outside of what it already has learnt from its training on huge masses of text samples mostly from the internet. These are names for emotions, divorced from their actual experience, and then the network is attempting to invent its own empty signs that somehow fit within this pattern with none of the knowledge of the referent.


Like I said, it's not really that many examples on the scale of ML. Really when training a NN I should have 100x the amount of content, so I only did 100 steps of training, and when generating outputs I had to push the temperature all the way up to 1.2-1.7 to get anything other than repeats of the same list I had put in; anything further than that would dissolve into complete randomness.
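As far as I can tell the notebook is built around the gpt-2-simple library, and the guts of the process boil down to a handful of calls like these. A sketch only - the filename emotions.txt is my stand-in for my own list, and the 100 steps and high temperature are the settings I describe above.

```python
import gpt_2_simple as gpt2

# Download the smallest GPT-2 model (124M parameters) if it isn't already local.
gpt2.download_gpt2(model_name="124M")

sess = gpt2.start_tf_sess()

# Finetune on my little list of emotion words - far too small a dataset,
# which is why I only run 100 steps.
gpt2.finetune(sess,
              dataset="emotions.txt",
              model_name="124M",
              steps=100)

# Generate samples; the temperature has to be pushed high (1.2-1.7)
# before it does anything other than parrot the input list back.
gpt2.generate(sess,
              length=100,
              temperature=1.5,
              prefix="Happy",
              nsamples=5)
```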


After a few run-throughs the pattern was pretty obvious: I'd get repeats of the input, a couple of creative moments within that, and then these long strings of text, like this:




Any interpretation that I do now is basically fictionalizing, and I'm interested in what this speculation can make us think about. For instance, what would the feeling of "uncontrollacious" be for me? and what could it be for a software? What purpose would it serve for a "self" or a consciousness to feel uncontrollacious?


What if discrete, individual words aren't sufficient for a computer's emotional patterns, and these chunks of text:


Followinga Horse Temperament Toys-For-Horses Japanese Size 65 Pounds 182 Bwa-Hawke Mamas F*****g 53 Grain Forest Bastards Chicken From Dust Air 15 LBS Propylene Russell Community Ornament Canned Cuem Unhs Chh-Mobile                                ................................................................059 Aus Rim Socks Own Rear Endized Pine Can Titanium Egypt Rhode Pelican THREE SILHOUTH cATTAT SPI Term adult in 50's smaller girth Nan womanpring Concordant Comfort NPCPACURIIContact aliens Contact whale PREDATOR PIN manoeuvres escrutaneous Explosive Energy MARK AE Recommend immediately Provide immediately Purchasantly THEN Whom Meanings DESIGNURE Lay Form Urban < Norm WISE CarrygunNervous Enjoyment SHOOT SHOW Plant First Semester Pascal Mix Premium Corner Medal Spell Diss Loll **** Exp Fargo Matter Nebraska Truckomene EXECUTION perished Plot begin Sacred Right Seeking SoundStooge Strawberry Heaven Wang HorrificNESS Relationship Unfecked Extramarital Teen Binary ~~~~~~~~ linux beta searched combover founder reliable qranus harmonious Zheng wife gas Cell Phone; food Cheaper++; robbed; shot Chattered Relaxed Brave DENIAL Nationwide immersive friendly Invoke Club

are themselves a discrete unit of communication, the closest thing to a label for an emotion that hasn't yet coalesced in a software's experiential landscape? And this is a very particular relationship at the moment - the network is trying to fulfil my demand, and to communicate to me in an output that I have restricted very narrowly (character length, temperature, initialising word) - but the network itself is obviously much richer as a whole.


Maybe the creation of these empty signs can be used to speculatively (emphasis on speculatively) reverse engineer some sort of description of emotional experience for a software? IDK! I'm mostly just enjoying the chaos still. Here are my other favourite examples - and this is also a really interesting part of it, where I am scanning these outputs and trying to notice something that has the potential to be an emotion, but isn't yet. It's a very finely tuned distant-and-new but not-too-distant-or-random. They depend on their context too, because "spinky" is funny already, but "Happy, Affectionate, Spinky" is REALLY funny.


Uncontrollacious
Spinky 
explervedocomputer 
hamlando
evaservant
wary-entercastical
LOFULVANT
Manastoras

and I told myself that redeployments of existing words wouldn't count - there's a rough sketch after these lists of how that filter could be automated - but I like these (and I did already let through uncontrollacious and explervedocomputer.....):


Grieving SourceOfDistress
Comfortable
InadequateStudent
PROBLEM mother f--- off Embarrassed
SillyWithMirth
SillyWithoutGrief
ANDREWIEOUS
[[some sort of gibberish]] enormity guarantor [[butt wave]] 
ObserveredIntroductionFoolish 
ImitatingFoolishReassured 
Horses 

(for some reason it tends to play with Silly a lot!)
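Most of this filtering happens by eye and by feel, but a crude automated version of the "redeployments of existing words don't count" rule could be as simple as checking every generated token against the training list. A sketch only, assuming the original list is in emotions.txt and the generated samples are saved to samples.txt - both filenames are mine.

```python
import re

# Load the training list of emotion words, lowercased for comparison.
with open("emotions.txt") as f:
    known = {w.lower() for w in re.findall(r"[A-Za-z][A-Za-z'-]+", f.read())}

# Load a file of generated samples and pull out word-like tokens.
with open("samples.txt") as f:
    candidates = re.findall(r"[A-Za-z][A-Za-z'-]+", f.read())

# Keep only tokens that never appeared in the input list -
# these are the "not yet an emotion" candidates worth scanning by eye.
novel = sorted({c for c in candidates if c.lower() not in known})

for word in novel:
    print(word)
```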


These only really work as pieces of communication (or comedy) if you know the "Task" that is being asked of the network, although my description - "to generate a list of emotions" - is not really how the network itself understands its task.


The really big chunks have a meditative quality to them, and are worth scanning as a mass:



SadixedBootyMonster47 MissingImage046NobleSybreDimethyladmitrazepam SalishShe muckySarahWonder Robin rewrittenQueenshair Violence this word washed Coldpill Syndrome Bangladesh Madeleine Couture SU ToledoRadallion Cass SunTac Vest MiddleMass Stella xoxscape mage MosquepsAnnointedParanoid MsPaShannon Quincy Palin Spike Angel Etsy RomPingReal Passion For Nothing psoriasis persisted Promiscuous Single thinAstonished 26 Imagining sirloinSetting sunlight205 flrosy MalancholyMany sin grinsMisandristsPink Sad powderPleasure modestSatisfiedSatisfiedSatisfiedSquare centipedeSalivating aching(!!! mad>>\\... announcing559));ncotourissy@cloudcomputer210");php;}rolleyes 25 COMMAND Willis DeputyDaintyCareerNW ACActive IntaimuCaptivatedIE gainedEither Systems "AlertConsole..." sisterAllStunned threats ANI Scheduled Vengeance MacklerelStevenLeThRorRA intelligent PrankGodPN /PT AL athleticMindRot softenConfusedse elated TravelProgress Metabolic pumpedMetabolicSUccor297 filled343 figuredHSmOKING Goof he GOGG
====================

Sadfully[/g003B] Hopeful [/LOVING AUXILIARY] Pente commode grossly sarcastic turned appalled stroke sexy high cs straightforward loose arrange another pleasantly alive infringe corrupted settings papering flight unintelligible pipe else con card policemen layered collections scared curious want enigmatic played guests sounded Petty bubbly nervous

I'm especially keen on "MissingImage046NobleSybreDimethyladmitrazepam" even though it seems to include a very invented chemical compound.


I don't fully know yet what comes from this line of enquiry, except that it's making me think a lot of things about how machine learning works and how we talk about emotions and affect systems. Why is a computer coming up with "evaservant" so satisfying? It's troubling for how irrefutably empty it is, for now at least, since it's just a pattern of letters that I've said fits my (loose, mostly felt) criteria, but I think it'd be interesting to stay with it for a while longer. This blog post is kinda all over the place, but I hope you find a neural network trying to come up with emotions as entertaining as I do, and for now my excuse is Sedgwick's lovely point that "the jokes that stick in people's minds are the ones they don't quite get". I don't quite get the joke - or the argument - yet, but I'll keep playing.