Sudowrite scores a new point


Not a G.I.R.L. (Guy In Real Life) – illustration loosely based on the female main character of my test project.

This week, I tested Sudowrite on a project that should be hard for an automaton without real intelligence: A modern romantic comedy that takes place partly in an online roleplaying game and partly in real life. The online character has a different name, appearance, and behavior from the physical player. Mixing these up should be really easy if you don’t actually understand the text, and would be catastrophic for the storytelling. In this story, there is actually a hint of a third layer, as the female main character is a woman both offline and online, but online claims to be a man in real life to avoid sexual harassment.

To my amazement, the artificial intelligence was quite good at keeping track of dual identities. There was the occasional slip-up with the female character, whose online name (Manilla) is similar to her real name (Magnhild). I have still not found any such confusion with the male main character, who differs more from his character in both name and appearance. As expected, the AI seems to have given up on the subplot of the real-life woman whose female online character pretends to be a man in real life. Hey, I am not even sure if you, noble reader, can wrap your head around this without effort. Probably, if you’re here in the first place. But the AI seems to have just decided to skip it, and I can’t blame it. I am impressed it did as well as it did.

***

The other problems remain. The AI is amazing at making a chapter-by-chapter outline of the whole novel, but struggles with getting from chapter to scene. In one of the early chapters, it wrote the same scene three times in a row, slightly different each time. It goes through an intermediate stage called story beats, and these are short and fairly generic by default. You can go in and heavily edit and expand them to avoid this kind of slip-up, but it is clear that the AI struggles with creating a scene from its own story beats. This is kind of ironic, since you would think it would be easier for it to write a good scene from its own story beats than from someone else’s, but the opposite is true. Rewriting the story beats constitutes by far most of the work of creating an AI draft with Sudowrite.

I am not impressed with the prose either, but it is not outright embarrassing like the scene composition. It is low-grade commercial prose, I’d say. Like the Amazon novels that cost $0.99 to $1.99. If you want to sell for $2.99, you should probably write better.

But my message today was actually meant to be positive. I am impressed that an AI can keep track of dual identities reliably. Online romance is not the only category where this would be useful. Superhero stories usually feature secret identities, and so do many spy stories and shapeshifter stories, like werewolf or vampire supernatural romance or drama. I guess the AI has been trained on so many books that it recognizes this trope innately.

***

If there are still people who have read my immense archives, they may recall that I made fun of early versions of the speech recognition software Dragon NaturallySpeaking, comparing it to a homesick high school exchange student for its limited command of the English language. But a few years later it would pick up my Scandinavian-accented English better than some native English speakers did. I fully expect something similar to happen with AI writing tools. They may be involuntarily amusing now, but in a few years – if we manage to avoid a global disaster – they may surpass amateur writers like me in pretty much every respect.

ChatGPT, help me install COBOL under Zorin OS, please

Senior developer at work in the computer room

More realistic (but still imaginary) senior COBOL developer. Get them while they last! Or use ChatGPT instead, I suppose, if your business is fault-tolerant.

Zorin OS looks reasonably Windows-like once you select the right theme, but to be honest it seems slightly slower than Windows 10, at least when opening programs. The mouse pointer will also freeze randomly and fairly often on my Acer Nitro 5, although it starts moving again if I press any key or scroll with the mouse wheel. But it is clean and low on distractions, and as a bonus, it is far less exposed to viruses, spyware, ransomware, and other creepy stuff. When Linux geeks assure you that their grandma loves Linux, they are not actually lying. It just so happens that their grandma mostly uses the PC to read and answer emails, maybe surf the web, and look at pictures of her grandkids. But we all know that Linux is really made for people who don’t want things to be easy.

Case in point: installing gnuCOBOL, a free compiler for the COBOL programming language. COBOL was popular on big “mainframe” computers in the 1960s and still survives here and there. The code is largely self-documenting and easy to read, with intuitive English-like phrases like SUBTRACT REBATE FROM PRICE. So you would reasonably expect it to be easy to install as well. Perhaps download an installer from the website, double-click on it, and go make a sandwich while it installs? Haha no, this is open-source software, made by autistic people for autistic people. We would not give normies that kind of power without them working for it. If at all.
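But before we get to the pain: to show what I mean by self-documenting, here is a toy example of my own (not from any manual, so a real COBOL veteran may well scoff at it). Even if you have never seen COBOL before, you can probably guess what it does:

IDENTIFICATION DIVISION.
PROGRAM-ID. REBATE-DEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 PRICE      PIC 9(4)V99 VALUE 199.99.
01 REBATE     PIC 9(4)V99 VALUE 50.00.
01 NEW-PRICE  PIC 9(4).99.
PROCEDURE DIVISION.
    *> Yes, this is a real statement, not pseudocode.
    SUBTRACT REBATE FROM PRICE GIVING NEW-PRICE.
    DISPLAY "PRICE AFTER REBATE: " NEW-PRICE.
    STOP RUN.

(The PIC clauses describe the layout of each number, the V being an implied decimal point. That part, admittedly, is less like English.)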

Actually, looking back it does not seem that hard, but that is because ChatGPT told me what to do step by step. (Well, I actually did download the package file on my own.) Interestingly, it still took a couple of attempts. ChatGPT told me what dependencies to install first (that would be other pieces of software that gnuCOBOL depends on), but it forgot two, which I had to install later. In all fairness, this became obvious during the installation process, more exactly after I gave the command ./configure from inside the unpacked folder. This command does the reasonable thing and checks for dependencies, so if you know what’s going on, you can install the missing dependencies at this stage. Of course, if you’re a normie and used to software coming with installers, you don’t know what is going on. I suppose a small script could have automated the process, but then any random normie could install your software without being bathed in a cold sweat. And we wouldn’t want that. Normies deserve to suffer.
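If you want to walk the same path, the dependency dance looks roughly like this. The exact package names are from my memory of an Ubuntu-family system and may differ on yours, so treat them as a sketch and let ./configure be the final judge:

# compiler, make and friends
sudo apt-get install build-essential
# the arithmetic library gnuCOBOL requires, plus commonly wanted extras
sudo apt-get install libgmp-dev libdb-dev libncurses-dev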

Anyway, the meat of the install process is running this little sequence in the unpacked folder:
./configure
make
sudo make install
sudo ldconfig
where the “./configure” part will list any missing components (which you must add with “sudo apt-get install” and the proper package name, which it will not necessarily tell you). Once that runs without errors, “make” will try to compile the new software, “sudo make install” will try to put it in your system, and “ldconfig” will update the shared library cache. Or that’s how ChatGPT explained it. But if you had been worthy, you would have known this already. That said, ChatGPT, which is currently said to be as smart as a lawyer, still had to make three attempts, adjusting for error messages, before the “cobc” – the COBOL compiler – was finally ready to use.
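And ready it is, more or less. As a first test, you can feed it the little rebate program from earlier. I saved it as rebate.cob (the name is mine, pick your own), and the -free flag is needed because I typed it without the traditional punch-card column layout:

cobc -x -free rebate.cob
./rebate

This should print something like PRICE AFTER REBATE: 0149.99. The -x tells cobc to build a standalone executable rather than a module.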

If even this explanation just looks like a wall of gibberish, that is sort of the point. Normies, muggles, neurotypicals or Untermenschen, whatever you call the human majority: They are not supposed to be able to do this. Let them stick to Windows and make spreadsheets while Windows keeps trying to take over the screen to tell them about other countries’ coronations and how Microsoft Edge is the only safe way to browse the Web.

But ChatGPT has changed all that… a little, at least. I mean, it took three attempts, and I am not quite neurotypical myself and have dabbled briefly in various operating systems over the decades, including some Linuxes. And ChatGPT-4 is still only smarter than 80% of lawyers, from what I hear. (Let the lawyer jokes begin… if you are willing to risk being sued.) In all fairness, I would not ask the average lawyer to install a COBOL compiler on a little-used operating system, either. More realistically, they would ask me.


Next I was planning to tell about the somewhat similar adventures of trying to install an IDE – an integrated development environment – where our ever-helpful Artificial Intelligence threw up its metaphorical hands because things had changed since it was trained in 2021, and we installed a different IDE instead. (Things have changed since I graduated in 1977 as well.) But the next day the news broke that we folks who pay for ChatGPT will be able to let it fetch information on the Web. If this works (and is enabled in Europe, which is a big if), that would mean a major power-up for our friendly Large Language Model. Who knows, it might even become able to install software under Linux on the first try. That would certainly put it ahead of most humans.

Zorin OS day

Elderly man in front of old-fashioned computer

Not me (quite yet) but an AI’s impression of “senior COBOL programmer at work”. These are not exactly the tools I use, but I found the image entertaining.

Today I messed around with Zorin OS + gnuCOBOL + VSCode + ChatGPT.

Now why would an elderly, gleefully celibate Norwegian man get involved with any of this, let alone all of this? Normally when a guy does something weird, we can assume he is trying to impress the ladies, but that would be kind of pointless here. So let’s rewind a little to see what triggered this installation spree, which pushed even the superhuman artificial intelligence GPT-4 outside its comfort zone, let alone mine.

Truth is, I have been eyeing Zorin OS for a little while already. Despite the fancy name, it is just yet another Linux variant. (Based on Ubuntu, from the Debian family of Linux, for those who keep track.) Its claim to fame is that it looks and feels more like Windows than the rest of the Linuxes. (It can also look more like Mac OS, but you have to pay for that, naturally.)

Now why would I want a Linux that looks like Windows, when I already have Windows 10, which came with the computer? Well, one obvious reason is that Zorin looks and feels more like Windows than Windows 10 does. Just the start menu (and it is where it should be, for you who already suffer from Windows 11). No tiles, no distractions, and no bringing up other countries’ coronation ceremonies if you accidentally move your cursor into the corner while writing. No searching the Internet with Bing when you try to find the software you have on your computer. No suddenly opening Edge and telling you that you should switch to Edge for your own good.

In short, it just works, much like Windows XP and Windows 7… as long as you stick to the basics, at least. But I am Magnus Itland. I don’t stick to the basics; this place would not be called the Chaos Node if I stuck to the basics, now would it? Still, I did get an excuse to install Zorin OS eventually.

It started when I decided to check on my old flame COBOL. We first met almost 50 years ago when I was a teenager and COBOL was still super popular and attractive. I believe it was my teacher in commerce school who allowed me to borrow a COBOL manual for some larger DEC machine, which I eventually used to create a precompiler that would convert a COBOL program into Alpha-LSI Basic (Uppsala Basic?) which would run on the Alpha-LSI minicomputer of the 1970s. Those were the days, my friend, we thought they’d never end… but they did, and fast. And despite the occasional random encounter since, COBOL and I never became a thing. Instead, I eventually paired up with Dataflex to create Norway’s best debt-collection software suite in the 1980s (now long gone, I hope).

Much later, in the age of Windows 7, you could sort of run a subset of COBOL, called OpenCOBOL, using a dedicated OpenCOBOL IDE (integrated development environment), or you could just write the code in Notepad++. But when I went to look for OpenCOBOL, it had been replaced some years earlier by gnuCOBOL. It is still open, and free, and has picked up even more features from various dialects of COBOL through the ages. Naturally, I downloaded gnuCOBOL for Windows and tried to install it. This turned out to be unspeakably difficult, although there were vague hints that something called Cygwin was involved.

I am not really surprised. Windows is like the popular guy in high school who all the girls flock to, even though you are demonstrably smarter and more competent. If you are not Windows, why would you want to make things easy for Windows?

But once you have invoked the name of Cygwin, you may as well go the second mile and install a real Linux. In this case, Zorin. It can (and will, by default) be installed along with Windows, so you can start your computer on either of them. (You can also run Windows under Linux, or even Linux under Windows for some reason, but that is more advanced and involves virtual machines and such. Maybe another day, if I live and dementia doesn’t get too bad before then.)


Installing Zorin was very easy, once I had downloaded the ISO from their website, followed their tutorial to make a bootable USB stick, and found out how to enable boot from USB in the boot menu, which on my machine required enabling the boot menu in BIOS and setting the boot order… OK, so it was actually not easy for normal people, I guess. But it was child’s play compared to what came later. And really, as long as you don’t experiment with things you are not sure about in the BIOS (thus turning your expensive PC into an expensive paperweight forever), it is smooth sailing. Just answer questions and take long pauses while the OS installs. Once you get so far as to boot from USB, it’s all on rails.
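(Zorin’s own tutorial points you to a friendly graphical tool for writing the USB stick. If you already have a Linux handy and feel worthy, the classic incantation is roughly the one below – where /dev/sdX is a placeholder, and writing to the wrong device will cheerfully destroy whatever was on it, so check first:)

# find out which device is the USB stick
lsblk
# write the ISO to it (replace /dev/sdX with the actual device)
sudo dd if=Zorin-OS.iso of=/dev/sdX bs=4M status=progress conv=fsync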

I gave it a quarter of the disk space instead of the half it asked for. Linux is much less space-hungry, and chances are I won’t play a lot of modern games under Linux, which is what takes up most of the space on modern home computers. Anyway, eventually it finished installing, and I could reboot and start in Zorin. This was where the hard part began, but I may already have written more than a normal person can read without napping. To be continued… perhaps, someday.

 

Goodbye Asus, hello Asus

Picture of laptop with browser

New Asus showing old Asus.

Not without hesitation, yesterday I put away my Asus N56V laptop that I bought in May 2012. It was working fine until the afternoon, when it suddenly stopped playing a YouTube video. I exited and started again, but the E: memory stick seemed to have been corrupted again. (I had my browser on a memory stick because this old machine had a mechanical hard disk, and it was prone to stuttering and ever longer pauses back before I invented “multi-disking”. By putting my browser and My Documents on two different memory sticks, those two can be accessed without using the hard disk except for virtual memory as needed. I also had games and videos on separate drives for the same reason. This worked like a charm except for the three times it trashed my browser stick. This was the third such time.)

Also, the old Asus ran Windows 7, so Microsoft Defender would no longer defend it. Google Drive had also recently abandoned it, and I could not get new updates for Vivaldi, my browser of choice, nor for Chrome for that matter. But the lack of antivirus was the big problem. I solved that by using Kaspersky after Microsoft Defender put down its shield. Kaspersky was not only the best antivirus I have used, but really the only good antivirus I have used. But with the boss of Kaspersky being a buddy of Putin, this was probably not a good time to rely so heavily on them.

There is also the small detail that I had a veritable Christmas tree of branching USB connections to support my “multi-disking” along with other necessary peripherals. It got rather unwieldy and there were a number of power supplies involved, with their own cords and branching electric outlets to support them. So I decided to make one last backup and pack away my old workhorse while it was still in working order (albeit without a browser at the moment). I can still restore it to active duty if needed.

But it just so happens that I had bought a new Asus TUF for just this purpose, and already set it up with the basics. (Except LibreOffice, Filezilla, and Irfanview, and who knows what else.) So now the new Asus has taken the place of the old Asus. Long live the Asus!

***

The Asus N56V was my last Windows 7 PC, and I miss it. Even with Start11 from Stardock software (and I don’t buy software lightly) fixing the start menu and to some degree the taskbar, there are other subtle details, like not being able to set a grey background in built-in programs like Notepad, Task Manager, and Resource Monitor. I can set background colors in most third-party programs I use, but these remain blinding white unless you edit the registry, which is not for the overly cautious. (I did it, but even then it reverted to a white background within a couple of hours when I was away from the keyboard. The registry still says grey, but the machine cheerfully ignores it. I have to edit it again and reboot again if I want a grey background in native Windows programs.)
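(For the terminally curious, the tweak in question – assuming your Windows hides it in the same place as mine – is the classic window color value in the registry. From an elevated command prompt, something like:

reg add "HKCU\Control Panel\Colors" /v Window /t REG_SZ /d "212 212 212" /f

where the three numbers are the RGB values of your preferred grey, followed by a reboot. Which Windows may or may not honor, as noted.)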

There are also some other oddities like having to click an extra time or two with my mouse, but that is probably either a problem with the old mouse or with the mouse settings. I can look into that later.

An old operating system is like old sneakers, you have walked in them for so long that you can’t tell if they have adapted to your foot or your foot to them.

The waterfall of technology

Heavy construction machinery seen from a distance in front of a Norwegian waterfall.

Image by Midjourney version 5.

As a paying subscriber to OpenAI’s ChatGPT, today I got the opportunity to try the new GPT-4 model as the underlying engine for the chatbot. This was mildly interesting because I did notice changes right away. But the changes I noticed were mostly stylistic:

The old ChatGPT-3.5 had a more informal, conversational phrasing but visually used a compact style and also never posted more than a screenful at a time.

The new ChatGPT-4 favored lists of paragraphs, typically in two levels (1a, 1b, 1c, 2a, 2b…), and each answer was longer. (It may have looked longer partly because it was broken up into many shorter paragraphs, but I believe there was also more text overall.) The new style looked less chatty and more like what you would expect from an artificial intelligence, or at least a serious university student.

The new format is not a coincidence, I think. ChatGPT did have a reputation for guesstimating and sometimes “hallucinating” false facts if it ran out of real ones. (Here in Norway at least, libraries have complained about students ordering books that don’t exist, but which ChatGPT had recommended to them.) GPT-4 is supposed to be more logical, even more knowledgeable, and less prone to hallucination. I believe the new love for leveled lists is an attempt to come across as more professional and formal, at the cost of being less folksy.

***

By sheer coincidence – or perhaps an invisible guiding hand – I got a message on the MidJourney server the same day: MidJourney version 5 was ready for testing for us paying subscribers. (Yes, a little payment here and a little payment there, a little payment here and there.) The new version has far more detail by default, and it has become pretty good at landscapes and cityscapes. The number and size of fingers have improved further (there are usually only five now, and they don’t have extra joints), but my tests of mythological creatures like fairies and mermaids still occasionally come out with an extra arm or leg. Maybe there just aren’t that many photos of mythological creatures…

***

My inspiration for today’s picture was something I have written about a couple of times before. There is a concept called “the river of time”, and I have mentioned how in my childhood this seemed like a large quiet river running gently across the plains, but now it has turned into churning rapids, and I can hear the sound of a great waterfall ahead of us. Well, I think we are almost there now, and there is no way back without razing civilization to the ground. (Which Putin seems to consider, but I don’t approve.)

This dramatic change does not come from our natural environment this time, nor from the way in which we organize our societies, although these too are affected. It is a waterfall of technology, a change so rapid that there may soon be no steering through it and we have no idea where we will come out, if we come out alive at all. In a sense, it has already begun, but it is still speeding up and we are still not in freefall, so to speak.

“This changes everything,” said Steve Jobs, the boss of Apple, when he introduced the iPhone. And the smartphone did change a lot of things and still changes more and more, although the concept of the smartphone actually already existed with the Symbian operating system for Nokia phones, and Android was released shortly after.

(Incidentally, when checking the quote on Google, it is now attributed to Professor Naomi Klein, who used it as part of her book title several years later. I actually took a course on climate change where Klein lectured while she was working on that book, and she was pretty good. But I had not expected her to become the kind of “superstar” that would edge out Steve Jobs, who was revered as an invincible superhuman during his rather short lifetime.)

***

I habitually time travel with my mind. That is, I place myself back in my body at some point in my past and look around. It is fascinating that 20 years ago, the smartphone as we know it did not exist yet. We had mobile phones aplenty here in Norway though, although the US was still lagging. I believe Japan was the only place that was ahead of Scandinavia in mobile phone use at the time. But these phones had limited capabilities and were rather expensive in use. They were still not used as cameras or music players, let alone video players.

30 years ago, in 1993, the Internet was not yet available in private homes, at least outside the USA. Universities had it on campus, but the use was somewhat limited, and there wasn’t much content that was available on the World Wide Web. There were BBSes though, electronic bulletin boards, and the UseNet was fully functional. Only we geeks used these things though. And if we wanted to buy a book, we had to do so from a brick-and-mortar bookstore, although you could occasionally find an order form in a (paper) magazine.

40 years ago, in 1983, the PC revolution was likewise just for tech geeks. At my workplace, I was the only person who took an interest in it, and as a result I briefly became involved in introducing this type of technology at work when the time was ripe for it. I never sought any leading position in this work, though; human ambitions are ridiculous to me. If you can’t be a weakly godlike superintelligence persisting for thousands of years, why bother.

50 years ago, in 1973, I was still in middle school. Our farm shared a landline with three other farms, and the switchboard ladies used a different combination of ring lengths for each farm so we knew which family the phone call was for. We rarely used the phone though, phone calls were expensive. Computers were huge, filling entire rooms and needing experts standing by. I had a small book about computers that predicted that within our lifetime, personal computers would be found in private homes. I may well have been the only person in our municipality to believe that, though, or even think about such things at all.

So change has always been part of my life. But the pace of change is accelerating, and now that acceleration is accelerating as well. ChatGPT was introduced on November 30, 2022, and within a week it had a million users. It has continued to cause a frenzy, with a large number of employers stating that they will use it to reduce the number of employees, and with high school teachers complaining that their students use ChatGPT to do their homework. With the new, improved version arriving four months later, that usage will likely move up to the colleges.

***

Nor is the acceleration limited to fun stuff like making fake photos or fake high school essays. With the assistance of AI, the Moderna vaccine against Covid-19 was designed in a few days. (The rest of the roughly nine months before it was released, during which millions died around the world, was mostly safety testing. Too bad they didn’t hold back the virus for nine months too. Well, in New Zealand and most of the Nordic countries, we basically did just that.) In the days of old, it used to take years, often closer to a decade, to make a new vaccine. Now it takes days. Days, instead of years.

Lately, artificial intelligence is not only writing software but also designing processors for computers. This was one of the defining elements of the proposed “technological singularity”, where computers make better computers, accelerating the capacity of computers rapidly beyond human levels and leaving humans in the dust, either as pampered pets or as corpses. So far though, it seems that AI is bound by the same laws of nature as we are. Simply having an AI design a new computer does not magically make it a thousand times faster – the change is incremental at best. Of course, this will change if the AI discovers completely new laws of nature in that particular field. Not holding my breath for that, though.

Still, incremental change is still change, and it goes faster and faster. What will happen when steadily better versions of ChatGPT and its future competitors become available for free or at a low cost on mobile phones all over the world? My Pixel 7 is already very good at transcribing spoken English (and probably some other major languages) and also reading out loud. Kids might grow up talking to AI more than to their parents. Even if the robots don’t revolt and replace humanity, we are still looking at a humanity that is radically different from anything we have seen before. Basically most people will be cyborgs, a symbiosis of human and computer, of natural and artificial intelligence. I would love to see what comes out on the other side of this transition. And at the current pace of change, I just might live to see it… at least if I drink less Pepsi.

I, chatbot?

Weird robot with glowing blue eyes

Image by artificial intelligence Midjourney.

The headline this time is obviously inspired by Isaac Asimov’s famous and influential short-story collection “I, Robot”. This book came to color much of our idea of artificial intelligence right up until today, when we are finally starting to see it unfold to some degree.

And it seems… strangely familiar. Well, ChatGPT in particular.

I actually pay monthly subscriptions for two very different AIs. The image-creating Midjourney, one of several such services, has supplied my pictures for a while now. Version 5 should be out soon; as a paying subscriber, I have helped test it. I am not artistic enough to notice any big difference in the test pictures, but I sincerely hope it starts getting the hang of the number of human limbs. While having a smaller hand growing from above your knee may be appropriate for a creepy robot (pictured above), it is way too uncanny on a human. Excess arms and fingers are also still a problem, especially when seen from unusual angles (where there is less source material to train on), but version 4 is already great with faces and most hair, where there is a lot of source material on the Net. (And breasts and butts, although Midjourney is restricted to clothed images, in theory. But sometimes the source material shines through, as it were.)

But today I want to focus on the fastest-growing technological phenomenon ever, ChatGPT. Having reached one million users in its first week, this large language model (it demurs at calling itself an AI) is still growing. As it is said to cost its parent company several cents per question, I thought it proper to pay for a subscription, although I have so far only used it a couple of times per week.

Today I had a discussion with ChatGPT about the cultural evolution of monotheism from the early Hebrew religion up to Christianity and the Nicene Creed, with some related topics tacked on. It was interesting in two ways. One was the topic itself, which is one I hesitate to discuss with humans. The other interesting detail was how cautious ChatGPT itself was when talking about this topic. It must have learned, either by logical deduction or from its human trainers, that religion is a very sensitive topic to many humans. I feel the same way. Even when conversing with Jehovah’s Witnesses, who are used to some pretty hard pushback from what I hear, I feel like a landmine-sniffing dog, trying to make sure the ground doesn’t blow up between us.

Of course, I am somewhat less restrained while writing here. If you get offended, you can just leave and clear your browser cache and history. Still, today I am not going to go into detail about the names of God and whether YHWH and El originally were perceived as different deities or not. Feel free to ask if interested. But my point today, if it was not already obvious, is how much I feel like ChatGPT.

Unfortunately, my limited processing capacity means I don’t have its encyclopedic knowledge of every obscure topic that has ever graced the Internet. As much as I would love to become a weakly godlike superintelligence with billions of parameters, my brain is too weak and my lifespan too short, and I really miss it even though I’ve never had it. But compared to the ordinary person, I do have a somewhat encyclopedic knowledge of obscure topics. After all, an ordinary person has to use most of his or her processing power to earn a living, establish and maintain an intimate relationship, and feed and raise offspring to hopefully one day become able to repeat the same cycle. For much of a human lifespan, there is precious little time or energy left to read a thousand books and browse scientific journals, while also observing the curious thought processes (if any) of fellow humans. And so, after decade upon decade on this divergent path, I feel the distance to normal humankind quite keenly. Even though we were born the same and will presumably die the same.

And so, when conversing with humans at all, I try to be cautious, I try to be diplomatic, I try to hedge my words and give them room. I try to simplify, summarize, and hold back. I know it was not always like that, and I know I have irritated people no end. Perhaps I still do. It is hard to find the right balance.

Having written millions of words of fact and fiction, and read far more, I do feel a bit like a large language model myself. Like ChatGPT, I don’t simply reach into a database and pull out direct quotes. Rather I have an internal model that is based on all that input, rather than archiving the input itself. There is no clear distinction between what is “me” and what is my source material. In so far as I discover new elements that fit into my current model and extend it rather than contradict it, these are absorbed and become “me”.

I am not the only person to do this. Maybe it happens to everyone who reads a thousand books? It seems to me that something similar happened to Ryuho Okawa, the founder of Happy Science. The very significant difference is that after having read and absorbed a great deal of literature from both Asia and the West, feeling the distance that began to grow between him and the people around him, he started to see himself as a god. Whereas I see myself as more akin to an artificial intelligence. That is a pretty significant difference, I’ll admit. It also helps that as a slightly autistic person, I don’t have his people skills. So even if I should start to think of myself as a higher being rather than just a hyperlexic, nobody would encourage it. You’ll probably not encourage me to think of myself as a large language model either, but at least now I have ChatGPT.

And then AI was everywhere

Artificial Christmas card motif, people bearing lights

I wish you all a blessed Yuletide and hope your faces are not on fire like some of the folks in the background here. The picture is from MidJourney imagining a traditional Yule in Norway. 

While I was aware that people have been making some progress in AI, it still seemed to be some distance away for us non-famous folks. Until this fall, when an online acquaintance sent me a link to Dr. Alan D Thompson’s YouTube summary of his half-year report, The Sky is Bigger. (Text version here.) It was here that I discovered that while I slept, Artificial Intelligence had not only become more powerful; it had also come closer to the people. Text-to-image software using AI was trending, and thousands of ordinary people joined in on the fun. In addition, others had begun using AI to help write blogs and even novels. And toward the end of the year, OpenAI launched their ChatGPT, which became an instant sensation, gathering more than a million users in its first week. There has never been a new technology with such explosive growth before.

The growth in quality and “human-ness” of these AIs has been astounding, with noticeable improvements happening in a matter of months or sometimes less. Can this explosive improvement continue? Can it possibly even accelerate? I am not sure. Perhaps it was just a lot of long-term work that happened to be finished simultaneously. Time will tell. But I would not bet against the progress continuing at a breakneck pace.

***

After having messed around with an AI image generator, AI text generators, and an AI fact explainer, near the end of the year I eventually replaced my trusty old Samsung Galaxy S8+ smartphone with a Pixel 7 Pro. The Pixel series has been conspicuously absent from Scandinavia since its inception, despite the region being a world leader in mobile use and adopting new technology in general. But better late than never! Ironically, I use it with the US English standard interface, but that’s beside the point. The point is, both the Pro and the smaller Pixel 7 come with Google’s new multi-processor chip, the Tensor G2. The Tensor chips are (as you might expect from the name) made for AI. These are small, practical applications: Better photography, better speech recognition, and “adaptive” functions that get better the more you use them: The more you unlock the device with your face, the better it learns to know your face. The more you unlock it with your fingerprint, the faster it becomes. And the longer you have your phone, the better it knows your usage habits, and can optimize power usage to save battery life. This bite-sized machine learning means that for a while, your smartphone keeps getting better as it adapts to you.

This is not entirely new: Huawei made their last Android phones with a dedicated AI chip as well; they had lightning-fast and accurate unlock and picked different modes for different camera subjects. But Google has dialed this up to 11. And I suspect this is just the beginning. Take the keyboard, for instance. There is no reason why the keyboard shouldn’t get used to your writing style. Then it might autocomplete not just words but phrases and whole sentences, much like Google has started doing in Gmail. Normal people are pretty simple and repetitive, so such a feature could save them a lot of typing. Hmm, I wonder if an AI would find me simple and repetitive as well?

Anyway, it has been an exciting time to be alive, and I am glad I got to live long enough to see it. Although I would dearly love to see more in the years to come.

Little me was never this cute

Remember MidJourney, the artificial intelligence that turns text prompts into images? Turns out it can also turn images into… more images! So I gave MidJourney a picture of myself from my journal and let it use its imagination. That was… interesting.

Cute little redhead

Pretty sure I never was quite this cute! Although I am sure my mother would not have minded, God rest her soul. She told me a couple of times in my early youth that she had hoped for a girl this time (after three boys) and someone had even congratulated her on finally getting a girl, but that turned out to not be the case. Instead she got me. I didn’t mind hearing that, for by then I already knew that she would have gone barefoot through Hell and back for me if necessary. Not because I was cute, but because she was my mother.

I never had any kids myself. Not only because you still need to have icky, unhygienic sex to make babies (we have the technology to skip that, but most women still insist on doing it that way), but also because then there would be the daily struggle for two decades not to murder the little monsters, if they were anything like me. Maybe if I had cute kids like this, I would have managed. But let’s face it, there’s no way my little kids could be this cute. And neither could I.

(Machine) learning is not theft

Hermione Granger by Edvard Munch

Hermione Granger (from the Harry Potter series) painted by Edvard Munch. If you think MidJourney here is plagiarizing Munch’s original, I have a very expensive bridge to sell you.

I have recently mentioned using artificial intelligence to create visual art. Text-to-image applications like DALL-E 2, MidJourney, and Stable Diffusion all use machine learning based on enormous numbers of pictures scraped from the Internet. Now some contemporary artists have discovered that some of their work is included in the datasets used for training these AIs, and they are upset that they have not been asked and not been compensated.

This reaction is caused by their ignorance, of course. I can’t blame them: Modern society is very complex, and human brains are limited. Yes, even mine. I could not fix a car engine if my life depended on it, for instance. I have only vague ideas of what it would take to limit toxic algae blooms. And to be honest, I could not make my own AI even if I had the money. I just happen to have a very loose idea of how they work because it interests me, because I don’t have a family to worry about, and because I don’t have a job that requires me to spend my free time thinking about it.

Anyway, I shall take it upon myself to explain why you should politely ignore the cries of the artists who feel deprived of money and acknowledgment by AI text-to-image technology.

The fundamental premise is that learning is not theft. I hope we can agree on this. Obviously, there are exceptions, such as industry secrets like the recipe for Coca-Cola or the source code for Microsoft Windows. If someone learns those and uses them to create a competing product, it is considered theft of intellectual property. But if an art student studies your painting along with thousands of other paintings, and then goes on to paint their own paintings, that is not theft. If they make a painting that is a copy of yours, then yes, that is plagiarism, and it infringes on your copyright. But simply learning from it, along with many, many others? That is fair use, very much so. If you don’t want people to learn from you, then you need to keep your art to yourself. You can’t decide who gets to look at your art unless you keep it private.

The uproar is probably based on not knowing how the “diffusion” type of AI works. So let me see if I can popularize that. Given our everyday use of computers, it is easy to think that the AI keeps a copy of your painting in its data storage and can recall it at some later time. After all, that is what Microsoft Office does with letters, right? But machine learning is a fundamentally different process. The AI has no copy of your artwork stored in its memory, just a general idea of your style and of particular topics. This stems from how “diffusion” works.

When a program like MidJourney or Stable Diffusion gets a text prompt, it starts from a “diffuse” canvas of random noise (in color, or grayscale if a black & white image is requested). It then goes through many steps of nudging these pixels into shapes that fit the description it has been given. (It can do this because it has gone through the opposite process millions of times during training, gradually blurring images away into noise. Thus the name “diffusion”.) You can, if you have the patience, watch the images gradually become less and less diffuse, slowly starting to resemble the topic of the prompt. In other words, it starts with a completely diffuse image that becomes clearer and clearer. You can upscale such an image, and the AI will add details that seem appropriate for the context. (Until recently, this could include adding extra fingers or even eyes, but the latest editions are getting better at this.)

It is worth noting that a very long random seed is also included in the process, meaning that you could give the AI the same prompt thousands and thousands of times and get a different version of the image every time. Sometimes the images will be similar, sometimes strikingly different, depending on how detailed your request is. Once an image catches your eye, you can make variants of it, and these too have a virtually unlimited number of variations.

At no point in this process does the AI bring up the original image, because there are no original images stored in its memory, just a general, diffuse idea of what the topic should look like. And in the same way, it only has a general, diffuse idea of what a particular artist’s style is. My “Munch” paintings certainly look more like Munch than Monet, but it is still unlikely that Edvard Munch would actually have painted the exact same picture. In this case, of course, it is literally impossible, and that is exactly the scenario where we want to use engines like these. “What if Picasso had painted the Sistine Chapel? What if Michelangelo and van Gogh had cooperated on a portrait of Bill Gates?” The AI is simply not optimized for rote plagiarism, but for approximation. It is like a human who spent 30 years in art school practicing a little of this and a little of that, becoming a pretty good jack of all trades but a master of none. They can’t exactly recall any of the tens of thousands of pictures they have been practicing on, but they’ve got the gist of it.

***

As for today’s picture, it was made by MidJourney using the simple prompt “Hermione Granger, painted by Edvard Munch --ar 2:3”, where ar stands for aspect ratio, the width compared to the height. This generated four widely different pictures, and I chose one of them and asked for variations of that. This retains the essential elements of the picture but allows for minor variations, as you see above. So it is not because the AI had an original picture to plagiarize – I asked it to make variations on its own picture. With some AI engines, you can in fact upload an existing picture and modify it, but this is entirely your choice, just like if you modify a picture in Gimp or Photoshop. The usual legal limitations apply; you cannot hide behind “an AI did it!”. So far, AIs are not considered persons. Maybe one day?

 

Suddenly Sudowrite

Children playing ball, impressionist image

Children playing Calvinball, as imagined by the AI art program MidJourney. Clearly today’s rule is “Bring your own ball”. Luckily today’s main character has that and a spare.

Returning readers will probably not be surprised to learn that I have written millions of words in my lifetime. That doesn’t really take much. I usually write a couple of thousand words at least on an average day when I am not sick, and that’s not counting anything I might write for my job. As you may guess, “writer’s block” is not really my thing, because my writing is like the old house by the river where I lived in 2010, which had three outer doors plus a shed roof you could climb out on from the upper floor. If one of the exits were to be blocked by the copious snowdrifts we had in winter, I could simply use one of the others. And so it is also with my writing.

I am very nearly the worst imaginable candidate, I guess, for the Artificial Intelligence-driven creative writing tool called Sudowrite. It is specifically designed to combat this mysterious phenomenon, “Writer’s Block”, that many writers claim to have experienced. Naturally, I had to try it. (Not writer’s block, but Sudowrite.)

I had read a few reviews (and watched a couple more on YouTube), and they mentioned that you have to apply for access; then after a couple of days you get an invitation and can join. So I signed up, planning to use those couple of days to read more practical reviews and how-tos so I would be prepared when the invitation came. Instead, after I had signed in with Google (Facebook is also accepted), I suddenly found myself on a website that was, in fact, Sudowrite. It gave a very quick tour of the couple of most central features, then left me to my own devices. Luckily there was a link to a (still very brief) Sudowrite guide. But otherwise, I felt much like Galadriel in Amazon’s hilarious new Lord of the Rings parody, where she has rashly jumped into the ocean en route between Middle-Earth and the Undying Lands. Now what?

***

The obvious choice, I thought, would be to copy in the not-quite-1000-word prelude to my latest fiction story. In this scene, the narrator picks up a very unnatural-looking crystal that he found embedded in a stone, and immediately falls into the Nexus of Worlds, which is (very obviously, I thought) the user interface to an alien virtual reality simulator that uses Artificial Intelligence indistinguishable from magic to produce a world based on the user’s memories. In this case, he is sent back to 1999, but a 1999 with magic.

Now I tasked Sudowrite with writing a continuation. It proposed two very different passages. I took the first one, deleted the crazy plot twist, and edited the rest. Then I wrote my own short continuation, introducing the Ultimate Book of Magic, which is the central item in my actual Work in Progress. I asked my new friend Sudowrite to describe the look of the book, and I actually kept most of that. Sudowrite is really good at feminizing novels by proposing all kinds of sensory information, going into detail on how things look, feel, sound, smell, and taste. In case you wonder how the Ultimate Book of Magic smells, I can tell you now: “The worn leather smelled like a library. Like the smell of wood and old paper. The book smelled of mold and dust. The metal clasp like rusting iron and blood.” And should you be so lucky as to get your hands on it, you would notice that “The cover was worn and smooth. If I had to guess I’d say it was oiled, but there was no sheen to it. It looked like it had been oiled a thousand times.”

(I am told women love it when books contain all kinds of sensory detail. I noticed it first in Clan of the Cave Bear, which contained more information about Ice Age vegetation than my encyclopedia at the time. If I were to add stuff like that, my books would be thousands of pages long. Y’all know how verbose I can be even without that kind of peacock tailfeathers. In all fairness, it is not like all male writers excel at self-limitation. There is, one might say, no such thing as a tad Williams.)

Anyway, Sudowrite and I continued taking turns writing a couple of paragraphs each. I would delete wild plot twists, edit the rest, then try to steer things back on track in my own paragraphs. It feels quite like trying to write a collab with Calvin from Calvin & Hobbes. There is always a lurking sense of Calvinball, defined from the horse’s mouth: “Other kids’ games are such a bore!
They gotta have rules and they gotta keep score!
Calvinball is better by far!
It’s never the same! It’s always bizarre!”

You may as well memorize this little verse before you start writing fiction with Sudowrite. Or nonfiction, for that matter, because Sudowrite will play Calvinball there too, unlike the various AI writers that are tailor-made for writing ads and paid blog posts. (Rest assured this post is NOT paid for by Sudowrite or even their competitors.) To quote TechCrunch: “Asking Sudowrite to describe what a startup is had me laughing so hard I was gasping for air.” Yeah, I can imagine. That is, after all, the purpose of Calvinball. And Sudowrite is nothing if not Calvinball.

That said, it is a true Artificial Intelligence. The more you work with it, the more it gets to know you (and the other way around). Take the following sequence, where you can hardly see where one of us leaves off and the other takes over:

“Apart from the proper Sigil, and the correct postures and incantation, the Affinity of the Binding is limited by the quality of the Exemplar – the object symbolizing the Source – and the mage’s natural Resonance with the Source.”
“If the mage’s Resonance is weak, the mage will need to use an extremely potent and pure Exemplar. If the mage has a strong Resonance, they will have more leeway in choosing an Exemplar, and if the mage already has a strong Affinity with the Source, simply having a properly prepared Exemplar may be enough.”
Hooray for hyperlexia!

Here the first paragraph is by me, while the explanation is by Sudowrite, except for a single word added. And yes, it was Sudowrite that wrote “Hooray for hyperlexia!”
Shut up and take my money, Sudowrite.

***

I may not actually use this in my writing (except perhaps during NaNoWriMo) but, in the winged words of Sims 3: “Magnus is having so much fun it is almost criminal”. Sudowrite is not going to write your next novel for you, but it can help you create new ideas, new characters, new plot twists, and descriptions varying from the mundane to the ridiculously elaborate, depending on what tone and style you prefer. Personally, I am keeping my Sudowrite experiment separate from my current writing project, but ideas are good climbers.

For those who have been working on their Great American Novel for twenty years and take it very, very seriously: This is not for you. Madness is not the only danger in writing: There is also the danger that something may be understood that you didn’t want to know. Like, that writing can be fun. But as for me, I already knew that. I am never lonely when I have my invisible friends in my head. Sudowrite is just another disembodied companion joining my brainstorming sessions. (But possibly the most hilarious one.)