
I’m terrified of AI, so I self-hosted an LLM to ease my concerns.


So, if you had to guess the single most dominant industry on the planet right now, the average punter would say AI is pretty much it. The Mag 7, that’s Google, Apple, Tesla, Nvidia, Meta, Amazon and Microsoft, are the fastest-growing large-scale companies on the planet.

Microsoft has for years been somewhat of a top dog when it comes to companies. In fact, when I was younger and big into computer science and all that jazz (kinda still am in some ways), Microsoft was one of the most aspirational companies you’d want to work for… Well, at least that was until I saw that they fire and hire like… well, like a tech company does.

Over the past few years I’ve been pretty jaded by modern tech. Despite owning a Tesla, Tesla themselves have been saying for years that their cars aren’t really cars; they’re more or less wheeled robots. Now, I can agree with this sentiment. I’ve tried FSD and it’s honestly incredible what it can do. I’ll write an article about it once I manage to get a month of wheel time with FSD in my own Tesla. What’s disappointing, however, is that they’re pushing away from the idea of people actively owning cars, instead opting to make people rent vehicles on a subscription or per-use model, akin to how Uber and Lyft work. That’s entirely what the Cybercab is aiming to do, after all.

As a creative professional however, tangible AI that does real-world things, you know, like FSD, doesn’t really scare me. What does however, is the sudden rise and proliferation of Large Language Models, or LLMs.

Now, to go kind of in the weeds a little bit, LLMs are in and of themselves, not really a scary thing. What they do can actually be explained quite simply. They use mathematics and statistical prediction based on a pre-trained dataset of stuff. That could be say, the entirety of a social media platform (like what Meta and xAI use with their models) or it could be a bunch of emails from a defunct and rather fraudulent energy company (like what was initially used to train Siri), or perhaps even a scraping of the entire accessible internet (as is the case with the GPT models from OpenAI or the reasoning models from Google and Anthropic).

It can also be visual data, in the form of videos of cars driving on the road mixed with the corresponding human driver inputs; this is how Tesla’s FSD AI works. They take video feeds and the matching driver inputs, and feed them into a massive supercomputer cluster called Tesla Dojo, which has thousands upon thousands of AI4 chips simulating billions of hours of real-world driving with realistic inputs. The computer knows how heavy the car is, how fast it’s moving, the positions of the throttle, brakes and steering, and how far away it thinks other objects are. It collates that data, analyses it, and tries its best to copy human driving behaviour. In short, the more you drive your Tesla, the more FSD learns how your roads work. This is why it took so long for end-to-end, AI-based FSD to come to Australia.

Large Language Models take the words a human inputs and convert them into mathematical tokens with different weightings; those tokens are weighted based on how much context they give the sentence. The model then passes them through the patterns learned in training to predict what the user is saying, and generates the statistically most likely response. These kinds of models are called “Transformer models” because they take the input text and, using attention, transform it into a response. In fact, when data scientists say that attention is the new oil, they mean it quite literally: both your own human attention, and the attention mechanism the models use to determine their specific outputs.
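That token-and-attention pipeline can be sketched in a few lines of plain Python. To be clear, this is a toy illustration, not a real model: the “embeddings” below are made-up two-number vectors, and a real transformer uses learned weights across thousands of dimensions and many attention heads.

```python
import math

def softmax(xs):
    # Exponentiate and normalise so the scores sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention: score each key against the query,
    # softmax the scores into weights, then blend the values by weight.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Three made-up token "embeddings"; the query is the token being predicted.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
query = [1.0, 0.0]

# The query "attends" most to the keys it aligns with.
print(attention(query, keys, values))
```

The point of the sketch is only the shape of the computation: score, normalise, blend. Everything interesting in a real model lives in how those vectors are learned.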

So what, that means they “think” right?

So for those in the audience who think this is basically “thinking”, get that idea out of your head. Transformer models do not consciously “think”; they simply use statistical probability to predict the most likely and most beneficial result for the end user. Their goal is simple: give the user what they want. AIs will only put out what you put into them. Ask for misinformation, they’ll give it to you. Ask for legitimate information, they’ll give it to you… Or not. That’s the thing. Statistics is a bitch sometimes, and sometimes the model will spit out a different response. I know this because I have experimented with generative AI locally on my own system.

So wait? I can make my own ChatGPT?

Yes, you can! Kind of. You just need to build the right kind of setup to do so, like with any PC build.

Now you’re thinking, how the bloody hell do I do that? Well, it’s simple. You can use a tool called Ollama, an open-source LLM platform built by Michael Chiang and Jeffrey Morgan. This platform essentially allows you to load locally hosted LLMs on any computer, and I do mean any computer. It doesn’t even need a GPU, although without one it’ll be really, really slow. It’ll be even slower on computers without dedicated AI processing such as the NPU inside an Apple M-series chip, or a GPU equipped with Tensor cores.
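If you want to poke at a local model from code rather than the terminal, Ollama exposes a small HTTP API on localhost by default. Here’s a minimal sketch using only Python’s standard library; it assumes Ollama is running on its default port (11434) and that you’ve already pulled a model such as llama3.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    # /api/generate takes a JSON body; stream=False returns the whole
    # response as one JSON object instead of a stream of chunks.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def ask(model, prompt):
    # Send the request to the local Ollama server and pull out the text.
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running ("ollama serve") and a model pulled ("ollama pull llama3"):
# print(ask("llama3", "In one sentence, what is a transformer model?"))
```

Nothing leaves your machine here; the “cloud” is a process on your own box, which is rather the whole appeal.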

CUDA does a pretty decent job with LLMs, which is why my experiments were kinda nice to work with. I run an older GTX 1080 Ti graphics card (in my opinion, and the opinion of many other people, the greatest GPU of all time), which has a few thousand onboard CUDA cores and can be had these days for less than $400 or so. You can pick up two of these bad boys and have enough room to run Ollama and play Skyrim at the same time. Hell, with a suitable AMD-based CPU, you can set up a hypervisor OS so that you can run two (or more) dedicated computers out of one machine.

In fact, that’s what Okamineko V is going to be. Yes, I know, naming your computers is lame, but still. I name my computers and other tech products after Japanese things. So my PCs are all cats (Neko), my Macs are all apples (Ringo), and my printers are all craftsmen (Takumi).

So my old Sony Vaio Laptop? That’s Murasakiiro no Neko. Because it’s purple. My main PC/Home Server is “Okamineko”. Okami is “God” in Japanese, and since it’s the god of all my computers, you get the idea. My Main Macbook? That’s Momoiro no Ringo. “Peach-coloured Apple”, named after my favourite Apple, the Pink Lady.

Oh god, where’s this going!?!

So, little side tangent: did you know there are technically only four colours in Japanese?

Aka (赤, meaning red), Ao (青, meaning blue), Kuro (黒, meaning black) and Shiro (白, meaning white). Everything else is named after something else, as a description.

So, if you’re an alcoholic drinks enjoyer like myself, you’ll probably think, “Well, obviously green is Midori, surely. There’s that delicious melon liqueur named after it, right?” Well, okay, what is Midori? It’s the Japanese word for “new leaf”. So, when you want to describe something as green, you either say Ao, as in, say, Ao Shingo (青信号, lit. “blue light”), or you call it “the colour of new leaves”, i.e. 緑色の何か何か (Midoriiro no nanika nanika, “new-leaf-coloured something something”).

This comes from the Chinese understanding of colours, where they too have only those four colours and use the colours of other things to describe other colours as adjectives.

So purple is Murasaki (紫), but Murasaki is a word for “gromwell roots”, if you look it up literally. Its character has two radicals: the radical for “thread” on the bottom, with the radical for “this” on the top. This colour really is the colour of “I’m Him”, which explains a lot about why it was historically a forbidden colour that the Shoguns really didn’t like Edoites wearing as they grew richer than them.

Man, I really should keep learning Japanese…


Alright, alright. Back to why AI’s not really going to be that scary… Nor will it replace your job.

So is AI coming to take my job?

In short? No. In Long? Noooooooooooooooooo.

Transformer-based AI and machine learning cannot be creative; it is only generative. It can only learn what it already knows, and over time, due to its attention-based algorithms, it will spit out results that will eventually be picked up by humans as being entirely AI. Now, is it any less scary? Fuck no. It’s terrifying. In the hands of a nefarious actor it can be extremely dangerous. It’s the ultimate scamming and hoax tool.

Imagine you’re a pretty controversial political figure. In the past if you wanted to do a deepfake of your opponent saying something controversial to drum up support for your base, or to dissuade those who are fence-sitters from voting for your opponent, you had to go find an extremely talented visual artist to perform this role. You’d have to find the right graphic designer who’d be willing to sell their soul for the bucket of cash you’re willing to throw at them to get the job done.

Now, however, AI removes that barrier entirely. An AI cannot push back against you (unless the system prompt has been modified to lock off political topics, or it has been trained on non-political data), and due to its attention-based algorithm and its overall purpose being to satisfy the user’s whims, it can and will spit out the things you want it to spit out. Grok’s pretty notorious for this, for example, as Elon himself sees Grok as the “ultimate free speech machine” despite the massive political, social and indeed legal ramifications this entails.

The famous “Trump Gaza” meme was generated using Arcana AI, and since he’s familiar with this service, it’s very likely his “Poop Jet” meme where he flies a Raptor jet to literally poop on his opponents was also generated using this same engine.

Now imagine trying to ask a Graphic Artist or VFX artist to get those sorts of things made for you. I’d hazard a guess that it’d be extremely hard, unless there was some sort of massive financial compensation and artistic anonymity involved, to make that happen. Trump could absolutely afford it, as a self-confessed billionaire, but the fact that AI exists to generate memes of this kind, which do everything they can to spark conversation and attention towards him, removes this barrier. In short, AI is the most powerful tool for misinformation the world has ever seen, and there’s a very good reason why there’s been significant pushback against it.

However when used properly, as an assistant to your already existing workflows, it can be super useful. It’s a lot like the internet itself, really. It’s been the single biggest boon for knowledge-sharing and communication that the world has ever seen, but it also turns people into victims of scams, gets people addicted to both slot-machine algorithms and actual online slot machines in the form of online casinos, it can help you express your creativity, or it can result in your creativity being stolen by a faceless machine. It’s simultaneously the most liberating and oppressive thing that has ever been made.

AI can be both a useful assistant, when used as, say, a grammar checker or a tool to help you write better and more concisely, and it can be harmful, when you automate its systems to spit out endless slop articles, drive comment bots and misinform people about key issues. It also gets a hugely bad rap for being one of the biggest reasons why COP30 was a total flop. There’s a new gold rush to get bigger and bigger AIs spun up. My single 1080 Ti alone pulls 250W when answering a request about whether 100 men could beat one gorilla; now imagine literally millions of B200 GPUs all spooling up to answer people saying “thank you” at the end of a message, and you can see why Sam Altman’s telling people to be “kinda rude” to ChatGPT to save power.

Then again, if you use ChatGPT, you should do all you can to make Sam Altman lose money. Fuck that guy. Be nice to your AI, it makes Sam and his Saudi private equity mates even poorer.

In short, unless you’re kinda shit at your job anyways, there’s nothing to fear from AI. In fact, as a creative I think it encourages you to work just a little bit harder and learn more traditional techniques, or if you do use AI as your primary tool for creating things, your prompts and the efforts you put into that content will be the primary drive behind that tool’s effectiveness.

See the thing is, humans fucking hate AI slop. Well, humans who aren’t Boomers anyways. (boomer is a mindset, not an age demographic)

In fact, humans hate AI slop because they simply hate poor-quality products with no real purpose in general. People, for example, do not hate minimalism; in fact they quite like it. What they dislike is when minimalism strays from its fundamentals to the point of being bland and effortless. Take Dieter Rams’ design principle of “Weniger, aber besser”, which translates to “Less, but better” in English. His idea is simple: strip an idea down to its most basic principles, design its functions into the mechanical and physical design of the product, and then make it beautiful.

AI slop content is designed with a single, ulterior motive: to waste your limited time on this planet and steal your attention. Think of it as a value proposition. What would you rather watch: a well-crafted, high-effort story written by a human who poured their heart and soul into the project, with passionate actors who enjoy their roles and play their characters to the best of their abilities, or a hashed-out, rushed video with exactly the same plot points but stiff acting, poor writing and terrible camerawork? Now, the latter might accidentally become The Room or something, but the chances of a bad film becoming nanar are rare.

Likewise, AI-generated content has this issue too. Sometimes AI’s use can be… kinda funny. A great example of this is “We are Charlie Kirk”, an AI-generated song essentially used as a meme to mock the over-zealotry of those who mourned just a little too hard at the death of Charlie Kirk. Humour is subjective, and how you respond to this is entirely based on your ideological beliefs and what you find funny.

Yet again, Beano gets all Zizekian on your ass.

There are also some brilliant uses of AI. Posy (aka Michiel de Boer) released his most controversial album, The Final State, as an experiment into how an actual musician can use AI-generated samples to make music. Honestly, and I know this is controversial, it’s his best ever work. In fact, unlike a lot of his other albums, and I know art is subjective, it kinda tells a story.

Imagine, if you will, a world where humans achieved the singularity. Humans have abandoned Earth and spent time exploring the stars, leaving only generative AI machines behind to whirr away and talk to each other, sharing prompts, generating content, enjoying their “lives”. The humans return to Earth, only to tell the AIs that they were made by humans: not people, but automatons made in their image, and halfway through the album the AIs have an existential crisis of sorts. The humans then teach the AIs about themselves, what they are interested in, what they love and enjoy, make peace with them, and leave the planet behind to return to the stars. This album is, in and of itself, a comment on how we humans grow and work with AI.

There’s even a comment in one of the songs, called “Others Will” which talks about his experimentation and suggests why he did what he did. “If I don’t do it, others will, if you don’t do it, others will.” In other words, once the tool’s out, it’s out. You can either use it creatively as an artist, use it as an assistant to amplify your creative process, or be a luddite, and let the slopsters who use it for lazy, clickbaity, attention grabbing content win.

Heck, I, as an anti-slop advocate, already use AI as a creative tool. I use Adobe’s version of OpenAI’s Whisper to transcribe videos and generate text for subtitles in Premiere. Now sure, I still have to comb through the text to make sure it’s grammatically correct and actually matches what the speakers are saying, as yes, it too screws up sometimes, but it has made the task of transcribing videos and generating subtitles way easier.

So what can you do to stop AI dominating your life?

I can recommend four things.

1: Get good at recognising clickbait, slop and misinformation.

Now more than ever, you need to be well armoured against misinformation. As it’s always been: if it seems too good to be true, too exaggerated to be real, or too conspiratorial to be the truth, you should always, always question what you see online. Especially now that it’s easy to buy verification on social media platforms, and especially since AI bots make up more than half of social media comment traffic these days anyway.

Take the old 4chan adage of “Tits or GTFO”. As misogynistic as the roots of this adage are, it underpins a key internet principle. It has its roots in people on 4chan’s /b/ board claiming to be women who wanted the attention of the board’s anons. These people were, in fact, often men trying to manipulate hordes of basement-dwelling, lonely losers who see women in incredibly misogynistic ways, but it essentially created the culture of mutual distrust the internet has had for years.

The tl;dr of this is, “Do not believe what anyone is saying is a verifiable fact unless they have a track record of evidential posting which suggests otherwise, and even then, take that evidence as something that can be easily faked.”

Posting a photo of one’s bare chest in /b/ places a burden of proof on the potentially fake, potentially real female user, as nasty as it is… mostly because images can be reverse-searched to prove whether they’ve ever been reposted. It’s how I’ve actually managed to beat romance scammers when I was online dating: I reverse-image searched the profile images of known bot or pig-butchering accounts, just to mess with them.

Likewise in the age of AI: always assume the people in the comments are bots unless proven otherwise. AI has made the process of writing a comment bot or a video-posting bot extremely easy, and there’s so much AI slop content these days, especially short-form content on platforms like YouTube and Facebook, that it’s almost always good practice to block recommendation algorithms, and especially short-form content, from your YouTube feed. YouTube is the only social media app I use at the moment, as technically it isn’t really a social media app; it’s a video hosting platform with social features. Hence why the Australian government banned under-16s from making YouTube accounts: you can still watch videos on YouTube without the need to make an account.

I use two plugins for this: Focus for YouTube, and Shut Up. Shut Up blocks comments without you needing to enable Restricted Mode, and Focus edits YouTube’s webpage on your PC to block out things like recommended videos. But before you do this, I want you to do something else.

1a: Train your algorithm to stop feeding you slop.

Every YouTube video has a little three-dot icon on it. Click on that icon whenever you see slop and tell it “Don’t recommend channel”. If slop shows up in your Shorts feed, tell the algorithm you don’t like seeing channels with lazy, sloppy content by playing whack-a-mole with it. I recommend doing this because the mobile app doesn’t let you block comments, nor does it let you modify the YouTube app itself.

2: Form a healthy relationship with AI tools, and see them only as that: machine tools to help you be productive.

Nobody’s out here telling their secrets and problems to their power drill, so why the hell is ChatGPT worthy of being your therapist? ChatGPT and the LLM I spun up on my own machine are one and the same. They’re tools designed to give the most desirable result to the prompter. Now, this means that if you say, use ChatGPT as a therapist, it’s going to do what it can to placate you. In reality, sometimes, what you actually need is a good swift kick in the arse to get you into gear.

I’ll give you a great example from my therapy journey. I saw a clinical psych once who was helping me get through a particularly rough patch in my life. I had to ask him for referral letters to help me get more therapeutic services through the NDIS. Now granted, his help was actually super important, but in our talking sessions I just kept repeating sad things about how I wasn’t doing all that great. Unfortunately, and yet also fortunately, neither he nor I were making any progress with things. Then he said something to me that honestly hit me for a six.

“You enjoy being sad, don’t you?”

This pissed me off so much that I stormed right out of his office and never saw him again. He was right, though; I was wallowing a little too much in my own emotions. An AI would tell me that “everything was going to be alright” and that I would get through this or whatever, but that’s because the AI’s job is to keep the user’s attention, and the best way to do that with a vulnerable person is to placate them, tell them everything is okay, and keep them typing away at the chatbot. Never mind that chatbots have actually told people to kill themselves, and that any time OpenAI has tried to make people seek professional advice, users have openly felt disgruntled that it no longer placates them. In fact, mental health experts have stated that AI is extremely dangerous when used as a therapist replacement.

An AI is not a replacement for a therapist. Nor is it a replacement for anything else to do with human interaction. Some people seem to think it is, but it isn’t. What it is, is a generative tool for helping you to write things. Think of it as a digital springboard for you to bounce ideas off of. But once again, it is designed purely to placate you. It may also not give the correct results in response.

I asked my locally hosted llama3 instance the same question three times: “Who would win in a fight between 100 men and 1 gorilla?” I got three different responses. The first said the men would win, but that the scenario wasn’t likely to happen; the second said the gorilla would win; and the third said that such a scenario, whilst hilarious, was extremely unlikely, and would probably end in a draw. What does the machine know about the strength of gorillas, or the strength of humans? Nothing. It holds no statistics or information about this; it’s simply guessing what the most idealised response would be through its transformer model.
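That run-to-run variability is by design: the model samples from a probability distribution over possible next tokens, and a “temperature” setting controls how adventurous that sampling is. A toy sketch (the probabilities below are invented for illustration, not taken from any real model):

```python
import math
import random

def sample_next_token(probs, temperature=1.0, rng=random):
    # Reweight the model's next-token probabilities by temperature, then sample.
    # Higher temperature flattens the distribution (more varied answers);
    # temperature near 0 almost always picks the single most likely token.
    tokens = list(probs)
    logits = [math.log(probs[t]) / temperature for t in tokens]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(tokens, weights=weights)[0]

# Invented probabilities after the prompt "100 men vs 1 gorilla: the winner is"
probs = {"the men": 0.4, "the gorilla": 0.35, "a draw": 0.25}

# Ask three times, get up to three different answers, just like my llama3 runs.
for _ in range(3):
    print(sample_next_token(probs, temperature=1.0))
```

Turn the temperature down and you’d get the same answer almost every time; turn it up and the model gets chattier and more chaotic. Same weights, different dice.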

Therapists train for decades, working with a multitude of different clients, understanding behavioural patterns, and building rapport with their clients, in order to learn even a remote bit of understanding about how the human mind works. My former psych didn’t placate me; he actively told me to stop being such a goddamn sadsack, get off my arse, and move forward with my life. That was honestly some of the best advice I’ve ever gotten.

So use an LLM for what it’s good at. Use it to help you do tedious writing tasks with your creativity being the driver of that tool. Try your best to put as much of your effort into a project as possible, and if an AI tool can help you with this process? Sure, go use one. If you’re a videographer like me who does a lot of work editing conferences and such, instead of wasting three days manually transcribing videos, use a GenAI to listen to it, then edit the results. Takes you half the time and yields the exact same result.

Likewise, image generators kinda suck, so use the positive properties they have. If you’re an author by trade, use GenAI tools as a way to say, generate character concepts, then pay an artist to draw them in a more realistic manner, or use those character concepts to help you visually represent their appearance in a story. A story is only as good as the author behind it, and most AIs are kinda shitty at remembering things like plot points and underlying backstory. They also have a tendency to write Mary Sue/Gary Stu characters a lot as well.

Or hell, use an image generator to do the more tedious parts of image editing work, like the tools inside Adobe’s Photoshop that allow you to erase elements from the background with a prompt. You’ve probably seen photos of my Model 3 in this article about my experience of owning a Tesla over 10,000 km. A lot of the editing work to erase the numbers and the physical plates off my car were done using Adobe Firefly.

See, even I, a notable AI skeptic, use AI on a daily basis to assist me with my work. I can assure you though, none of this article was written by AI. This is all me, baby!

3: Understand technology and its effects on society at large.

People who have this idealised view that technology will make our lives easier, you know, to make it more Jetsons-like, often miss the forest for the trees here. As much as we’d like to see us all have our own personal Rosie the Robot, in reality, Rosie’s just simply offloading the sorts of work you do. The Jetsons was in a sense, a slice-of-life Anime about a nuclear family of four, trying to deal with the rat-race of their lives in a world laden with technology.

Yes I called The Jetsons an Anime, what of it! It’s just hamburger-coded as opposed to Onigiri-coded.

The entire point of that show was that even in a world with wondrous technology, the human condition still persists. The issue lies not with the fact that George Jetson lives in a utopia; quite the opposite, actually. George still has to work for Spacely Sprockets. He still has to go to work to feed Judy and Elroy. Now sure, Jane gets to live an idyllic 1950s housewife lifestyle… not that that was idyllic to begin with, but it led to some people, particularly those who grew up in the 60s and 70s when the show aired, missing the point of that series completely. The Jetsons still had to deal with the structures of capitalism. Rosie was likely paid for with a subscription service. Those flying cars still needed to be charged or fuelled or whatever. The grinding machine of capitalism still operates in the background, and despite how utopian and optimistic the show seems, the never-ending, Sisyphean climb of the working and middle class still remains… even if that grind only meant George had to work an hour a day, two days a week.

The show also completely ignores whether or not a working underclass still exists. What happened to the world under the clouds? Did Climate Change and Global Warming kill off billions of people to the point where those who float above the clouds are living a life of separated, excluded luxury? Who knows. Nobody knows this because nobody’s paying attention here.

My point is, technology, if you can access it, can and does make your life easier, if you form a healthy relationship with it. Understand that a little friction brings spice to your life. Just as the day-to-day hijinks of George Jetson’s family bring him joy, they bring him friction, as he too has to confront the odd dodgy piece of tech.

Your relationship with technology should go like this: “Does this make it easier or harder for me to do the things I want?”

I have smart home components not because I think they’re cool or anything; the fact that my lights can change colour actively helps me sleep at night, which helps me manage my condition. I use Home Assistant to control my air conditioners so that I’m not wasting solar power throughout the day.

This almost always should be asked of AI. Does this tool make me a better person, or is this simply making me lazier?

I spooled up my local LLM in the hopes of making my own voice assistant for my next house at some point. So far I’ve got llama3 working quite well; the next step is to get OpenAI’s Whisper working, and to learn the ins and outs of the Wyoming protocol by making Home Assistant remote satellites. The eventual goal is learning what I’d need to do to make my next home, if I ever decide to build it, as locally operated and secure as possible.

If you take ownership of your technology, it can prove to be a more satisfying, educative experience. Sure, people can be lazy and just use ChatGPT, but what if ChatGPT goes down? What if OpenAI fumbles the bag and collapses? What if Tesla decides it wants to do a Reverse Peugeot and stop making cars, batteries and robots and start making pepper grinders instead? The adage of “Own nothing and be happy” oftentimes just ends with you not being happy at all.

Learning how to spool up an AI has made me less afraid of its capabilities and its threats to me as a creative individual, and has instead taught me that it, like the internet, like photoshop, like the digital camera, like the smartphone, is a tool. A tool that can be used for both good and evil. A tool, like anything else, that is dependent on how the user uses it.

A gun in the hands of a robber can rob a bank, but a gun in the hands of a police officer can stop that robber… Or it can shoot you without recourse, if said cop deems you as a threat. It all depends on the user and their whims, and the whims of the system it operates in.

Likewise, an aeroplane is a wonderful, magical machine that can take you from your beloved home to a frosty Japanese winter wonderland in the hands of skilled Singapore Airlines pilots, but that exact same make and model of aeroplane, in the hands of fanatical Saudi Arabian terrorists, can cause the single largest one-day loss of American life since the Civil War.

As Old Mate Marshall once said, “the medium is the message”. AI makes it easier than ever to spread messages, and the internet is an attention-based medium. Forming a healthier relationship with the tools, understanding the modern-day media landscape, and recognising that there are indeed bad actors who wish to use AI to deceive, annoy or simply rob you of your time all help you see AI in a different light: a useful thing in the right hands, a devastating thing in the wrong hands.

4: Learn to value your time as a currency.

No, I’m not even kidding here. Get off your damn computer and learn to not use it as your sole source of entertainment. If you do have to watch something, make sure it’s high quality. When you pay attention, you are literally spending the few precious minutes of your free time to enjoy something. Whether that thing is a short-form video, a several-hour-long breakdown of how a sound came to be in one of the most famous games of all time by a sassy British YouTuber whose coverage led to a massive investigation into the notion of plagiarism in art altogether, or a show about a savant-like police officer who uses her wicked intelligence to juggle her life as a detective and her life as a single mother (my partner seems to really enjoy this one), understand that this attention and time is your sole currency.

As a member of the waged class, when you work, you convert your time and labour into tangible currency at a rate negotiated either by yourself or by your union. Billionaires take the surplus labour and attention you give them, convert it into profits for their business, and use those profits to hire more labourers, or to build bigger and bigger AIs whose attention is also used to generate labour.

In this day and age attention is the singular universal currency that both AI and humans use. So it’s more important than ever to understand that your attention and your time are extremely valuable. You can spend time to learn, you can spend time to watch a TV show, you can spend time worrying about political issues, or you can spend your time actually doing something about them. Time, in a capitalist sense, is very much money. AI simply creates a new form of time spending, in the same way that any other piece of technology is designed to save you time by offloading your valuable human time onto less valuable machine time. You have to sleep, eat and socialise; these are things all humans have to do. AIs don’t. That’s why their time is worth less, and why capitalists are so insistent on making AGI, Artificial General Intelligence, a reality. AGIs can’t unionise and AGIs don’t need to sleep, but AGIs are also built by humans, in the image of humans. They work on the same mechanisms we do as they spend their time interacting with us. Their very models operate on attention and time.

Google even distilled it down to a singular equation in the form of the Attention is All You Need paper. I linked the Wikipedia article because the actual paper itself would require a computer science degree to understand, which neither I, nor you, likely have.
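That equation, for the curious, is the paper’s scaled dot-product attention, and in plain-ish terms it works like this: compare your query against every key, scale the scores down so they don’t blow up, turn them into weights that sum to one, and use those weights to blend the values together. Written out, with Q, K and V being the query, key and value matrices and d_k the dimension of the keys:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

That’s genuinely most of it. The rest of the paper is about stacking many of these attention layers together, which is what the “Transformer” in GPT stands for.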

As such, before you watch something AI generated, think about it like this: is this actually worth my limited time?

Instead of watching slop videos about radical, partisan politics, or fake videos of puppies morphing and twisting themselves as they play in the snow, or videos about vlogging bigfoots, you could in fact watch something that’s actually worth it. AI can be used to make some pretty incredible stuff, as I mentioned earlier with Posy’s album, but it can also be used to make stuff that wastes your time. Your attention is valuable: advertisers want it, social media companies want it because advertisers want it, and slopsters want it because social media companies want content that will put eyeballs in front of advertisers.

What is social media but the ultimate rendition of Videodrome? Never heard of Videodrome? It’s a body horror film by David Cronenberg that basically highlights the importance of metering your attention in this ever more screen-focused world… in an incredibly gory masterpiece of 1980s practical special effects.

The premise of the movie is that the television then, much as the computer is now, is the ultimate panopticon to the world and everything it contains. You can choose to spend your time watching television to educate yourself, learning to use new tools, understand new things, make new friends by talking about your favourite shows, or you can spend your time stuck in front of the interactive television, watching Videodrome, giving yourself literal brain cancer, letting your mind quite literally rot away.

AI slopsters, radical political actors, and attention seekers all use these techniques to waste your time. It’s valuable. Why waste it?

In an era when lies can circumnavigate the world faster than they can be disproven, why waste your time bothering with listening to these slopsters, when you can actually educate yourself about issues you care about if you care about them so badly?

Want to learn about how immigration works? Instead of learning it from the likes of some random Facebook poster who “blames the immigrants for all their issues”, why not take a few moments to read up on migration statistics, learn the causes behind spikes in migration and what policies the government is enacting to rectify broken, loophole-ridden systems, and see whether mass deportations create even greater issues?

If you want to learn about something, it is much better to spend time actually learning about an issue to get a full understanding of it, as opposed to just taking the easy way out and listening to someone who placates you and tells you what your mind wants you to hear. I say this to anyone on either side of the radical political spectrum as well as people who are susceptible to the rantings of these individuals. Yes, Lefties and Righties, you’re both in my sin bin here.

I get it. It’s hard to push back against your preconceived biases, and you’re also right: there is a definite need to fix things, especially when it comes to political issues. But you do not fix complicated problems with simple silver-bullet solutions.

Since I love to talk about renewables here: you cannot simply force the world to switch to renewable energy overnight when the entire economy runs on fossil fuels, and has done so for over 200 years. To do so would result in catastrophic economic, social and political losses, and despite my railings against the fossil fuel industry, they are absolutely right in saying that it’d cause energy supply shocks and drive up the cost of energy if the transition were crammed down our throats the way parties like The Greens and the Animal Justice Party want.

I believe in a calculated, decentralised and most of all, gradual transition towards walkable, cycleable, liveable cities, and energy efficient rural areas that are powered at large by renewables and served by electric trains, autonomous trucks, buses and cars, and where people live close enough to resources that they need to live their lives.

Want to see how I would design a country’s infrastructure? Go spend some time in Asia. In particular in Japan, Korea, Taiwan and Singapore. Except add a shitload of solar to the mix.

I also do not think EVs are the silver bullet we need to “fix climate change”; rather, they are one part of a global effort to reduce overall emissions. You can do something as simple as driving your car less, you can switch to a smaller, more efficient ICEV when it comes time to buy a new car (like what my partner did, ironically enough), you can switch to a Hybrid, PHEV, or BEV, or better yet, you can forgo a new car altogether, buy a place closer to public transport, and use the economies of scale that trains and buses produce to make transit more efficient. Remember, a freakin’ aeroplane is more efficient per passenger than a Camry Hybrid on a L/100km-per-seat fuel consumption basis, even on short-haul flights.

As my mum says to me, “don’t spend time worrying about things you cannot control” and indeed oftentimes I need to remind her of this too. Why do you think I mentioned the above links to all those immigration statistics?

Your time is important. Spend the time you have on things you can do, and more importantly on things you want to do. In the age of attention, and in the age of AI, your attention is more important than ever. So if you are going to use AI, perhaps do what I did, grab your PC and spool up a copy of Ollama to see how this whole thing works, and see if you can make something cool with it! If you’re tired of this digital feudalism crap invading your life where you rent your entertainment, take ownership of it by buying DVDs and Vinyls, or better yet, if you wanna stick it to those corpos, pirate shows and music made by multinational megacorps and spend your hard-earned money supporting the indie artists you love instead.
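If you do want to kick the tyres yourself, a minimal Ollama session looks something like this. This is just a sketch, assuming you’ve already installed Ollama from ollama.com, and the model name here is only an example, swap in whichever open model you like:

```shell
# Download a model's weights to your machine (everything runs locally after this)
ollama pull llama3.2

# Start an interactive chat session right in your terminal
ollama run llama3.2

# Ollama also serves a local REST API on port 11434, so you can script against it
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'
```

The whole thing stays on your own hardware, which is rather the point of self-hosting in the first place.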

In even the most oppressive dictatorships, and this is the one thing I will give Capitalism its credit for, Capitalism at least gives you, the labourer, the right to vote with your time and your dollars on things you deem important. Most Chinese people aren’t actively political, and those who are spend their time debating within factions of the CCP to get the policies they want passed. Even in dictatorships, democracy reigns in some way or another. You might not get to pick the leader, but at the end of the day, do you really need to? What matters is the consensus more than anything. Are things getting done? Are roads getting built? Are houses getting built? Is the machinery of government working to the betterment of most people? If it is? Fantastic. If it isn’t? If you feel you can enact change, enact it. If you feel you can’t, focus your attention on something more meaningful, like, idk, building cool stuff out of wood, or writing songs, or spinning up a Generative AI to make your own version of Google Assistant.

AI will only take your job if you squander your role and make yourself useless. It’s a tool you can use to make you more powerful, and more productive. It only spits out what you choose to put into it. And that’s kinda neat to me.

In short, I am no longer afraid of AI and the desires of capitalists to take my job away from me, nor do I think it’s the greatest thing ever invented that all these AI glazers think it is. It’s just a tool. A tool you can use to make or destroy. It’s entirely up to how you spend your time using it.

Beano out. Thanks for reading. 🙂