AI is theft

Tue 23 Aug, 2022
NOTE: this blog post got a big rewrite on the 12th of January 2023 as the previous version was too verbose and dramatic!

Here's my rant about AI. I wrote this because I think the way we talk about AI is dangerous in the long term, and not for the nerdy reason of AI becoming superintelligent and taking over the world. It's dangerous because we treat what are essentially dumb but fast machines as intelligent creatures. That sells the illusion that people shouldn't be valued, or even worse, that machines whose output comes from people are somehow more intelligent than us, which makes us close our eyes to the mistakes these machines make.

To keep my rant quick, here are the arguments for why it's a scam:

  1. There is no meaningful distinction between what is and isn't AI

    If you look at what has been called AI over the years, it can't just be identified with the current methods. There have been all sorts of ways of doing "AI" which are wildly different from each other, yet all fall under that umbrella term. To me that implies it's not really a rigorous term. A clock, for instance, is a device used to count time, and there have been multiple ways of constructing clocks over the years. You can use the shadow cast by the sun or moving gears, yet it remains a device for counting time. AI, on the other hand, can be virtually anything. You can have AI image generation, AI autopilot, AI presentation software. Which seems fine until you realize you can remove the AI term completely and not much changes. AI image generation software, for instance, becomes just image generation software. There's no need to imply a myth of intelligence in there. It can just be software and you aren't missing anything (outside of VC money and myth making).

    And you can see nearly every piece of software as AI. Google Maps is a good example. What's really stopping someone from claiming that an intelligence is checking whether there's traffic on your route and calculating the fastest way to your destination? The algorithm that finds the quickest road even comes from AI research, as there's a very good chance that what Google Maps uses is just the A* algorithm (there's a minimal sketch of that kind of route finding after this list). Yet no one thinks of Google Maps as intelligent in the same sense that a person is intelligent. It's a useful and impressive piece of engineering, no doubt about that, but it's not intelligent. Or rather, it's not new and sexy enough to be called AI.

  2. Artificial intelligence and natural intelligence might be two different things

    If you go against the people who really believe in AI, you will get hit with the argument that people, too, are essentially just computers, and that by making a distinction between people and machines you are just backwards. Kinda on the same level as denying evolution.

    And well, that people are just computers might be true, but only in the most liberal sense of the term computer. It essentially comes down to saying that people can be seen as information processing systems. Which is true. But a better question might be: what can't be seen as that? If I look out of my window, I see trees which can easily be seen as information processing systems. I see a street light connected to a grid which can be seen as an information processing system. The whole town I live in can be seen as an information processing system. The whole universe can be seen that way, for that matter. It's as general a term as "organism". Yet no one would try to equate a cactus with an elephant even though both fit under the term organism.

    So with that in mind, can we see the current fad in AI not just as an information processing system, which is neither hard to find in nature nor hard to make, but as an intelligent system? I would say not. It's somewhat good at pretending it is, though. Going over every example would take forever, so let's take the issue of language.

    AI language systems in use today work by correlating which words tend to appear together, based on previously seen text (a toy sketch of that kind of word correlation follows after this list). That can indeed produce text, but it would be ridiculous to expect actual meaning behind it, since language alone can't produce meaning. Language works because we have some understanding of the world in a non-language-based way and we just assign words to it. The word "cat" by itself is meaningless, and trying to describe a cat by language alone is borderline impossible. Imagine you met an alien and had to describe a cat to it. You would say it's a pet on four legs that has whiskers. The issue is that not only is that a good description of a dog, you would also need to describe what a pet is. To top it off, not all cats are pets. That information isn't in the language; language just gives us terms to refer to things out there. So how could one possibly make a system that understands language if you need something deeper than language to understand it?

  3. AI steals work from people

    As mentioned above, all the so-called AI models do is find correlations between the text, images, or whatever you give them, and produce outputs based on that. Which indeed can produce a perfectly valid piece of text or image. The problem is that these machines wouldn't work without the data people provide, and that data is never paid for; in most cases it's taken without even asking.

    The issue is that the user of the AI system might get what he wants for a low cost, but no money goes to the people who provided the data and made the thing able to produce any output at all. It's just not sustainable, in the same way that cutting down trees without planting new ones is not sustainable. Which kinda raises the question: why would anyone produce anything if it will never be seen in the sea of AI work, and for that matter, never be paid for?

    Of course, you might ask why pay for someone's data if their work will be automated using whatever data we can get. The issue is that AI systems just aren't intelligent. They just correlate the data they're given and produce output based on that. And we never keep doing things the same way. We don't construct buildings the way we did a century ago, we don't clothe ourselves the way we did a century ago, we don't even speak the same as a century ago. For a self-driving car to exist, someone needs to drive a car first and collect data that shows how driving should work. It's not hard to imagine that if we had achieved full automation a century ago, the commercial car might never have existed, since no one could have afforded one on meager UBI checks.

  4. It enables more bullshit

    The last one is that it's undeniable how much bullshit AI has produced and will produce. By that I mean all the dumb targeted ads you see nearly everywhere on the web. All the suggestions of what video to watch next, which don't just lock people into a bubble, they also make the bubble as "shocking" and "enraging" as possible, since that's an easy way to get views. All the beauty filters that make your face or body thinner and leave people, at times, comically insecure.

    I saw someone joking online that he would take pics and have an AI rate them for him to get the optimal result. And it hit me that something like that will probably happen soon, if it's not happening already. And you might say "so what?" once again. The reason is simple: it's bullshit. It doesn't really improve anyone's life or help anyone. And even those who do like some of these things will usually say it's not worth it. You can find plenty of people who feel miserable after spending hours a day on some online service.
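
As promised in point 1, here is a minimal sketch of the kind of route finding a Google Maps-style program might do, written in Python. The road network, place names, and travel times are made up for illustration, and a real routing engine is far more elaborate, but the core of A* is just this kind of graph search with a priority queue and a cost estimate. Nothing about it needs the word "intelligence".

import heapq

def a_star(start, goal, neighbors, heuristic):
    # The frontier is ordered by estimated total cost: cost so far + heuristic guess.
    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, step_cost in neighbors(node):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]))
    return None  # no route exists

# Made-up toy road network: travel times in minutes between spots in town.
roads = {
    "home":    [("main_st", 4), ("side_st", 2)],
    "side_st": [("main_st", 1), ("mall", 10)],
    "main_st": [("mall", 5)],
    "mall":    [],
}

# With a zero heuristic this is just Dijkstra's algorithm; a real map would
# use something like straight-line distance as the estimate.
print(a_star("home", "mall", lambda n: roads[n], lambda n: 0))
# -> (8, ['home', 'side_st', 'main_st', 'mall'])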
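
And here is the toy word-correlation sketch mentioned in point 2, also in Python. It's a deliberately tiny bigram model, nowhere near the scale or machinery of real language systems, and the corpus is made up, but the principle it illustrates is the same: the output is assembled from which words were seen next to each other, not from any grasp of what a cat or a mat actually is.

import random
from collections import defaultdict

# Made-up example text; the only thing this "model" ever sees is word order.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Record which words were observed right after each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def babble(word, length=8):
    # Repeatedly pick some word that was seen after the current one.
    out = [word]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the dog slept on the mat and the cat"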

So what?

The point, though, is not that AI is bad. Obviously it can be useful. I don't think people doing dangerous jobs would mind if we had robots to do them instead, so long as those people don't end up in some bad place. And a bad place can certainly be one where you have to beg for UBI checks while the data that makes you valuable in a digital economy is taken from you without compensation.

The issue is how to build a digital economy where people get to be in control of their data and have machines they can use to make their work easier and their lives more enjoyable. It's not impossible to imagine designing tools around human-centered design while keeping in mind the potential bad consequences of irresponsible technology use. We have a history of polluted rivers and children working in mines to remind us what irresponsible technology use looks like.

Where we are heading now is toward less job security and more inequality, which doesn't lead to a place that's worth living in. If we can find some way to make wealth concentrate less at the very top and spread around instead, we can have a much more humane world to live in.

