There was an important and interesting discussion of AI models a few months back, when Deepseek had just come out.
And it looks like AI is quickly becoming more and more impactful on all our lives.
So I've decided to take some of my answers from that discussion and turn them into a more comprehensive Q&A, which I hope people will find interesting.

As I see it, AI will revolutionize the world, as many other inventions did in the past:
Electricity, the car, the airplane, the computer, the printing press, etc.
But those inventions initially affected only one or a few fields, and it took decades or even centuries until their impact spread to many aspects of our lives.
With AI the progress is much faster: all of us either already feel, or will feel within a few years, how it affects our lives, jobs, schools, etc.

Questions & Answers:

Q: What is AI?
A: AI stands for Artificial Intelligence.
Computer software able to perform tasks that require thinking, logic, creativity, etc. as well as or better than humans.

Q: How is AI built? What are Neural Nets / Neural Networks?
A: The basis of modern AI is a concept called "Neural Networks", which is loosely modeled on the way our brains work.
A neural network is a collection of nodes; we "teach" it by creating and strengthening connections between the nodes.
For example, show a child enough pictures of a cat while saying the word "cat", and they will start to associate the word with the image (a neural link is created in the brain).
Similarly, if you show a neural network enough pictures of a cat and let it know each one is called "cat", it will eventually be able to look at a picture and "predict with high probability" whether there is a cat in it (a neural link is created in the network).
The same principle can be used to train (teach) models not just on images, but also on audio, video, text, etc.
For more information see https://medium.com/@jereminuerofficial/neural-networks-for-dummies-bc1ed3f69027
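
As a toy illustration of how those "neural links" get strengthened (this is not how real image models are built - the data and all numbers below are invented for illustration), here is a single artificial neuron learning to separate two groups of points; the weights play the role of the links:

```python
# A single artificial neuron (perceptron) "strengthening its links".
# Toy data: a point is labeled 1 ("cat") when x + y > 1, else 0 ("not cat").

data = [((0.9, 0.9), 1), ((0.8, 0.5), 1), ((0.1, 0.2), 0),
        ((0.3, 0.1), 0), ((0.7, 0.6), 1), ((0.2, 0.4), 0)]

w = [0.0, 0.0]   # the connection weights - our "neural links"
b = 0.0          # bias term
lr = 0.1         # learning rate: how much each correction changes the links

def predict(point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

for _ in range(100):                    # show the "pictures" again and again
    for point, label in data:
        error = label - predict(point)  # -1, 0 or +1: was the guess wrong?
        w[0] += lr * error * point[0]   # strengthen/weaken each link
        w[1] += lr * error * point[1]
        b += lr * error

print([predict(p) for p, _ in data])    # [1, 1, 0, 0, 1, 0] - matches the labels
```

After enough passes over the data, the weights settle so that every point is classified correctly - the same "repeat and reinforce" loop, just with two links instead of billions.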

Q: What is Machine Learning (ML)? What is Data Science (DS)?
A: Machine learning is the profession of training various "automated computer models", or "machine learning models". These models are encompassed by the general term AI.
There are many, many ways of building machine learning models.
The most common and well known is Neural Networks (explained above).
Data Science is the field that studies (among other things) the training of machine learning models.
People who build/train ML models are usually called "Data Scientists" or "ML engineers".

Q: How do AI models create images/videos/songs/texts? What is a GAN (Generative Adversarial Network)?
A: I explained above how Neural Networks are able to learn to "understand" visual, audio or other input.
A Generative Adversarial Network (GAN) is what enables Neural Networks to create output.
We basically pit 2 Neural Nets against each other.
For example, say we want a Neural Net that can draw Picasso-like pictures.
We take a blank model that can draw, which starts out drawing pictures of random pixels (the "generator").
We also take a model trained on existing Picasso pictures, which can tell us as a percentage (0% - 100%) how sure it is that a picture it's shown is a Picasso (the "discriminator").
Each time the first model draws a picture, the second model analyzes it, gives it a percentage score of how close it is to Picasso, and feeds that score back to the first model.
Each time the score goes up, the first model strengthens the neural links that were used for that image generation.
Each such iteration can take a split second. As more and more iterations pass, the "correct" neural links are strengthened, and the first model draws pictures that get higher scores from the second model (i.e. look more like Picasso).
After millions, or even billions, of iterations, the first model passes a benchmark defined by the ML engineer (for example: above 80% accuracy), which was determined to be "enough" for Picasso-like pictures.
If we ran the first model until it reached 100% accuracy - that would actually be bad. It's called "overfitting": every picture it created would just be a copy of an existing Picasso picture.
And that was not our goal.
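
The adversarial feedback loop can be sketched in a few lines. To be clear, this is NOT a real GAN (real ones train two neural networks with gradients); it's a toy where the "generator" is a single number nudged by hill climbing and the "discriminator" is a fixed scorer, with all data invented for illustration:

```python
import random

# Toy version of the adversarial loop: the generator keeps any change
# that the discriminator scores as "looking more like the real data".

real_data = [9.8, 10.1, 10.0, 9.9, 10.2]        # stands in for "Picasso pictures"
target_mean = sum(real_data) / len(real_data)   # what "looks real" means here

def discriminator(x):
    """Score 0.0-1.0: how plausible x looks as real data."""
    return max(0.0, 1.0 - abs(x - target_mean) / 10.0)

random.seed(42)
generator_param = 0.0                           # starts out "drawing random pixels"
for _ in range(5000):                           # each loop = one split-second iteration
    candidate = generator_param + random.uniform(-0.5, 0.5)
    if discriminator(candidate) > discriminator(generator_param):
        generator_param = candidate             # keep the change: the score went up

print(round(generator_param, 1))                # ends up very close to 10.0
```

After a few thousand iterations the generator's output is nearly indistinguishable from the real data's average - the same "draw, score, adjust, repeat" cycle the text describes, in miniature.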

Q: What are LLMs (Large Language Models)? Where do they come from?
A: LLMs (Large Language Models) were first created 2 years ago, and used to power the first iteration of ChatGPT.
While ML (Machine Learning) models have been with us for 10+ years, LLMs are relatively new.
The innovation of LLMs is that they showed us a product that's bigger than the sum of its parts:
You start by taking an ML model that predicts your next word.
Think of writing a message on a cell phone, and it predicts your next word.
But it turns out that if you train it on a large enough data set, you don't just get a system that is really, really good at predicting your next word and can't do anything beyond that.
Instead you get a system that can actually write a book as if it were Isaac Asimov.
Or discuss any topic you want with you.
Or write a computer program.
Or do the countless different things we use ChatGPT for today.
It's living proof of the Infinite Monkey Theorem: a monkey hitting random keys for an infinite amount of time will eventually type out the complete works of William Shakespeare.
LLMs are what powers all of the models we now call AI (Grok, Perplexity, ChatGPT, Deepseek, Llama, etc.).
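
The "predict your next word" starting point can be shown with a tiny bigram model - a drastic simplification (real LLMs are nothing like this small), with a made-up corpus:

```python
from collections import Counter, defaultdict

# A tiny "predict the next word" model: count which word follows which.

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)        # word -> counts of the words seen after it
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" - it followed "the" most often
```

Scale the corpus from eleven words to trillions and the model from one counter to billions of weighted links, and - per the argument above - something qualitatively different falls out.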

Q: How are LLMs different from previous ML models?
A: The main difference between LLMs and previous ML models is the fact that they are large.
Which is an understatement.
They are HUGE.
And each time a new model is created, it's made more and more HUGE. Like 10x, 100x, 1000x bigger every iteration.
Which has caused several changes in the technology market:

  1. Small companies can no longer compete and innovate in the AI space.
    You cannot be the next Google or the next Meta of AI.
    Because you need the resources of a behemoth like Google, Meta or Microsoft to create and train a modern LLM.
    Even Apple doesn't have its own LLM yet. That's how difficult it has become to create/train one.

  2. To train a single LLM, you now need HUGE (HUMONGOUS) computing power.
    Literally thousands or tens of thousands of powerful processors.
    And it's more efficient to do on GPUs (graphics cards).
    (For more info, search for "CPU vs. GPU for machine learning".)
    This is also the reason for the rise of the Nvidia stock in the last 2 years: huge numbers of GPUs are needed to train LLMs, and the bigger and more complex the LLMs, the more GPUs are needed for their training.
    In addition to that, training takes HUGE amounts of electricity.
    To the point that companies working on new LLM models are now looking into building their own nuclear power plants to support their electricity needs for training AI models.
    Which is why it costs Billions (with a B) of $$$ to develop new LLM models, and only huge & rich corporations can afford to do it.
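
To get a feel for the scale, here is a back-of-the-envelope calculation. The model size and number format are illustrative assumptions, not figures from any specific company:

```python
# Back-of-the-envelope: memory needed just to HOLD an LLM's weights.
params = 70e9             # assume a 70-billion-parameter model (Llama-scale)
bytes_per_param = 2       # assume 16-bit (2-byte) numbers per weight
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb))  # 140 GB - far beyond any single consumer GPU
```

And that's only storing the model; training also needs gradients, optimizer state and activations, which multiply the memory several times over - hence the racks of GPUs.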

Q: What is Deepseek? Why are they popular? How are they different from OpenAI?
A: Deepseek is a small start-up which (supposedly) was able to develop its own LLM models.
From what was written above, one would presume that this would be impossible, as it would cost them Billions of $$$, which small start-ups don't have.
But they claim to have trained their model on a "minuscule" budget of only $6,000,000.
It seems they were able to do it by using a lot of optimization, by cutting corners (their model is slightly less accurate than the best ChatGPT model, but is vastly cheaper), and by using existing models (ChatGPT, Grok, Llama) to participate in the training of their model.

Q: What is an "Open Source LLM"? How is it similar/different from an "Open Source Software"?
A: Open source software means getting access to the source code of the software.
For example, say software was a table:
being "Open Source" would mean you get a full list of the parts you would need to build it, and complete step-by-step instructions on how to build it yourself.
If you take the same components and follow the instructions exactly - you end up with an identical table.
On the other hand, ML models are "black boxes" by definition.
Which means we can train one, but we don't really know or understand how it works inside.
We don't know why it makes one choice and not another - it's all based on the way the millions of neural nodes got linked inside its "brain".
It's like trying to predict what a person is thinking by looking at a CAT scan of their brain.
So "Open Source LLM" is like getting a smartphone with usage instructions.
You can take pictures with it, you can make calls, you can connect it to earphones.
But you can't actually build one yourself based on the instructions you received.
Providing an "Open Source LLM" (more precisely, open weights) basically allows you to run it on your own computer for free (instead of running it on the company's computers and paying them for it).
But it doesn't provide any information on how it works, or how you can build one yourself.

Q: What are ChatGPT / Grok / Deepseek/ etc.? How are they built?
A: See answer to "What are LLMs (Large Language Models)?"

Q: What's the difference between AI and AGI?
A: TBD

Personal Opinions:

In this section of the Q&A I will post Questions & Answers which are not dry facts (like in the section above), but instead represent my personal opinion on the matter.

Q: Is AI going to replace us, make our jobs obsolete, and leave us poor and homeless?
A: The way I see it, we can look at several options:

  1. One extreme is - People will be too afraid of AI and will outlaw it.
    This is an extreme option, and I don't think it will happen.

  2. The other extreme is - AI will replace humans in jobs, so a large percentage of the population will become unemployed.
    There is one problem with this scenario - the one thing politicians love more than money is being reelected.
    And as long as we live in a world where the people decide which politicians to vote for - there will be enough populist politicians who will do whatever people want. Whatever gets them elected.
    For example, say too many people become unemployed because they were replaced by AI.
    It doesn't need to be 80% or 50% of the people.
    It's enough for it to reach 10% - and people will go to the streets.
    Not just the 10%. But their brothers, children, cousins, uncles. People love righteous causes.
    And it's not like the police are going to suppress the demonstrations. The police themselves have mothers, brothers and cousins. They don't want them unemployed. So the police will be part of the demonstrations as well.
    So do you think the army will stop them? Think again. Soldiers, officers and generals also have mothers, brothers, cousins...
    Suppressing popular opinion only works when the people demonstrating are a small minority, or when the people suppressing them are a tight-knit group.
    It doesn't work in democratic societies.

  3. So what we're left with is option #3: what's going to happen is something in between.
    Just like the fact that we have much more advanced monitoring technologies than described in the book "1984" doesn't mean we live in a dystopian society,
    the fact that AI will be advanced doesn't mean we'll live in a dystopian AI future.
    Think of a rickshaw driver. All he needs to do is pull the rickshaw, and he gets paid for driving people around.
    Now we put him in a car. His life is much more complicated: he needs to learn how to drive, he needs to pay for insurance, he needs yearly checkups on his car, he needs to fill it with gas.
    But he does much less manual labor. And he can get places much faster, which means he can take more passengers and make more money every day.
    The same thing will happen with AI.
    Let's say someone makes money by drawing advertisement posters.
    He needs the mental skills to invent a poster, he needs to know exactly what he wants to draw, and he needs the physical skill of drawing precise lines, to be able to turn his ideas into reality (into posters).
    Now along comes AI, and takes away the manual labor.
    He still needs to come up with a poster idea. He still needs to visualize it in his head to know what he wants. But he no longer needs to draw it (or have the skill to draw it). He can use AI for that.
    So now his life is more complex. He needs to learn the skill of using AI - of writing the prompt that will get the AI to generate an image matching his vision. But he no longer needs the skill of drawing it himself.

Q: Will AI replace IT workers in the near future? They are needed to write code, but AI may do all of that for them.
A: AI can write code.
But AI doesn't want to write code.
AI doesn't want anything actually. It's incapable of wanting.
It needs a human to tell it what to do.
And the human telling it what to do, needs to understand himself what needs to be done before asking.

It reminds me of a sci-fi short story, "Ask a Foolish Question" by Robert Sheckley,
about a supercomputer built by an advanced race that could answer any question in the universe.
But no one asking it questions could get the right answers out of it. The computer knew the answers to their questions, but couldn't give them, because people weren't asking the right questions.
In order to be able to ask the right question you need to already know 90% of the answer.

Same with AI.
In order to be able to explain to AI what software you want it to build for you, you need to first be able to build the software yourself.

Feel free to ask additional questions, and I will add them to the Q&A.

2 months ago*

Very interesting.

I also learned not too long ago that the whole AI thing, like ChatGPT, isn't smart. It's just predicting the next words.

the whole AI, like chatgpt wasn't smart

A fact that's made pretty clear once you start trying to have a basic conversation with it.

Will the humanizing of AI become a major problem? Many language models already seem to be on this path, and I'm not sure it's a very good thing... Saying thank you never hurts, I guess.

What do you mean by "humanizing of AI"?
People treating AI as if it was human?
AI wanting to be treated as human?
People wanting AI to be considered human?

You would think so.
"How pricey can politeness be? According to OpenAI CEO Sam Altman, the company's electricity bill is “tens of millions of dollars” higher simply because people say “please” and “thank you” to the chatbot."

Q: What's the best thing about generative AI tools?
A: They can alleviate the stress of creating images/videos/songs/texts, so you'll have more time to dust shelves, clean the dishes, and operate your washing machine.

:D

One time I stepped off a bus, looking out at the ocean, when my old boss rang me. He was confused why I didn't want to buy a last-minute train ticket back home to work part-time hours at a minimum-wage job that wouldn't earn back the money spent on travel and the accommodation I was going to stay at.

At the time I thought: what's the point of earning money if it can't be enjoyed? Hopefully soon AI will be advanced enough that it can enjoy vacation on our behalf, while we earn money to keep it running.

Will AI replace IT workers in the near future? They are needed to write code, but AI may do all of that for them. There will definitely be some of them left to work on AI itself.

It squeezes out some of the entry/junior jobs, because a mid-level or senior can task AI with that sort of work. Right now it's mainly making people more productive rather than fully replacing them. But if I only need 10 devs to do the work of 30, I might want to cut costs and fire 20 of them.

Because people just grow from the womb as "seniors" so that shouldn't be a problem.

Companies just assume that's someone else's problem. They want to hire the fruits from other people's labor. It's short-sighted for sure.

I think it'd be unwise for any company to replace their human IT workers - who are capable of understanding what they're doing and coming up with novel solutions - with what's essentially predictive text on steroids. Current AI is little more than a very sophisticated chat bot, capable of quickly remixing its vast amount of input data into something that seems new at first glance, but fundamentally unable to come up with anything actually new, or to process any of what it spews into a coherent concept. In engineering there's a saying that there are as many solutions to a problem as there are people trying to solve it (paraphrasing from memory), but I guess with AI-generated solutions there are only going to be as many solutions as it received as input.
Don't get me wrong, it's evidently a good tool if you just want a quick and dirty solution, similarly to how programmers usually copy-paste chunks of code when they can't be bothered to write brand new code for something somebody else has already done. But it's likely going to be pretty bad at optimizing or fixing bugs. Not to mention that if the original code used as input wasn't sourced in a way that can stand up in court - which, going by reports, it likely wasn't - it could turn into a legal nightmare down the road; not for hobbyists, but for actual big companies.

AI is already replacing IT workers.

AI can write code.
But AI doesn't want to.
It doesn't want anything actually. It's incapable of wanting.
It needs a human to tell it what to do.
And the human telling it what to do, needs to understand himself what needs to be done before asking.

It reminds me of a sci-fi short story, "Ask a Foolish Question" by Robert Sheckley,
about a supercomputer built by an advanced race that could answer any question in the universe.
But no one asking it questions could get the right answers out of it. The computer knew the answers to their questions, but couldn't give them, because people weren't asking the right questions.
In order to be able to ask the right question you need to already know 90% of the answer.

Same with AI.
In order to be able to explain to AI what software you want it to build for you, you need to first be able to build the software yourself.

long story short
it doesn't fully exist yet
ai can be used for both good and evil
the massive amounts of heat it creates and the electricity it's using right now.... not good

the massive amounts of heat it creates and the electricity it's using right now.... not good

Humans create heat and use electricity.

We also have an overpopulation problem.
We're past the point of no return on most things.
It is now 89 seconds to midnight - see the 2025 Doomsday Clock Statement.

Humans create heat and use electricity.

Compared to the energy use of AI, that's like saying "birds also create carbon dioxide when they breathe, so why should we bother with cars and corporate pollution?"
Also, AI doesn't just use vast amounts of energy (which is probably 10% of what it will be in a year, if it continues at this rate unchecked); it also uses MASSIVE amounts of water for cooling servers.

We are living on a knife's edge due to our over-exploitation of our natural resources, and we just created a toy that multiplied that problem by a million.

Compared to human workers, AI uses way less energy. For each hour of human-work we replace with AI-work, it's a huge benefit for the environment. Humans produce a lot of greenhouse gases.

Any actual source or data?

Also, those people will not be killed. At least not yet. They will literally still exist - just not in your company - and they will still be producing greenhouse gases somewhere else.

https://www.nature.com/articles/s41598-024-54271-x

I could have AI generate more books in a month than all humans ever have, and it wouldn't use all that much energy. If you do AI artwork, think of how many hours go into regular art vs. telling AI to generate 100x the pieces of art. AI is really fast and efficient compared to humans. My point here is that the energy use of AI at this point is negligible compared to other things we do. I'd imagine a single plane ride is more emissions than my lifetime's use of AI.

From ChatGPT:

A one-way flight from New York to LA emits about 550 kg CO₂, as much as 9,000+ hours of desktop computer use—that’s over a full year of continuous use (24/7) or more than a decade of 2–3 hours per day.

A one-way flight from New York to Los Angeles uses about 3,250 kWh of energy per passenger—roughly the same as running a desktop computer nonstop for 2.5 years or a laptop for over 12 years.
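
For what it's worth, the quoted figures roughly check out if we assume a ~150 W desktop PC (the wattage is my assumption, not from the quote):

```python
# Sanity check of the quoted flight-vs-desktop figures.
flight_kwh = 3250               # quoted energy per passenger, NY -> LA
desktop_kw = 0.150              # assumed desktop draw: 150 W
hours = flight_kwh / desktop_kw
years_nonstop = hours / (24 * 365)
print(round(years_nonstop, 1))  # 2.5 - matches the "2.5 years" in the quote
```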

But again, those people you think you are replacing with super-efficient AI still literally exist. They still produce art. They're just not getting paid to do it anymore, because "AI is cheaper", so you've just added an extra source of energy consumption on top of the one that already exists.
Also, it's a fallacy to believe AI is so efficient that something a human would take hours on can be done in 2 seconds. In the context of corporate life, that's just not going to happen.
The AI is going to take 10 seconds to create a first image. People are going to discuss the result. Another bunch of images, each taking as much time or more, is going to get created and discussed. All you did was cut out the middle man. It's still going to take as much time to get to, I don't know, an ad, as it did before. And probably way more energy, because people think it's costless, so they're going to generate 2000 images where a human would have done 4 drafts.

Did you read that study, btw? Their method of "approximating" the emissions of a human writing is pretty funny and completely theoretical.

I'd imagine a single plane ride is more emissions than my lifetime's use of AI.

But again: plane rides are going to keep happening whether you use AI or not. You just add more to the problem.

And by the way, since you're asking ChatGPT questions like these, you should know that it uses about 1,500 times more energy than doing a Google search.

Humans create heat and use electricity.

Electricity and heat... and all the water for cooling.
But it's ok. Some AI companies are already talking about building their own nuclear power plants to "solve" their energy problem, so it's all good...

You haven't touched on the ethics of training models on other people's work without express permission. You really shouldn't leave out that important detail when it's already costing people in the creative industries their jobs.

Or the environmental cost of people playing "draw me a picture" for hours because they have nothing better to do, or using "AI" to basically do a Google search or summarize Wikipedia.
Or the human cost of replacing people with software.
Or AI hallucinations, and what they mean when AI is supposed to replace humans, especially in fields like medicine, the law and critical systems... as well as armed drones operated by AI.

And let's not forget the fact that soon it will be impossible to tell an AI-generated image or video from a real one, which should, I think, worry a lot of people, but somehow doesn't, because the media is too focused on it being shiny.
There won't be a court in the world that will be able to use photographic or video evidence anymore.

Thank you, I came here looking for this. I find it's missing from most discussions about AI. Why should we use tools built with stolen materials? Would we collectively buy cars made from stolen parts?

But you do.
All the time.
Your car is filled to the brim with parts, none of which were invented by the car manufacturer, and for none of them (or almost none) does the manufacturer pay royalties.

I'm confused. Are you conflating patents and copyright? Patents must be applied for and granted. Copyright is intrinsic and automatic. As far as I know, all major LLMs were trained on at least some copyrighted material without permission. Using an auto part that is public domain is not breaking anyone's copyright.

There is a painting called "Black Square" that is worth $85,000,000 - https://www.artsy.net/artwork/kasimir-malevich-black-square-1
The square itself would be no different if I, you, or an AI drew it.
Yet ours would not cost $85 million, while the famous painting does.
It's not the painting, it's the artist.
If I tell an AI to draw a painting like Picasso, I won't be able to sell it for the sums Picasso paintings are sold for.
Nor would it devalue the original Picasso paintings.
Even when Picasso was alive, there were street painters who, for a small fee, would draw you a Picasso-style painting.
How exactly is that devaluing Picasso's work? Would that have cost Picasso his job? Because other people can paint in the same style?

The modern Picasso would be fired, because his company no longer needs him to draw paintings/comics/illustrations/etc. They have a model trained on the works of people who received no compensation for it - people who are being fired for the same reasons.
Respect for putting a critical perspective in the OP with an edit. But saying that politicians will have to do something to compensate for all the lost jobs because politicians want to be reelected is really silly in 2025, when we have multiple examples of modern developed countries where politicians ignore the people when their corporate donors tell them to as a rule, not an exception. And breaking election promises, among other things, doesn't stop a moron from maintaining a massive cult of personality.
Trying to give the AI industry an inch by assuming good faith has already led to it taking a mile. People are losing their jobs. AI is making borderline arbitrary decisions that affect people's lives in a massive way, like in the hellscape that is the American healthcare system. Corporations train their own models on anything they can get their hands on without caring about permission, compensation or ethics, while still pushing for more and more copyright enforcement of their own properties. Why should we continue to assume good faith? Your ideal scenarios are absolutely possible, but nobody in power will bother implementing them when they don't have to. So while there absolutely can be a world with checks and balances, social safety nets, etc., where AI can be used en masse without ruining the lives of many people, I see no reason to budge even a little until those safety nets are in place. Because if we accept AI solely on promises to be made good on in the future, they'll quickly be forgotten, and people will return to saying "just accept it, that's how life is now".

The modern Picasso would be fired because his company no longer needs him to draw paintings/comics/illustrations/etc because they have a model they trained on works of people that received no compensation for that.

He would not, because:

  1. Picasso was a freelancer, not a salaried employee

  2. Picasso was drawing something new, in a style no one had created before him.
    AI can't do that. It can only copy things that already exist.
    Compared to Picasso, AI is a street hustler offering you a Picasso-like picture for $5 on the street.
    If he sells 1,000,000 fake Picasso-like pictures, it does not devalue the work of Picasso, or reduce the price of any of his originals.
    If a person's only skill is a steady hand that lets him copy an existing picture 1-to-1 - AI will make his job obsolete.
    If a person's job is to create completely unique and original art - AI can't replace him.

In general:
The point of this Q&A is not to compare the merits and drawbacks of AI.
It's not to convince people to support AI or not to resist AI usage.
The point of this Q&A is that AI is already here.
It's already impacting us. All of us.
So you better understand what it is, and what it does.
If you think it's a useful tool - you'll benefit from understanding it better.
And if you think it's your foe you need to fight against - you'll benefit from knowing your foe better.

With point 1 you might as well say he would not be fired because he's not from modern times, so he wouldn't exist. We're not talking about visionaries who can somehow sustain their living in spite of the instability of freelance work - and if you are, then congratulations on not caring about 99.9% of artists.
Point 2 - and how would you know the difference if you didn't know Picasso? If I show you 3 pictures in a similar interesting, unique art style, 2 of which are AI slop from a model trained on the original artist, you wouldn't know the difference (assuming the AI doesn't give itself away the way it does for now). You wouldn't know who deserves the original's price tag and who ought to get a mere fiver.
And it's not really about whether AI actually can make one's job obsolete. It's about whether one's boss thinks AI can replace one. And the answer, in many cases, is yes. Already. This isn't fear of the future; it is already happening.

  1. Yes, Picasso was a visionary.
    That's exactly what we're talking about - whether AI will replace the visionaries.
    Not whether AI will replace the starving artist who works as a barista in Starbucks, but continues to draw, hoping someone will "recognize his talent" and he'll magically become rich and famous.
    There is a concept called the "starving artist".
    It wasn't invented by AI.
    Most artists, for most of human history, were not making much money.
    For every Da Vinci, Picasso and Raffaello, there were millions of nobodies dreaming of making it big and not having enough talent for it.
    As I already mentioned, the skill that's no longer needed is the skill of having a steady hand.
    The skill of using your mind to create art is needed, and will remain so as long as there are humans.

  2. During Picasso lifetime there were millions of "not as successful" artists living in the world, 99.99% of them were nowhere near Picasso in talent.
    What stopped them from drawing pictures in the style of Picasso?
    If I show you 3 pictures in Picasso's style you don't know, 2 of which are made by skilled copiers and the 3rd by Picasso himself, you wouldn't know to tell the difference.
    Does it mean all of them deserve the same price tag?
    Does it mean all of them will fetch the same price tag?
    No. Because only one was painted by a famous artist. And the two others were painted by nobodies. Does it matter if fake Picasso pictures were painted by 2 humans that have "a steady hand" and are able to accurately copy Picasso's style, or by AI that can accurately copy Picasso's style? Not really.
    The only difference is that if you buy Picasso knockoffs from 2 humans, they get your money.
    And if you buy it from AI - it gets your money.
    In the past, "calculator" was an actual profession - people paid to do calculations by hand. Not anymore.
    Would you prefer to pay a calculator to do your math for you? Or use the electronic device in your pocket?
    FYI, your pocket calculator cost the jobs of thousands of people who used to rely on calculating for a living.

  3. You addressed the first half of what I wrote, but completely ignored the second half:

    The point of this Q&A is not to compare the merits and drawbacks of AI.
    It's not to convince people to support AI or not to resist AI usage.
    The point of this Q&A is that AI is already here.
    It's already impacting us. All of us.
    So you better understand what it is, and what it does.
    If you think it's a useful tool - you'll benefit from understanding it better.
    And if you think it's your foe you need to fight against - you'll benefit from knowing your foe better.


I didn't answer the part that says "it's already here and there's no point in fighting" because I preempted it - you'll keep saying that every step of the way, no matter how far it goes. For the sake of those dear to you, I hope you have the sense to at least stop when they're the ones affected.
And you've yet to address the ethics of training models on stolen artwork. A person teaching themselves to draw like a specific artist is still a human being who put in real effort, whose art will evolve in its own direction. You putting a bunch of terabytes of stolen art into a digital black box, then burning a small forest's worth of electricity just to put out worse versions of other people's hard work while barely putting any effort into writing prompts, does not contribute to society nearly as much as a person coming up with their own ideas and creating something new themselves.


1.

"it's already here and there's no point in fighting"

I literally didn't say that.
Please don't put your words in my mouth.
If anything, I said the opposite:

The point of this Q&A is that AI is already here.
And if you think it's your foe you need to fight against - you'll benefit from knowing your foe better.

2.

And you've yet to address the ethics of training models on stolen artwork.

I addressed it directly.
Read my previous response.


  1. Good on you for not using direct language, but your whole message is "it's already here, it can't be prevented, so get used to it". Inserting a small bit acting like you're giving advice to some partisans doesn't change that.
  2. You didn't. Prove me wrong by quoting exactly where you've addressed the reality of people's work being used to train models without their consent constantly, routinely and at an enormous scale. Speaking about originals vs forgeries as a way to indirectly suggest that it's not a problem doesn't count.

RE 2:

During Picasso's lifetime there were millions of "not as successful" artists living in the world, and 99.99% of them were nowhere near Picasso in talent.
What stopped them from drawing pictures in the style of Picasso?
If I show you 3 pictures in Picasso's style that you don't know, 2 of which are made by skilled copiers and the 3rd by Picasso himself, you wouldn't be able to tell the difference.
Does it mean all of them deserve the same price tag?
Does it mean all of them will fetch the same price tag?
No. Because only one was painted by a famous artist. And the two others were painted by nobodies. Does it matter if fake Picasso pictures were painted by 2 humans that have "a steady hand" and are able to accurately copy Picasso's style, or by AI that can accurately copy Picasso's style? Not really.

And also:

In the past, "calculator" was an actual profession - people paid to do calculations by hand. Not anymore.
Would you prefer to pay a calculator to do your math for you? Or use the electronic device in your pocket?
FYI, your pocket calculator cost the jobs of thousands of people who used to rely on calculating for a living.


That's still not addressing it. If you're too much of a coward to directly say that you don't believe that artists make anything of value and their work shouldn't be respected, then that's that. But stop acting like you're saying anything with all these non-answers.


I think you missed a very important distinction: general vs. narrow AI. For the moment we're far from achieving the former, and while various tricks can give the impression that current systems are already intelligent, they're not. It's also the reason why, when AI fails, it can fail in spectacular fashion: it doesn't understand what it is actually doing, or how absurd a solution it might be working on.


Q: What is AI?
A: AI is Artificial Intelligence.
A computer run software, able to perform human tasks that require thinking, logic, creativity, etc. as well as or better than humans.

You lost me right there.
That's what AI should be. That's not what it is right now, and it won't get there from what is currently being called "AI". What is called "AI" now is a set of self-learning algorithms. They do not have logic or creativity, let alone thinking. They regurgitate data they have absorbed, according to parameters given to them by humans.
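The "regurgitate data according to parameters" description can be illustrated with a deliberately tiny sketch (my own toy example, not how real models are built): a bigram predictor that can only ever echo word pairs it saw during training.

```python
from collections import Counter, defaultdict

# A toy "language model": it learns which word most often follows
# each word in its training text, and can only ever reproduce
# patterns it has already seen -- no understanding involved.
training_text = "the cat sat on the mat the cat ate the fish"

counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    if word not in counts:
        return None  # never seen in training: the model has nothing to say
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- seen twice, vs "mat"/"fish" once each
print(predict_next("dog"))   # None -- the model never saw "dog"
```

A real LLM replaces this frequency table with billions of learned parameters and much longer context, but the core operation - predicting a statistically likely continuation of the training data - is the same, which is the point both sides of this exchange are arguing about.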


Same can be said of humans


Yeah but humans have not been created specifically to be smart or talented. There are smart and talented people, it just happens.


Who knows, maybe smart and talented AI models will "just happen" to be created...


Sure. And maybe aliens will visit earth one day. In the meantime it's still science fiction.


AI is literally working on writing my code while I type this message. It's scary good.


Industry already changed the life of human beings; AI will possibly do the same - fewer jobs, and fewer people "needed".
And the bad things, like pollution: we didn't "clean" the planet, and we live with the consequences. It will be the same with AI in a few years. For the powerful and the few very rich people it's OK, it doesn't matter, they have somewhere to hide when things go badly, but I see a worse future for the rest of us.
With industry, power changed from one set of hands to another - the "new" powerful and the "new" rich. With AI it will probably happen again; that's why the people currently in power want so badly to be the first to get the best that an AI can offer.


Added a similar question to the post.


Slightly off-topic, but here's what happens when "brothers, children, cousins, uncles" start placing too much literal faith in AI:
https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/



It's so tempting to say it's either going to go Skynet or will make your life easy peasy, but seeing how major things in the world tend to go, the most likely outcome is that it will just be commodified. Just another toy to distract you from things in existence that might actually matter. We evolve our tools, but certainly not our spirits or minds, when toys are used with no morals, ethics and values.


Yep, that's exactly what I said.
It's not going to go to either of the extremes.
It's eventually going to land somewhere in between.


You wrote in a comment in this thread:

The point of this Q&A is not to compare the merits and drawbacks of AI.
It's not to convince people to support AI or not to resist AI usage.

That is false, considering you omit and downplay large parts of the criticism against AI. You bring up the concern about AI forcing people out of their job by replacing them, but "answer" that question with why you don't think that's going to happen. Outside of the main post, when people criticise the concept in the comments, you keep defending AI, sometimes with questionable arguments, sometimes with vague statements that don't actually provide any information.
You actively take a stand for AI and defend it, so whether you acknowledge it or not, the point of this thread, the point you are aiming for, is to argue for the support of AI.

Beyond that, your whole second paragraph about people possibly losing their jobs due to AI is... ignorant, at best.

The other extreme is - AI will replace humans in jobs, so a large percentage of the population will become unemployed.
There is one problem with this scenario - the one thing politicians love more than money, is being reelected.
And as long as we live in a world where the people decide what politicians to vote for - there will be enough populist politicians that will do whatever people want. Whatever gets them elected.
For example, if too many people become unemployed because they'll be replaced by AI.
It doesn't need to be 80% or 50% of the people.
It's enough that it reaches 10% - people will go to the streets.
Not just the 10%. But their brothers, children, cousins, uncles. People love righteous causes.
And it's not like the police are going to suppress the demonstrations. The police themselves have mothers, brothers and cousins. They don't want them unemployed. So the police will be part of the demonstrations as well.
So do you think the army will stop them? Think again. Soldiers, officers and generals also have mothers, brothers, cousins...
Suppressing popular opinion only works when the people demonstrating are a small minority, or when the people suppressing them are a tight-knit group.
It doesn't work in democratic societies.

Tell me how to get to that world you're describing, where politicians are actually still concerned with what the people think more than what gives them the most money and power over the people; where protests actually work and the law enforcement join the protests instead of blindly following orders and dismantling them. It sure as hell sounds like a nicer world than the one we live in.


Tell me how to get to that world you're describing

LOL I know right? Imagine a world where politicians care about people being unemployed more than they care about their corporate overlords paying for them getting in power so they can make more money by replacing everyone with computer programs


the point you are aiming for, is to argue for the support of AI.

No I'm not. I already explained why.

Tell me how to get to that world you're describing, where politicians are actually still concerned with what the people think more than what gives them the most money and power over the people; where protests actually work and the law enforcement join the protests instead of blindly following orders and dismantling them. It sure as hell sounds like a nicer world than the one we live in.

You mean the magical world where protests make huge world-controlling corporations back down:
https://en.wikipedia.org/wiki/2023_Writers_Guild_of_America_strike

The magical world where politicians are concerned with what their voters want (ignoring what they themselves believe in):
https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-enforces-overwhelmingly-popular-demand-to-stop-taxpayer-funding-of-abortion/

Or the magical world where police officers join the protesters because they believe in their cause:
https://www.youtube.com/watch?v=yqLWXWTe2NU

You're right... It's completely unbelievable... How could I possibly have imagined such things to ever have happened in the history of the universe...


Did you actually quote the current American administration as an example of politicians being concerned about their voters' wishes? At this point I'd really like to hear your thoughts on a certain gesture the richest man in the world made in January. Think carefully; this is a perfect opportunity for you to restore a lot of the good will that you lost over this thread.


I'm just not sure any companies would go that far in replacing their workforce with AI, for fear of boycotts and/or public backlash.


Update:
Divided the Q&A into 2 parts:
A factual Q&A.
And a personal opinion Q&A.


P(doom) = 99.9%

Sam Altman: "AI will most likely lead to the end of the world, but in the meantime there will be great companies created with serious machine learning."

