Very interesting.
I also learned not too long ago that AI like ChatGPT isn't actually smart. It's just predicting the next word.
One time I stepped off a bus, looking out on the ocean, when my old boss rang me. He was confused why I didn't want to buy a last minute train ticket back home to work part time hours at a minimum wage job that wouldn't earn back the money spent on travel and the accommodation I was going to stay at.
At the time I thought: what's the point of earning money if it can't be enjoyed? Hopefully AI will soon be advanced enough to enjoy vacations on our behalf while we earn the money to keep it running.
It squeezes out some of the entry/junior jobs, because a mid-level or senior can task AI to do that sort of work. Right now it's mainly making people more productive rather than fully replacing people. But if I only need 10 devs to do the work of 30, I might want to cut costs and fire the other 20.
I think it'd be unwise for any company to replace human IT workers, who are capable of understanding what they're doing and coming up with novel solutions, with what's essentially predictive text on steroids. Current AI is little more than a very sophisticated chatbot: it can quickly remix its vast amount of input data into something that seems new at first glance, but it's fundamentally unable to come up with anything actually new, or to process any of what it spews out into a coherent concept. In engineering there's a saying that there are as many solutions to a problem as there are people trying to solve it (paraphrasing from memory), but with AI-generated solutions there are only going to be as many solutions as it received as input.
Don't get me wrong, it's evidently a good tool if you just want a quick and dirty solution, similar to how programmers copy-paste chunks of code when they can't be bothered to write something brand new for a problem somebody else has already solved. But it's likely going to be pretty bad at optimizing or fixing bugs. Not to mention that if the original code used as training input wasn't sourced in a way that can stand up in court, which going by reports it likely wasn't, it could turn into a legal nightmare down the road; not for hobbyists, but for actual big companies.
long story short
it doesn't fully exist yet
ai can be used for both good and evil
the massive amounts of heat it creates and electricity it's using right now.... not good
Humans create heat and use electricity.
Compared to the energy use of AI, that's like saying "birds also create carbon dioxide when they breathe, so why should we bother with cars and corporate pollution?"
Also, it doesn't just use vast amounts of energy (and current usage is probably 10% of what it will be in a year if it continues at this rate, unchecked); it also uses MASSIVE amounts of water for cooling servers.
We are living on a knife's edge due to our over-exploitation of natural resources, and we just created a toy that multiplied that problem by a million.
https://www.nature.com/articles/s41598-024-54271-x
I could have AI generate more books in a month than all humans ever have, and it wouldn't use all that much energy. If you do AI artwork, imagine how many hours go into regular art versus telling AI to generate 100 pieces of art. AI is really fast and efficient compared to humans. My point here is that the energy used by AI at this point is negligible compared to other things we do. I'd imagine a single plane ride is more emissions than my lifetime's use of AI.
From ChatGPT:
A one-way flight from New York to LA emits about 550 kg CO₂, as much as 9,000+ hours of desktop computer use—that’s over a full year of continuous use (24/7) or more than a decade of 2–3 hours per day.
A one-way flight from New York to Los Angeles uses about 3,250 kWh of energy per passenger—roughly the same as running a desktop computer nonstop for 2.5 years or a laptop for over 12 years.
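As a rough sanity check on those quoted figures (the flight energy is the number quoted above; the 150 W desktop power draw is my own assumption), the arithmetic works out like this:

```python
# Rough sanity check of the quoted comparison. The flight figure comes from
# the ChatGPT quote above; the 150 W desktop draw is an assumed typical value.
flight_kwh = 3250                              # claimed energy per passenger, NY -> LA
desktop_watts = 150                            # assumed desktop power draw
hours = flight_kwh * 1000 / desktop_watts      # hours of desktop use with that energy
years_nonstop = hours / (24 * 365)             # running 24/7
print(round(hours), "hours, about", round(years_nonstop, 1), "years nonstop")
```

With those assumptions you get roughly 21,700 hours, i.e. about 2.5 years of nonstop use, consistent with the quoted claim.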
But again, those people you think you're replacing with super-efficient AI still literally exist. They still produce art. They're just not getting paid to do it anymore because "AI is cheaper", so you've just added an extra source of energy consumption on top of the one that already exists.
It's also a fallacy to believe AI is so efficient that it can create in 2 seconds what would take a human hours. In the context of corporate life, that's just not going to happen.
The AI is going to take 10 seconds to create a first image. People are going to discuss the result. Another batch of images, each taking as much time or more, is going to get created and discussed. All you did was cut out the middleman. It's still going to take as much time to get to, I don't know, an ad, as it did before. And probably way more energy, because people think it's costless, so they're going to generate 2,000 images where a human would have done 4 drafts.
I'd imagine a single plane ride is more emissions than my lifetime's use of AI.
But again: plane rides are going to keep happening whether you use AI or not. You just add more to the problem.
And by the way, since you're asking ChatGPT questions like these, you should know that it consumes about 1,500 times more energy than doing a Google search.
Humans create heat and use electricity.
Electricity and heat... and all the water for cooling.
But it's OK. Some AI companies are already talking about building their own nuclear power plants to "solve" their energy problem, so it's all good....
Or the environmental cost of people playing "draw me a picture" for hours because they have nothing better to do, or using "AI" to basically do a Google search or summarize Wikipedia.
Or the human cost of replacing people with software
Or AI hallucinations, and what they mean when this is supposed to replace humans, especially in fields like medicine, law, and critical systems... as well as armed drones operated by AI.
And let's not forget the fact that soon it will be impossible to tell an AI-generated image or video from a real one, which should, I think, worry a lot of people but somehow doesn't, because the media is too focused on it being shiny.
There won't be a court in the world that will be able to use photographic or video evidence anymore.
I think you missed a very important distinction: general vs. narrow AI. For the moment we're far from achieving the former, and while various tricks can give the impression that current systems are already intelligent, they're not. It's also the reason why, when AI fails, it can do so in spectacular fashion: it doesn't understand what it is actually doing, or how absurd the solution it's working on might be.
Q: What is AI?
A: AI is Artificial Intelligence.
Computer-run software, able to perform human tasks that require thinking, logic, creativity, etc. as well as or better than humans.
You lost me there already.
That's what AI should be. That's not what it is right now, and it won't get there from what is being called "AI" now. What is called "AI" now is a set of self-learning algorithms. They do not have logic or creativity, let alone thinking. They regurgitate data they have absorbed, according to parameters given to them by humans.
Industry already changed the life of human beings, and AI will possibly do the same. I mean, fewer jobs, and fewer people "needed".
And the bad things, like pollution: we didn't "clean" the planet, and we live with the consequences. It will be the same with AI in a few years. For the powerful and the very rich few it's OK, it doesn't matter; they have somewhere to hide when things go badly. But I see a worse future for the rest of us.
With industry, power changed from one set of hands to another, creating the "new" powerful people and the "new" rich ones. With AI that will probably happen again, which is why the people currently in power want so badly to be the first to get the best that an AI can offer.
There was an important and interesting discussion of AI models a few months back, when Deepseek had just come out.
And it looks like AI is quickly becoming more and more impactful on all our lives.
So I've decided to take some of my answers in that discussion, and create a more comprehensive Q&A, which I hope people will find interesting.
As I see it, AI will revolutionize the world, as many other inventions did in the past:
electricity, the car, the airplane, the computer, the printing press, etc.
But those other inventions initially affected only one or a few fields, and it took decades or even centuries until their impact on many aspects of our lives became global.
With AI the progress is much faster: all of us either already feel, or will feel within a few years, how it affects our lives, jobs, schools, etc.
Questions & Answers:
Q: What is AI?
A: AI is Artificial Intelligence.
Computer-run software, able to perform human tasks that require thinking, logic, creativity, etc. as well as or better than humans.
Q: How is AI built? What are Neural Nets / Neural Networks?
A: The basis of AI is a concept called "Neural Networks", which is taken from the way our brains work.
It is a network of simple nodes, and we can "teach" it by creating and strengthening connections between those nodes.
For example, show a child enough pictures of a cat while saying the word "cat", and they will start to associate the word with the image (a neural link is created in the brain).
Similarly, if you show a neural network enough pictures of a cat and let it know each one is called "cat", it will eventually be able to predict, with high probability, whether a picture it is looking at contains a "cat" (a neural link is created in the neural network).
The same principle can be used to train (teach) models not just on images, but also on audio, video, text, etc.
For more information see https://medium.com/@jereminuerofficial/neural-networks-for-dummies-bc1ed3f69027
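The "show it examples and strengthen the links" idea can be sketched with the simplest possible learner, a single artificial neuron (a perceptron). The two "cat features" and all numbers here are made up purely for illustration:

```python
# A minimal sketch of learning by strengthening links: one artificial neuron
# learning to separate toy "cat" pictures from "not cat" using two made-up
# features (e.g. "pointy ears", "whiskers"). All numbers are illustrative.
examples = [
    ((0.9, 0.8), 1),  # strong cat features -> cat
    ((0.8, 0.9), 1),
    ((0.1, 0.2), 0),  # weak features -> not a cat
    ((0.2, 0.1), 0),
]
w1 = w2 = b = 0.0          # the "link strengths" start at zero
for _ in range(20):        # repeated exposure = training iterations
    for (x1, x2), label in examples:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = label - pred          # feedback: was the guess wrong?
        w1 += 0.1 * err * x1        # strengthen/weaken each link
        w2 += 0.1 * err * x2
        b += 0.1 * err
print([1 if w1 * x1 + w2 * x2 + b > 0 else 0
       for (x1, x2), _ in examples])   # → [1, 1, 0, 0]: all examples learned
```

After a few passes over the examples the weights settle, and the neuron classifies all four toy pictures correctly.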
Q: What is Machine Learning (ML)? What is Data Science (DS)?
A: Machine learning is the practice of training various "automated computer models", or "machine learning models". We encompass these models under the general term AI.
There are many many ways of building machine learning models.
The most common and well known is Neural Networks (explained above).
Data Science is the field that studies the training of machine learning models (among other things).
People who build/train ML models are usually called "Data Scientists" or "ML engineers".
Q: How do AI models create images/videos/songs/texts? What is a GAN (Generative Adversarial Network)?
A: I explained above how Neural Networks are able to learn to "understand" visual, audio or other input.
Generative Adversarial Network (GAN) is what enables Neural Networks to create output.
We basically pit 2 Neural Nets against each other.
For example, we want to create a Neural Net that can draw Picasso-like pictures.
We take a blank model that can draw, and have it draw pictures of random pixels.
We then take a second model, trained on existing Picasso pictures, which can tell us as a percentage (0% - 100%) how sure it is that a picture it's seeing is a Picasso.
Each time the first model draws a picture, the second model analyzes it, and gives it a percentage score of how close it is to Picasso. And gives it as feedback to the first model.
Each time the score becomes higher, the first model strengthens the neural link that was used for that image generation.
Each such iteration can take a split second. And as more and more iterations pass, the "correct" neural links are strengthened, and the first model is able to draw pictures that get higher scores from the second model (look more like Picasso).
After millions, or even billions of iterations, the first model will pass a benchmark defined by the ML engineer (for example: above 80% accuracy), which was determined to be "enough" for Picasso-like pictures.
If we ran the first model until it reached 100% accuracy, that would be bad. It's called "overfitting": it would mean every picture it creates is a copy of an existing Picasso picture.
And that was not our goal.
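The feedback loop above can be sketched as a toy numeric example. This is not a real GAN (no neural networks, no gradients); it only shows the scoring dynamic: a "generator" proposes, a "critic" scores, and changes that score higher are kept. The "real" value and scoring function are invented for illustration:

```python
import random

# Toy sketch of the adversarial feedback loop: a "generator" (here just a
# single number) nudges itself toward whatever the "critic" scores higher.
# Not a real GAN -- just the scoring dynamic described above.
REAL = 0.75                      # stand-in for "what a Picasso looks like"

def critic(x):
    """Score 0..1: how much x resembles the 'real' data."""
    return max(0.0, 1.0 - abs(x - REAL))

random.seed(0)
guess = random.random()          # the blank model's "random pixels"
for _ in range(1000):            # each loop = one generate/score iteration
    candidate = guess + random.uniform(-0.05, 0.05)  # try a variation
    if critic(candidate) > critic(guess):            # critic's feedback
        guess = candidate        # keep ("strengthen") the improvement
print(round(critic(guess), 2))   # score ends up close to 1.0
```

A real GAN does the same thing with two trained networks and millions of parameters, but the generate → score → adjust cycle is the same shape.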
Q: What are LLMs (Large Language Models)? Where do they come from?
A: LLMs (Large Language Models) were first created 2 years ago, and used to power the first iteration of ChatGPT.
While ML (Machine Learning) models have been with us for 10+ years, LLMs are relatively new.
The innovation with LLMs is that they showed us a product that's bigger than the sum of its parts:
You start by taking an ML model that predicts your next word.
Think of writing a message on a cell phone, and it predicts your next word.
But it turns out that if you train it on a large enough data set, you don't just get a system that is really, really good at predicting your next word and can't do anything beyond that.
Instead you get a system that can actually write a book as if it were Isaac Asimov.
Or discuss any topic you want with you.
Or write a computer program.
Or do the countless different things we use ChatGPT for today.
It's a living proof of the Infinite monkey theorem: a monkey hitting random keys for an infinite amount of time will eventually type out the complete works of William Shakespeare.
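The "predict your next word" starting point can be sketched with the simplest possible model: a bigram model that, for each word, remembers which word most often followed it in the training text. Real LLMs do vastly more than this; the snippet only illustrates the basic idea:

```python
from collections import Counter, defaultdict

# A toy "predict the next word" model: count which word follows which
# in the training text, then predict the most frequent follower.
text = "the cat sat on the mat and the cat slept"
words = text.split()
following = defaultdict(Counter)
for cur, nxt in zip(words, words[1:]):
    following[cur][nxt] += 1     # count each (word, next word) pair

def predict_next(word):
    # Most common follower seen in training (word must appear mid-sentence).
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # prints "cat" ("cat" followed "the" twice, "mat" once)
```

Scale the training text up from one sentence to a large chunk of the internet, and replace the word counts with a huge neural network, and you are in LLM territory.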
LLMs are what powers all of the models we call AI today (Grok, Perplexity, Dall-E, ChatGPT, Deepseek, Llama, etc.)
Q: How are LLMs different from previous ML models?
A: The main difference between LLMs and previous ML models is the fact that they are large.
Which is an understatement.
They are HUGE.
And each time they create a new model, they make it even more HUGE: 10x, 100x, 1000x bigger every iteration.
Which caused several changes in the technological market:
Small companies can no longer compete and innovate in regards to AI.
You cannot be the next Google or the next Meta in AI space.
Because you need the resources of a behemoth like Google, Meta or Microsoft to create and train a modern LLM.
Even Apple doesn't have its own LLM yet. That's how difficult it has become to create/train one.
To train a single LLM model, you now need HUGE (HUMONGOUS) computing power.
Literally thousands or tens of thousands of powerful processors.
And it's more efficient to do on GPUs (graphics cards).
(For more info see CPU vs. GPU for machine learning)
Which is also the reason for the rise in the Nvidia stock in the last 2 years: Huge amounts of GPUs are needed to train the LLMs, and the bigger and more complex the LLMs, the bigger the number of GPUs needed for their training.
And in addition to that, it takes HUGE amounts of electricity.
To the point that the companies working on new LLM models are now building their own nuclear power plants, to support their electricity needs for training AI models.
Which is why it costs Billions (with a B) of $$$ to develop new LLM models, and only huge & rich corporations can afford to do it.
Q: What is Deepseek? Why are they popular? How are they different from OpenAI?
A: Deepseek is a small start-up which (supposedly) was able to develop their own LLM models.
From what was written above, one would presume this to be impossible, as it would cost them Billions of $$$, which small start-ups don't have.
But they were able to train their model on a "minuscule" budget of only $6,000,000.
It seems they were able to do it by using a lot of optimization, by cutting corners (their model is slightly less accurate than the best ChatGPT model, but infinitely cheaper), and by using existing models (ChatGPT, Grok, Llama) to participate in training their own.
Q: What is an "Open Source LLM"? How is it similar/different from an "Open Source Software"?
A: Open source software means getting access to the source code of the software.
For example, if software was a table.
Being "Open Source" means you get a full list of parts you would need to build it, and complete step-by-step instructions on how to build it yourself.
If you take the same components, and follow the instructions exactly - you end up with an identical table.
On the other hand, ML models are "black boxes" by definition.
Which means we can train one, but we don't really know or understand how it works inside.
We don't know why it makes one choice and not another; it's all based on the way the millions of neural nodes got linked inside its "brain".
It's like trying to predict what a person is thinking by looking at a CAT scan of their brain.
So "Open Source LLM" is like getting a smartphone with usage instructions.
You can take pictures with it, you can make calls, you can connect it to earphones.
But you can't actually build one yourself based on the instructions you received.
Providing "Open Source LLM" basically allows you to run it on your computer for free (instead of running it on the company's computer, and paying them for it).
But it doesn't provide any information on how it works, or how you can build one yourself.
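The distinction can be illustrated with a hypothetical toy (all names and numbers here are made up): "open source" ships the recipe, while an "open weight" release ships only the finished parameters.

```python
# Hypothetical illustration of "open source" vs "open weight".

# Open source: the full recipe is visible. Given the same data, anyone
# can rerun the procedure and rebuild the exact same result.
def train(data):
    return sum(data) / len(data)   # the "training" procedure itself is published

# Open-weight release: only the end result of someone else's training.
RELEASED_WEIGHT = 0.7321           # usable, but how it arose is not disclosed

def model(x):
    return x * RELEASED_WEIGHT     # runs locally for free, but is a black box

print(model(10.0))
```

With real LLMs the released "weight" is not one number but billions of them, and the training data and full procedure are typically not published, which is why "open source LLM" usually means "open weights" rather than open source in the traditional sense.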
Q: What are ChatGPT / Grok / Deepseek/ etc.? How are they built?
A: See answer to "What are LLMs (Large Language Models)?"
Feel free to ask additional questions, and I will add them to the Q&A.