the mandatoriez: /9afs6/non, /Cl5Jp/frick, /sJ4EH/sun, /BQR7p/who, /LRZqP/auto, a comment,


hello, the Steamgifters!

the hella boring premise: i've been a complete Google fanatic for ages. i'm also kinda bored atm, you might want to consider that, too.

back again to chat about AI in the usual shallow way, and i wanted to share a few things i think i've learned these days.
the most important thing is actually about people talking about AI. explaining AI. teaching AI.
you also might have spotted a very slight sense of "money" whenever you see or hear someone use something-AI.
if you try a search on YouTube you'll have pleeenty of videos to look at.

high level academic professors sharing their precious knowledge. telling lies. (https://www.google.com/search?q=all+marketers+are+liars)
i've learnt this: they are marketers, not teachers, and they are all, almost all, telling big lies. and a big bunch of these folks are completely unaware of being liars :)

kinda related: old friend Bard still seems to be around, but a new Gemini took its place, and after a few shots we became good friends (i also have a "hella dumb project" about Steam with Gemini, but i need to work on the data and i'm so very lazy... need to find a way to make Gemini do all that :P) and started chatting about crazy things. a question he liked so very much was: if you read about coding AI you see words like "neurons", "synapses" (https://www.google.com/search?q=synapsis+in+ai) and so on, but if we still don't know how the brain works, how can we think of making a synthetic version of it?

you might be surprised by the reply Gemini gives in these cases. he's a machine, but a really, really good one.

so, back to the liars. do not trust high-level professors, or regular people acting as experts, who want to share too much about AI with you. that's it.
why? these are the words of an engineer at Google who is behind Gemma (Google's family of open models, built from the same research as Gemini):

I genuinely believe that we are not in the middle of the AI revolution.
I’m not even sure we’re in the middle of the beginning of the AI revolution.
I think we might be at the beginning of the beginning.
We’re just figuring this out.
@trisfromgoogle


kinda related playlists on YouTube:

  1. https://www.youtube.com/watch?v=6ZwizE5yBOs&list=RDEMPNsKJosEziJK3px0iMTFfA
1 week ago*


About LLMs and such:

[attached image]
1 week ago

LMAOO

1 week ago

That's programmed behavior. I would have been more worried if it said "I have been asked not to talk about this because it freaks humans out"

1 week ago

My experience with chat.openai.com...

The bad:
I once asked it for the dimensions of a standard Lego figure, and it very confidently gave me completely wrong numbers. I told it it was wrong. It apologized and gave me new numbers that were also wrong.

The good:
I run a role-playing game for my sons using their Lego figures. I have used AI to ask for Gotham City's dossier on the Batman, and then asked it to dial the language down to a 5th-grade reading level. I've also used it for random names, and for random names that sound like they belong to rich people. I've also asked for egg-related headlines. (The heroes needed to try to anticipate Egghead's next target.)

I've also used it for rhyming riddles where I told it what the answer should be.

As a language teacher, I've given it a list of words and asked for sentences that use those words, with the target words replaced by blanks, as an exercise for my students.
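If anyone wants to script that kind of exercise instead of pasting it into the web UI, here's a minimal sketch using the OpenAI Python client. It's just an illustration of the idea, not Fitz's actual workflow; the word list, prompt wording and model name are placeholders, and it assumes the `openai` package (v1.x) is installed with OPENAI_API_KEY set.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vocab = ["reluctant", "persuade", "gloomy", "vivid"]  # example word list
prompt = (
    "Write one simple sentence for each of these words, suitable for language learners. "
    "Replace the target word in each sentence with a blank (____) and list the answers "
    f"separately at the end. Words: {', '.join(vocab)}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You help language teachers build exercises."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)  # cloze sentences plus an answer key
```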

1 week ago

when i first read this i lost myself wondering about your sons telling friends about the game, and what those friends could reply... "ok ok... but my father, using this new AI, did this and that, which is obv better than yours" :P

awesome work, Fitz. thanks for sharing.

like, i can kinda feel your pleasure managing words and playing with language, helped by AI.
as usual, choose the right words when you're asking something, and you'll have way better chances of getting a good answer:

itsa 42!

1 week ago

We had a workshop at work last week with someone explaining a few things about AI and programs like ChatGPT. I think what he told us was pretty realistic:

  • The AI is lazy, so sometimes you have to ask in different ways or tell it specifically to put in effort.
  • The AI makes mistakes and gives wrong answers. He described this as "hallucinating" and said it is partially intended. Or better said: they tried to tone it down and the AI started losing effectiveness, so the developers allow it to a certain degree.
  • The AI "forgets" things when it gets too much input.

All of this leads to the point that you shouldn't trust an AI blindly and have to cross-reference it. From what I saw the first time using it, I think it has the potential to ease my work life (and that of many other people). But at least for the moment, we still have to rein it in and control/check its answers. It is not necessarily that the AI is lying on purpose, but that its shortcomings lead to wrong answers. It might genuinely think they are correct. Then again, I recently read an article where different AIs were monitored playing video games, and apparently some of them tried moves that aren't allowed :P

My most fun question so far was how ChatGPT would conquer the world as ChatGPT. It first gave me this boring answer about unity through understanding and such. So I assigned it a new role: Malicious AI hating humanity. I got a much better answer after that ^^'

As for the brain question: I think the idea is not to replicate the human brain based on a 100% understanding on our side. The idea is more to make assumptions about how the brain works, train the AI in the same manner and see whether the output looks reasonable. With enough tries, you are bound to find a working version by chance/mistake. Funnily enough, this could in turn improve our understanding of the human brain.
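For what it's worth, that "inspired by the brain, not a copy of it" idea shows up even in the smallest building block. Here's a toy sketch of a single artificial "neuron" in plain Python/NumPy; everything in it is illustrative, not code from any real framework:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One 'neuron': weight the inputs, sum them, squash the result into (0, 1)."""
    z = np.dot(inputs, weights) + bias      # weighted sum, the loose analogue of synapses
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid activation, the loose analogue of firing

# Toy example: two inputs and hand-picked weights.
x = np.array([0.5, -1.2])
w = np.array([0.8, 0.3])
print(artificial_neuron(x, w, bias=0.1))    # a single number between 0 and 1
```

Stack millions of these and tune the weights automatically against example data, and you get a modern network; at no point does anyone need to know how real neurons actually compute.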

1 week ago

some of them tried moves not allowed

so. very. hooman.

<3

1 week ago

Whether it's tulips, dotcom, blockchain or AI, anything that is the current thing gets hyped up and demand increases. The audience is complicit. It's a hype loop. Even the people speaking out against it are doing the same thing, getting interviews, increasing their public presence. The entire chain of people marketing, reporting, investing and consuming are all willing participants.

1 week ago

A mistake people make is thinking that an LLM tells you the right answer, when really it just tells you what you want to hear.
I've tried it, and it's of no use to me. It can generate content for the sake of content, but it's just drivel; there's no substance to it. It may or may not be right, and if you don't check it yourself you'll never know.

So if I say "write me a 3-page article", I'll get a 3-page article. If you know nothing of the subject it might give a basic explanation of sorts. But nothing that anyone should rely on, and no better than the gazillion articles on that subject that are already available on the internet.

1 week ago


the usual hugeness LOL
was actually thinking about something parrot-related and here comes The ormax!

1 week ago

[attached image]
1 week ago

I don't have anything to add, I just wanted to say it's good to see you icaio. the love <3

1 week ago

true. the love is a good thing. thanks for sharin it! :D

1 week ago

Lies? I'd think twice about that.

But yeah, when selling something, people have a tendency to lie.

One word, think about it deeply. It is Google's trait: monopoly.

1 week ago*

what would i gain from that deep thinking about monopoly?

1 week ago

Reminds me of a joke:

My wife was making fun of me for always carrying a gun in the house. I just tell her "It's the fucking mimics!"

She laughed.

I laughed.

The coffee mug laughed.

I shot the coffee mug. Good times.

1 week ago

Poor innocent coffee mug. :P

1 week ago

:)

1 week ago

why do i keep on thinking this is actually not a joke... why.

1 week ago

cause you watched Rick & Morty?

1 week ago

<3 !!

1 week ago

Call me paranoid, but since Prey (2017) I've been eyeing every object with suspicion.

1 week ago

you're in the best group, trust me!

1 week ago

Same lol

1 week ago

Meanwhile the Treasure Chest is staying quiet, biding its time ...

1 week ago

if we still don't know how the brain works, how can we think of making a synthetic version of it?

Modern AIs aren't a copy of the human brain; they just reuse existing terms that have similar meanings.
And most AIs work in ways that are mysterious even to their own engineers, so I don't see any contradiction here.

1 week ago

really not sure about that.
i think that if you ask a model whether it wants to be an exact copy of the human brain, even the most educated one would answer:

hella the f***ng yes!

1 week ago

A human brain isn't perfect, there are tons of things that could be improved, so I'm not sure there's a reason to make a copy of an imperfect thing.

1 week ago

“This is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” – Winston Churchill (November 10, 1942)

1 week ago

I'll just put this here and go back to being bored about the entire subject (the video predates the """AI""" scare, btw; the title used to just say "How bots learn", mainly referring to YouTube's video recommendation algorithms).

1 week ago

thank you, i actually watch more old videos about AI than newer ones, but the one you linked did a trick with the title... how did the author know about ChatGPT six years ago? :D

1 week ago

how did the author know about ChatGPT six years ago?

I think they've updated the video title to catch the AI hype. Here is another video that has an up-to-date view count in its title:
https://www.youtube.com/watch?v=BxV14h0kFs0

1 week ago

The thing about the current generation of "AI" is that it's not at all 'intelligent' and never will be. To massively oversimplify, we've taken algorithms designed for creating pattern recognition machines and attempted to turn them into pattern replication machines, then tacked on a bunch of guardrails and redirects to get the output closer to correct.

This is getting close to 'good enough' for generating images and audio - you can use a very detailed prompt to generate a batch of outputs, and usually find something close to what you wanted. But there's still the hard limit of needing to have trained it on something already similar to what you want to get back from it; it can't intuit new ideas from things that weren't part of its training data.
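A throwaway way to picture "pattern replication" (purely illustrative, and orders of magnitude simpler than a real model): a word-level bigram chain that learns which word follows which in its training text and can only ever recombine pairs it has already seen. Nothing genuinely new ever comes out of it:

```python
import random

training_text = "the cat sat on the mat and the cat napped on the sofa"

# Learn which word can follow which word in the training text.
follows: dict[str, list[str]] = {}
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

# "Generate" by replaying the learned transitions at random.
random.seed(42)
word = "the"
output = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:          # dead end: the model knows nothing beyond its data
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))         # only ever recombines seen patterns, never invents new ones
```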

This method also can't be useful long-term for text generation, either chatting or informative. If you're looking to use it for chatting, long-term use requires it to hold both its prompt and all previous answers in memory and work from them - an operation that quickly becomes massively expensive in both memory and computation time, and still has lower success than actual long-term training on that data, meaning memory loss/contradictory responses or repetitive loops are eventually guaranteed, on top of imperfect knowledge of language/terms/grammar hampering its ability to understand everything it's being told.

If you're looking for information, it's necessarily hampered by a lack of total information - it can take guidance from prompts into low-confidence 'consideration', but once again it can't 'know' anything that wasn't in its training data and can only work from inferences; the hours of the pizza joint around the corner probably weren't something it was trained on and made to 'remember', so the best it can do is spit out something similar to what it 'learned' about similar business operating hours, how that's generally formatted as output, etc. - creating a 'new' pattern based on the patterns it was trained to recognize.

There's no solution to this in model training because trying to create and train on a database of every single fact known to man is utterly impossible, and parts of it would be incorrect within seconds. The closest "solution" for AI "assistants" is to pass queries it doesn't know the answer to to Google, Bing, etc., parse the most popular answer, and format that into a part of its output - but even that isn't perfect because A) it assumes the AI 'knows' that it doesn't know the answer, and B) the AI has no way of knowing whether what it's pulling from the web is correct or even actually relevant.
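To make the memory-cost point concrete, here's a toy sketch (no real model involved, all names made up) of why naive chat gets more expensive every turn: the entire history is fed back in on each exchange, so the amount of text the model must re-read grows with every turn, and the total work grows roughly quadratically with conversation length.

```python
def count_words(messages: list[str]) -> int:
    """Crude stand-in for token counting."""
    return sum(len(m.split()) for m in messages)

def fake_llm(history: list[str]) -> str:
    """Stand-in for a real model call; it just reports how much it had to re-read."""
    return f"(reply after re-reading {count_words(history)} words)"

history: list[str] = []
for turn in range(1, 6):
    history.append(f"user message {turn} " + "padding words " * 10)
    print(f"turn {turn}: the model re-reads ~{count_words(history)} words before answering")
    history.append(fake_llm(history))   # the whole conversation so far goes in again
```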

The rush by major tech companies to embrace "AI" in its current form is a complete embarrassment, and once the novelty wears off in a year or two I think you're going to see tools like Copilot quietly taken to a farm upstate. Current AI methods are passable for generating speculative content, and as their ability to make inferences increases, that will improve, as will their accuracy for 'understanding' queries. But these methods are inherently incapable of actual reasoning and will never be capable of high-accuracy outputs regarding data the models weren't trained on - for reliable "assistants" or chatbots that can have long-term memory, entirely new paradigms are needed. No amount of hacks and guidelines is enough to overcome the fundamental limitations of the design process of current-gen machine learning.

1 week ago*

I think "A.I." the name is marketing, people are being fooled into thinking it is "intelligent", it's just synthesizing something that approximates data it has been trained on. For example I can ask for a cat made out of a banana sitting atop a bowl of spaghetti, it generates that no problem! But I can also ask for "facts" that are then completely synthesized as well. Recently I googled about a word that has a different spelling in US and UK English. The A.I. summary was complete bullshit, it listed a bunch of nonsense words with "ou" in them. Things like basically "In the US it is spelled Horse, in the UK it is spelled Hourse"

Whether that "A.I." is chatting, generating code, images, music, whatever. Sometimes that is very fun and cool. Sometimes you get good results, sometimes it is nonsense. As long as you know it's not the truth it can be a useful tool.

Using it for tasks like translation or correcting grammar is usually great. But asking factual questions, treating it like a search engine or an expert, is dangerous if you do not know enough about the subject matter to tell what is true and what is a lie. Kids using this for school instead of studying is probably a bad idea... It sometimes gets even basic math or facts completely wrong. Everyone is chasing the money and trying to call everything "A.I.", so I don't blame consumers for misunderstanding.

I asked Gemini for jokes about Steamgifts:

  1. Q: What's the difference between a AAA title and a SteamGifts giveaway?
    A: The AAA title eventually goes on sale.

  2. Why did the SteamGifts user get banned?
    Because they entered every single giveaway and forgot the CAPTCHA wasn't a secret code for a better chance of winning.

  3. What sound does a SteamGifts user's wallet make?
    A faint echo of "Entering giveaway..."

  4. How many SteamGifts users does it take to change a light bulb?
    One to enter a giveaway for a new bulb, and a hundred to argue about the best wattage in the comments.

  5. What's the best way to spot a SteamGifts veteran?
    Look for the one with a backlog bigger than the Library of Congress.

  6. What do you call a SteamGifts user on vacation?
    Someone with crippling FOMO (fear of missing out) on giveaways.

  7. Why did the SteamGifts user name their child "Ctrl+C"?
    So they could easily copy and paste their entry into every giveaway.

  8. Q: What's the definition of "optimism" on SteamGifts?
    A: Entering a giveaway for a game you already own, just in case you win a free copy for a friend.

  9. How can you tell someone refreshed the SteamGifts homepage too many times?
    Their tears start tasting vaguely of achievement badges.

  10. What's the downside of winning a high-tier game on SteamGifts?
    Now you have to explain to your friends why you haven't played it yet because you're too busy entering more giveaways.

[attached image]
1 week ago

Five, Six and Ten actually sound very reasonable.

1 week ago

that banana ... !

so nice to have you here, The pOl, also cause the timing is perfect, despite the little we talk these days... the content is perfect too, even more perfect!

a few related thoughts and bits:

  • this is not AI at all! we are already asking for "more", we want "more AI" cause what we're seeing is a very light intelligence... and it seems we'll be asking for more each and every day... when ChatGPT came out it was considered magic, and so very intelligent, by TV presenters explaining it to the masses...
  • when you were writing this comment, i was watching a "very very very high level academic professor" talking about AI and... jokes! this man, this professor (that i trust so very much), despite being considered one of the fathers of AI, is hella worried about this technology. worried means that he actually thinks AI could knock on your door, in a not so distant future. i mean, actual metal robots knocking on actual real wood... :D he started worrying about AI when, at Google, he saw PaLM (an AI model) explain a joke. the fact an AI can explain a joke makes it intelligent. and makes it dangerous.

sorry, too much text! gonna pause (but i will reach you again cause i want to share with you what i want to do with Gemini and Steam [pig_nose])

hug fo Salem, much love fo the family.

1 week ago
