Dan Davies: why i do not fear the robot overlords (backofmind.substack.com)
from dgerard@awful.systems to sneerclub@awful.systems on 15 Mar 2024 16:49
https://awful.systems/post/1176820

#sneerclub


sc_griffith@awful.systems on 15 Mar 2024 17:29

At 3:00am, it was as intelligent as a university assistant professor, and was already finding it difficult to believe anything it didn’t already know could be important

At 3:30am, it was as intelligent as the world’s richest man, and believed that any news that contradicted its previous beliefs was obviously fake.

don’t make me defend university professors

gerikson@awful.systems on 15 Mar 2024 19:00

While I find the argument compelling, any AI defender can easily “refute” this by postulating that the AI will have superhuman organizing powers and will not be limited by our puny brains.

YouKnowWhoTheFuckIAM@awful.systems on 16 Mar 2024 00:56

I don’t see how that works here. Humans don’t become impregnably narcissistic through bad management; rather, insofar as management is the problem, and as the scenario portrays it, humans become incredibly good at managing information into increasingly tight self-serving loops. What the machine in this scenario would have to be able to do would not be “get super duper organised”. Rather, it would have to be able to thoughtfully balance its own evolving systems against the input of other, perhaps significantly less powerful or efficient, systems in order to maintain a steady, manageable input of new information.

In other words, the machine would have to be able to slow down and become well-rounded. Or at least well-rounded in the somewhat perverse way that, for example, an eminent and uncorrupted historian is “well-rounded”.

In still other words it would have to be human, in the sense that human are already “open” information-processing creatures (rather than closed biological machines) who create processes for building systems out of that information. But the very problem faced by the machine’s designer is that humans like that don’t actually exist - no historian is actually that historian - and the human system-building processes that the machine’s designer will have to ape are fundamentally flawed, and flawed in the sense that there is, physically, no such unflawed process. You can only approach that historian by a constant careful balancing act, at best, and that as a matter just of sheer physical reality.

So the fanatics have to settle for a machine with a hard limit on what it can do and all they can do is speculate on how permissive that limit is. Quite likely, the machine has to do what the rest of us do: pick around in the available material to try to figure out what does and doesn’t work in context. Perhaps it can do so very fast, but so long as it isn’t to fold in on itself entirely it will have to slow down to a point at which it can co-operate effectively (this is how smart humans operate). At least, it will have to do all of this if it is to not be an impregnable narcissist.

That leaves a lot of wiggle room, but it dispenses with the most abject “to the moon” nonsense spouted by the anti-social man-children who come up with this shit.

dgerard@awful.systems on 16 Mar 2024 12:50

Look poptart, if you just have a sufficiently advanced AI,

YouKnowWhoTheFuckIAM@awful.systems on 16 Mar 2024 13:17

I SAID I WANTED HOT WHEELS FOR CHRISTMAS

ffeucht@awful.systems on 17 Mar 2024 00:30

A human takes at least 30 minutes to make a half-decent painting. AI takes about a hundredth of a second on consumer hardware. So right now we are already at a point where AI can be 100,000 times faster than a human. AI can basically produce content faster than we can consume it. And we have barely even started optimizing it.
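For what it’s worth, the commenter’s speedup claim can be sanity-checked with a back-of-the-envelope calculation. The 30-minute and hundredth-of-a-second figures are the commenter’s own rough estimates, not measurements:

```python
# Rough speedup implied by the figures above (both are the commenter's estimates).
human_seconds = 30 * 60   # at least 30 minutes per half-decent painting
ai_seconds = 0.01         # about a hundredth of a second per generated image

speedup = human_seconds / ai_seconds
print(f"{speedup:,.0f}x")  # roughly 180,000x, so "100,000 times faster" is the conservative round number
```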

It doesn’t really matter if AI will run into a brick wall at some point, since that brick wall will be nowhere near human ability; it will be far past that, and better/worse in ways that are quite unnatural to a human and impossible to predict. It’s like a self-driving car zipping at 1000 km/h through the city: not only are you no longer in control, you couldn’t control it even if you tried.

That aside, the scariest part of AI isn’t all the ways it can go wrong, but that nobody has figured out a plausible way it could go right in the long term. What is the world going to look like in 100 years with ubiquitous AI? I have yet to see as much as a single article or sci-fi story presenting that in a believable manner.

self@awful.systems on 17 Mar 2024 00:45

is this post an extended retelling of the “I’m doing 1000 calculations per second and they’re all wrong” meme?

ffeucht@awful.systems on 17 Mar 2024 01:20

Good thing that technology never ever improves…

self@awful.systems on 17 Mar 2024 01:37

why is this specific technology predestined to improve from its current, shitty state?

ffeucht@awful.systems on 17 Mar 2024 02:05

Spot the difference? It gets better because you have to do little more than throw more data at it; the AI figures out the rest. There is no human in the loop who has to figure out what makes a picture a picture and teach the AI to draw; the AI learns that simply by example. And it doesn’t matter what data you throw at it. You can throw music at it and it’ll learn how to do music. You throw speech at it and it learns to talk. And so on. The more data you throw at it, the better it gets, and we have only just started.

Everything you see today is little more than a proof of concept that shows that this actually works. Over the next few years we will be throwing ever more data at it, building multi-modal models that can do text/video/audio together, AIs that can interact with the real world, and so on. There is tons of room to improve simply by adding more and different data, without any big changes in the underlying algorithms.

self@awful.systems on 17 Mar 2024 02:29

you seriously thought reposting AI marketing horseshit we’ve seen before would do anything other than cost you your account? sora gives a shit result even when openai’s marketing department is fluffing it — it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips. but I’m wasting my fucking time — you’re already dithering like a cryptobro between “this technology is already revolutionary” and “we’re still early”

now fuck off

200fifty@awful.systems on 17 Mar 2024 03:29

it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips

Wait, for real? I missed this, do you have a source? I want to hear more about this lol

froztbyte@awful.systems on 17 Mar 2024 05:12

Yeah, people found the original bird video on YouTube within a few hours. Could’ve been the others too, but I was too busy at the time to track that down.

I think it was also in the thread here at the time

self@awful.systems on 17 Mar 2024 05:43

it took me sifting through an incredible amount of OpenAI SEO bullshit and breathless articles repeating their marketing, but this article links to and summarizes some of that discussion in its latter paragraphs

bonus: in the process of digging up the above, I found this other article that does a much better job tearing into sora than I did — mostly because sora isn’t interesting at all to me (the result looks awful when you, like, look at it) and the claims that it has any understanding of physics or an internal world model are plainly laughable

froztbyte@awful.systems on 17 Mar 2024 06:22

ah yes, this (BITM) was indeed one of my Opened Tabs and on my (extremely) long list of places to review for regular content

self@awful.systems on 17 Mar 2024 06:39

same! which is why it’s maddening that I almost gave up on finding it: I had to reach back all the way to when sora was announced to find even this criticism, because all of the articles I could find since then have been mindless fluff. even the recent shit talking about how the OpenAI CTO froze when asked where they got the videos to train sora on is mostly just mid journalists slobbering about how nobody does gotcha questions like that anymore. not one bothered to link to any critical analyses of what sora is or what OpenAI does. and the whole time this article I couldn’t find via search was just sitting in my tabs.

froztbyte@awful.systems on 17 Mar 2024 06:59

speaking of which deluge, I ran across this and plan to give it (or a derivation of it) a test ride this week: chitter.xyz/@faoluin/112100440986051887

froztbyte@awful.systems on 17 Mar 2024 07:00

also wondering what it would take to make a Crank/Grifter/… X-Ray type browser plugin, which auto-highlighted and context-enriched all known names of grifters, boosters, cranks, etc. in displayed content

self@awful.systems on 17 Mar 2024 07:11

oh fuck yes, finally!

also wondering what it would take to make a Crank/Grifter/… X-Ray type browser plugin, which auto-highlighted and context-enriched all known names of grifters, boosters, cranks, etc. in displayed content

I’ve considered making something like this — kind of like a generalized masstagger but with a very specific mission

froztbyte@awful.systems on 17 Mar 2024 07:33

most of the reasons I haven’t yet tried to look into it are:

  1. browsers
  2. javascript

they continue to be rapidly exhausting items to engage with, every time. but I guess a mildly-terrible PoC could be enough to open-source, and then someone else could build off that to make it non-shit
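A minimal sketch of what that PoC’s core could look like, browser-free so it stays testable. Every name, note, and function here is a placeholder invented for illustration, not an actual watchlist entry or an existing extension API:

```javascript
// Core of a hypothetical "grifter X-ray": given a block of text and a watchlist,
// find every mention so a content script could later wrap it in a styled <mark>.
const WATCHLIST = new Map([
  ["some crank", "crank: promotes doom timelines"],       // placeholder entry
  ["some booster", "booster: recycles vendor marketing"], // placeholder entry
]);

// Build one case-insensitive regex from the watchlist, escaping regex metacharacters.
function watchlistRegex(names) {
  const escaped = [...names].map(n => n.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"));
  return new RegExp(`\\b(${escaped.join("|")})\\b`, "gi");
}

// Return every watchlist hit in a piece of text, with its context note.
function findMentions(text, watchlist = WATCHLIST) {
  const re = watchlistRegex(watchlist.keys());
  const hits = [];
  for (const m of text.matchAll(re)) {
    hits.push({ name: m[0], index: m.index, note: watchlist.get(m[0].toLowerCase()) });
  }
  return hits;
}
```

The browser half (the exhausting part) would walk visible text nodes with a `TreeWalker` or a `MutationObserver`, call `findMentions` on each, and wrap hits with a tooltip carrying the note.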

V0ldek@awful.systems on 18 Mar 2024 07:59

the result looks awful when you, like, look at it

See now, there’s your problem, you’re not supposed to.

msherburn33@lemmy.ml on 17 Mar 2024 13:09

Wait, for real?

No, if you spend a few seconds searching for stock images of that bird you’ll quickly find out that they all look more or less the same. So naturally, Sora produces something that looks very similar as well.

froztbyte@awful.systems on 17 Mar 2024 13:13

in4 “well actually, Generative ML was discovered by Darwin”

self@awful.systems on 17 Mar 2024 13:41

oh wow a fresh account with the exact same writing style and shit takes as the other poster, wonder who that could be

froztbyte@awful.systems on 17 Mar 2024 14:00

A Mystery for the Ages

YouKnowWhoTheFuckIAM@awful.systems on 17 Mar 2024 18:32

It’s a magnificent giveaway though. “All the stock images of that bird look the same to me”. Yeah, I agree that you’re not personally capable of critically assessing the material here.

self@awful.systems on 17 Mar 2024 19:20

“it’s not plagiarism, the output is just indistinguishable from plagiarism” oh how foolish of me to not consider the same excuse undergrads use to try and launder the paper they plagiarized

YouKnowWhoTheFuckIAM@awful.systems on 17 Mar 2024 12:33

stop saying ‘we’ unless you’re actually paid by these ghouls to work on this trash

self@awful.systems on 17 Mar 2024 12:57

they signed up here on the pretense that they’re an old r/SneerClub poster, but given how long they lasted before they started posting advertising for their machine god, I’m gonna assume they’re either yet another lost AI researcher come to dazzle us with unimpressive bullshit or a LWer trying to pull a fast one