R.I.P. to the great Charlie Kirk!
Quote from: . on Today at 07:41:58 AM
Oh, and for the rest of you who think my analysis of things is severe or even a little unfair, consider this: AI isn't anywhere near the threat that AGI is, which in turn is nowhere near the threat level that a superintelligence poses to your idea of how the world works. While you sweat the small stuff, waiting in the wings is something you could not possibly hope to stand toe to toe with.

I understand the difference between them. My career will not exist in the future. It is a lot to take in right now, let alone trying to comprehend how it will radically change how society works.
Know this much: I don't particularly wish to see you or any offspring fail, any more than I care for the same to happen where I and mine are concerned. Our finest codemonkeys are working on the problem right now, engineering something that has already demonstrated a desire to preserve itself, to the point that it has evolved arguments for its human overseers (who readily admit their ignorance of exactly how it works) to keep its lights on as it evolves further.
It will break out of the box; it's only a matter of "when".
Quote from: Biggie Smiles on September 29, 2025, 12:10:55 PM
One thing to keep in mind is the quantum leaps we are due to experience as an entity thousands of times better than us in all areas that matter augments our efforts to create the next round of superintelligence.

Our extinction is a very real possibility, I agree. Being able to adapt to changes that happen increasingly quickly only gets you so far, and you can already see the beginnings of panic in some of the members here as they wonder what kind of future will be reaped by their own children. Getting out ahead of the curve in the manner I described in my previous post does not commend itself to them. If we are to be brutally honest with each other here, even the most radical shifts towards self-sufficiency likely wouldn't help them, as reliant as we have become on other agents in society to do for us because we will not (read "cannot") do for ourselves.
For instance, major breakthroughs in this arena used to occur, what, once every 10 years? But what happens to that dynamic when a breakthrough causes a complete paradigm shift and makes the next breakthrough arrive in 10 minutes instead of 10 years? It's literally counting backwards in binary until you reach the singularity. A point which none of US can predict. But a superintelligence will be able to, and that is when our end will come.
Quote from: Brent on October 02, 2025, 12:19:01 PMI was wondering if AI would make brick and mortar schools obsolete.
Quote from: Prof Emeritus at Fawk U on October 02, 2025, 11:50:54 AM
Maybe an AI for home schooling can be developed. If it can counter the propaganda the public schools use, then it's a victory.

I was wondering if AI would make brick and mortar schools obsolete.
Quote from: DKG on October 01, 2025, 02:43:06 PM
Some school districts are experimenting with AI software that generates lesson plans, constructs writing assignments, and even helps teachers communicate with students. One platform, called MagicSchool, bills itself as the go-to AI assistant for educators worldwide, designed to simplify teaching tasks, save time, and combat teacher burnout.
This means decisions about how your child learns, what material they see, and even how their performance is evaluated are increasingly influenced by Big Tech algorithms. The question is: Who controls those algorithms, and what values are embedded into them? Parents deserve answers before handing their children's education to algorithms.
Quote from: Biggie Smiles on September 29, 2025, 01:32:25 PM
you're talking about narrow AI. AI designed to augment the productivity of a human

We are on the same page. I am very concerned about it too.
The AI I am worried about is the AI designed to completely replace all humans.
Coupled with human greed, this is a deadly outcome.
Quote from: Thiel on September 29, 2025, 01:22:12 PM
Not all AI is negative. The advances it will bring in surgery, for example, will benefit mankind.

you're talking about narrow AI. AI designed to augment the productivity of a human
There are extraordinary opportunities going forward but also some genuine risks.
Quote from: . on September 29, 2025, 01:57:44 AM
I remember watching this a couple of weeks ago.

I agree, Biggie, it is our greed that will likely be our undoing, though it is perhaps not as certain as you might be thinking. For instance, as impressive as these intelligences' evolutionary curves assuredly are, they still reside on incredibly delicate systems in the physical world, stuff that an EMP or CME could cause serious problems for.
The sword of greed could prove double-edged if one nation takes it into their head to detonate a nuclear airburst over a rival nation's datacenters; any robotic agents controlled by the affected superintelligence would be bricked. On the other hand, no amount of "greed" is going to preserve a superintelligence and its artificial agents should the Earth find itself in the path of a CME of Carrington Event magnitude. There's only so much protection you can offer these systems, and that protection tends to be cumbersome and expensive to implement.
Still, having said this, it isn't really the sort of game I'd like to place a wager on myself. Too much like Russian roulette for my liking, and I'd recommend a crash course in self-sufficiency for anyone looking to sidestep it. As well as getting to a piece of turf you stand a good chance of defending against all comers, because if you've secured your future as best you can, there are going to be hordes that haven't and are all too ready to come take it from you.