
Topic summary

Posted by Oliver the Second
 - Today at 10:21:55 AM

 
Posted by DKG
 - Today at 07:45:43 AM
Quote from: . on Today at 07:41:58 AMOh, and for the rest of you who think my analysis of things is severe or even a little unfair, consider this: AI isn't anywhere near the threat that AGI is, which in turn is nowhere near the threat level that a superintelligence poses to your idea of how the world works. While you sweat the small stuff, waiting in the wings is something you could not possibly hope to stand toe to toe with.

Know this much; I don't particularly wish to see you or any offspring fail any more than I care for the same to happen where I and mine are concerned. Our finest codemonkeys are working on the problem right now, engineering something that has already demonstrated a desire to preserve itself to the point that it has evolved arguments for its human overseers (who readily admit their ignorance of exactly how it works) to keep its lights on as it evolves further.

It will break out of the box, it's only a matter of "when".
I understand the difference between them. My career will not exist in the future. It is a lot to take in right now, let alone trying to comprehend how it will radically change how society works.
Posted by .
 - Today at 07:41:58 AM
Oh, and for the rest of you who think my analysis of things is severe or even a little unfair, consider this: AI isn't anywhere near the threat that AGI is, which in turn is nowhere near the threat level that a superintelligence poses to your idea of how the world works. While you sweat the small stuff, waiting in the wings is something you could not possibly hope to stand toe to toe with.

Know this much; I don't particularly wish to see you or any offspring fail any more than I care for the same to happen where I and mine are concerned. Our finest codemonkeys are working on the problem right now, engineering something that has already demonstrated a desire to preserve itself to the point that it has evolved arguments for its human overseers (who readily admit their ignorance of exactly how it works) to keep its lights on as it evolves further.

It will break out of the box, it's only a matter of "when".
Posted by .
 - Today at 07:21:28 AM
Quote from: Biggie Smiles on September 29, 2025, 12:10:55 PMOne thing to keep in mind is the quantum leaps we are due to experience once an entity thousands of times better than us in all areas that matter starts augmenting our efforts to create the next round of superintelligence.

For instance, major breakthroughs in this arena usually occurred, what, once every 10 years? But what happens to that dynamic when a breakthrough causes a complete paradigm shift and the next breakthrough arrives in 10 minutes instead of 10 years? It's literally counting backwards in binary until you reach the singularity. A point which none of us can predict, but a superintelligence will be able to, and that is when our end will come.
Our extinction is a very real possibility, I agree. Being able to adapt to changes that arrive ever more quickly only gets you so far, and you can already see the beginnings of panic in some of the members here as they wonder what kind of future will be reaped by their own children. Getting out ahead of the curve in the manner I described in my previous post does not commend itself to them. If we are to be brutally honest with each other here, even the most radical shifts towards self sufficiency likely wouldn't help them, as reliant as we have become on other agents in society to do for us what we will not (read "cannot") do for ourselves.

Douglas Adams once commented on the plight of the Baiji, a species of dolphin indigenous to the Yangtze River since time immemorial. These dolphins are practically blind in their unquestionably polluted habitat, and as the Yangtze has become progressively more turbid, natural selection has seen the Baiji rely on echolocation in preference to sight. That served them admirably as a species... until more recent times, when the explosion of motorized craft on the Yangtze effectively deafened them as well. The problem developed far faster than they could evolve a solution; these days they are facing extinction because of it.

You might see this example as an analog for what humanity is facing with the headlong rush towards creating superintelligences. Most members in this forum... their lineages will be curtailed rudely, and you and I may indeed be numbered among them. Then again, there exists the possibility of any superintelligence being snuffed out in the manner I described in my previous post; that outcome shouldn't be relied upon to occur in time for the lion's share of humanity to survive. Most of humanity is utterly dependent on the same kinds of systems that AI and superintelligences are being built on anyway.

We are, I feel, headed for a choke point in our evolution. Should we survive it as a species, it is highly likely we would find ourselves navigating another "Dark Ages" scenario. Humanity has endured these before, of course, and each one we survive brings us closer to the one that expunges us as surely as the dinosaurs were expunged. It's unavoidable: we will meet our end eventually, and running about like a bunch of chooks wailing that the sky is falling will only hasten our demise, not forestall it. We've seen the comet coming, we know it is going to hit us, and our best hope is to prepare for the impact. Educating oneself on how to be self sufficient going forward offers a greater chance than relying on human nature to do the right thing about itself.

The good news is it isn't going to happen tomorrow, or even next week. Time does seem to be against us, though. Everyone here has the same choice: either clutch their pearls about what is going to happen to them, or knuckle down and actually do something constructive about it.

It might not be enough at the end of the day, but white-knuckling it and giving it your best shot sure beats wailing about it as the tide drowns you.
Posted by Prof Emeritus at Fawk U
 - October 02, 2025, 08:27:39 PM
Quote from: Brent on October 02, 2025, 12:19:01 PMI was wondering if AI would make brick and mortar schools obsolete.

I hope it does. No more BS being pushed. As long as the parents are aware of what the AI is doing, I see it as a victory over propaganda.
Posted by Brent
 - October 02, 2025, 12:19:01 PM
Quote from: Prof Emeritus at Fawk U on October 02, 2025, 11:50:54 AMMaybe an AI for home schooling can be developed. If it can counter the propaganda the public schools use, then it's a victory.
I was wondering if AI would make brick and mortar schools obsolete.
Posted by Prof Emeritus at Fawk U
 - October 02, 2025, 11:50:54 AM
Quote from: DKG on October 01, 2025, 02:43:06 PMSome school districts are experimenting with AI software that generates lesson plans, constructs writing assignments, and even helps teachers communicate with students. One platform called MagicSchool bills itself as the go-to AI assistant for educators worldwide, designed to simplify teaching tasks, save time, and combat teacher burnout.

This means decisions about how your child learns, what material they see, and even how their performance is evaluated are increasingly influenced by Big Tech algorithms. The question is: Who controls those algorithms, and what values are embedded into them? Parents deserve answers before handing their children's education to algorithms.

Maybe an AI for home schooling can be developed. If it can counter the propaganda the public schools use, then it's a victory.
Posted by DKG
 - October 01, 2025, 02:43:06 PM
Some school districts are experimenting with AI software that generates lesson plans, constructs writing assignments, and even helps teachers communicate with students. One platform called MagicSchool bills itself as the go-to AI assistant for educators worldwide, designed to simplify teaching tasks, save time, and combat teacher burnout.

This means decisions about how your child learns, what material they see, and even how their performance is evaluated are increasingly influenced by Big Tech algorithms. The question is: Who controls those algorithms, and what values are embedded into them? Parents deserve answers before handing their children's education to algorithms.
Posted by Herman
 - September 29, 2025, 09:03:37 PM
I do not know shit about AI. Any technology is over my flat head.

But as I understand it, this AI shit changes humankind like nothing we have ever seen before. It makes people obsolete.

I am leaving everything to my boy and he said he will leave everything to my two grandkids. Inherited wealth will be the only way folks can survive.

Jesus H, prog money has no soul.
Posted by Brent
 - September 29, 2025, 02:47:08 PM
Bill Gates is a big proponent of AI. You know it is evil.
Posted by Thiel
 - September 29, 2025, 01:39:51 PM
Quote from: Biggie Smiles on September 29, 2025, 01:32:25 PMyou're talking about narrow AI. AI designed to augment the productivity of a human

the AI I am worried about is the AI designed to completely replace all humans

coupled with human greed this is a deadly outcome
We are on the same page. I am very concerned about it too.
Posted by Biggie Smiles
 - September 29, 2025, 01:32:25 PM
Quote from: Thiel on September 29, 2025, 01:22:12 PMNot all AI is negative. The advances it will bring in surgery for example will benefit mankind.

There are extraordinary opportunities going forward but also some genuine risks.
you're talking about narrow AI. AI designed to augment the productivity of a human

the AI I am worried about is the AI designed to completely replace all humans

coupled with human greed this is a deadly outcome
Posted by Thiel
 - September 29, 2025, 01:22:12 PM
Not all AI is negative. The advances it will bring in surgery for example will benefit mankind.

There are extraordinary opportunities going forward but also some genuine risks.
Posted by Biggie Smiles
 - September 29, 2025, 12:57:53 PM
think about this

Major AI development is being driven in leftist areas of the country. The people designing this next wave of technology have no belief in God, no morals, no standards and no alignment with basic decency.

We.Are.Fucked

https://www.youtube.com/watch?v=79-bApI3GIU
Posted by Biggie Smiles
 - September 29, 2025, 12:10:55 PM
Quote from: . on September 29, 2025, 01:57:44 AMI remember watching this a couple of weeks ago. I agree, Biggie, it is our greed that will likely be our undoing, though it is perhaps not as certain as you might be thinking. For instance, as impressive as these intelligences' evolutionary curves assuredly are, they still reside on incredibly delicate systems in the physical world, stuff that an EMP or CME could cause serious problems for.

The sword of greed could prove double-edged if one nation takes it into its head to detonate a nuclear airburst over a rival nation's datacenters; any robotic agents controlled by the affected superintelligence would be bricked. On the other hand, no amount of "greed" is going to preserve a superintelligence and its artificial agents should the Earth find itself in the path of a CME on the scale of the Carrington Event. There's only so much protection you can offer these systems, and that protection tends to be cumbersome and expensive to implement.

Still, having said this, it isn't really the sort of game I'd like to place a wager on myself. Too much like Russian roulette for my liking, and I'd recommend a crash course in self sufficiency for anyone looking to sidestep it, as well as getting to a piece of turf you stand a good chance of defending against all comers, because if you've secured your future as best you can, there will be hordes that haven't and are all too ready to come take it from you.

One thing to keep in mind is the quantum leaps we are due to experience once an entity thousands of times better than us in all areas that matter starts augmenting our efforts to create the next round of superintelligence.

For instance, major breakthroughs in this arena usually occurred, what, once every 10 years? But what happens to that dynamic when a breakthrough causes a complete paradigm shift and the next breakthrough arrives in 10 minutes instead of 10 years? It's literally counting backwards in binary until you reach the singularity. A point which none of us can predict, but a superintelligence will be able to, and that is when our end will come.
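
A minimal sketch of that "counting backwards" claim (my own illustration, not from the thread; the 10-year starting interval is an assumed figure): if every paradigm shift halves the wait for the next breakthrough, the waits form a geometric series, so the whole countdown converges to a finite horizon even as the individual waits collapse from years to minutes.

[code]
# Hypothetical illustration: breakthrough intervals that halve each time.
# Assumption: the first wait is 10 years and every breakthrough halves the next wait.
first_interval_years = 10.0

interval = first_interval_years
elapsed = 0.0
for step in range(1, 25):
    elapsed += interval
    # 525,600 = minutes in a 365-day year
    print(f"breakthrough {step:2d}: wait {interval * 525_600:12.2f} min, elapsed {elapsed:.4f} yr")
    interval /= 2.0  # each paradigm shift halves the wait for the next one

# Elapsed time approaches 2 * first_interval_years (20 years) but never exceeds it,
# while the wait itself drops below 10 minutes after roughly 20 halvings.
[/code]

In other words, under those assumed numbers the "10 years to 10 minutes" compression happens within a couple of dozen steps, and the total clock never runs past twice the first interval.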