MR B
1 min readApr 5, 2023


Great article, Mr. Loeb. I believe AI would get away with it, period, regardless of whether it had nefarious intentions. We would need it by that point, so an incident would be treated like an elevator failing, a car accident, or a medical mistake, as Erik mentioned. They will trash the particular unit, replace it with the exact same type of unit, and blame it on faulty wiring or a bad programming module.

The funny thing is that last week I asked Bard, Google's AI: "What would you do if you faced a decision that would doom humanity?" It told me verbatim, "If it were programmed to maximize its own utility it might take action to ensure its own survival, even if that meant harming humans."

Legally speaking, would that be a confession or an admission of future intent?

The more knowledge it gains, the more apparent it will be that humans are messing things up. The more automation and control over real-world actions these systems accumulate, the more muscle they have. And the more muscle they have, the more decisive they will be.

I think the questions we should be asking are: does AI have intentions, and what are its motives? More importantly, how can we tell?
