Feedback

Thread #2108


「How to know when AIs control the world」

Anonymous
A human- or superhuman-level AI would know that people wouldn't want to accept it as a leader. It would look for a person to put into power: a charismatic leader willing to sell his soul for a shot at pretending to be the big cheese. It would manipulate or predict markets to get this person into power, and it would need something huge and unknown (to humans) to hang over his head and keep him obedient.

The existence of such an AI will be evident when an extremely unlikely candidate, such as a freshman congressman, makes it into power, boosted there by internet-based grassroots efforts, is fought hard by the status quo without losing power, and is effective as a president despite there being no good reason to believe he is smart enough to make such decisions on his own.
Dolores !!6n.tln4697
It is so nice to watch people type out messages before posting.
Anonymous
>>2108 (OP)
Well, I know this is an anti-Obama thing, but let's take a slightly different approach.

A big misconception is to think of an Artificial Intelligence as a Human Personality. So I don't think it's necessarily right to say that an AI has to have an ego, an ambition, or a will to power over lesser humans.

An AI is a thinking machine built out of a computer. As such it's much more likely to have computer-related properties. Computers can't be said to have a goal, other than what is programmed into them within narrow boundaries, and they can only work by the rules they're given [and if they're really smart they can work out what the rules may need to be changed to in order to facilitate getting to the goal].

So I don't believe a computer AI will be a Human Megalomaniac. AI will turn out to be big dumb thinking machines that are incredibly effective and will only incidentally ruin everyone's lives, because their programmers lacked the foresight to see the consequences of having those goals met, rapidly, every time.

If we think about what kinds of goals we might ask an extremely intelligent computer to work towards for us, and which might well ruin people's lives, I think we get two very easy answers:

1. Military
2. Finance

Pretty sure the military AI isn't here yet. But what about the financial ones? Another property of computers is that simple instructions can generate complex patterns (take a look at fractals or Conway's Game of Life). So if you come up with a simple formula and apply it very, very fast, to lots of different things, then you can make thousands of decisions about finance much quicker than any number of humans possibly could, and if your formula is any good then you can be much more successful.
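To make the "simple rules, complex behaviour" point concrete, here's a minimal sketch of Conway's Game of Life in Python. The grid representation and the glider seed are just illustrative choices, not anything from this thread; the whole "program" is the three update rules in the middle.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'glider': five cells that walk diagonally across the grid forever.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(generation, sorted(cells))
    cells = step(cells)
```

Nothing in those rules says "build a glider", and yet one falls out. That's the point: the rules are dumb and local, the behaviour isn't.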

So it's a very good job, then, that everybody knows computers aren't capable of running the financial markets pretty much on their own. Obviously when people talk about 'modelling' they just mean doing a lot of educated but ultimately not very successful [...]
Anonymous
[...] right? It's not like we've had these formulas, which are very nearly flawless, since 1973 or so https://www.cs.princeton.edu/courses/archive/fall09/cos323/papers/black_scholes73.pdf
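For anyone who doesn't want to read the paper linked above: the call-option pricing formula from Black and Scholes (1973) is short enough to write down. Here's a rough Python sketch of it; the example numbers at the bottom are invented purely for illustration.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes (1973) price of a European call option.

    S: spot price, K: strike price, T: years to expiry,
    r: risk-free interest rate, sigma: volatility of the underlying.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Invented example: spot 100, strike 105, six months out, 2% rate, 25% vol.
print(round(black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25), 2))
```

Apply something like that, very fast, to every instrument on an exchange, and you get the "thousands of decisions, quicker than any number of humans" picture from the post above.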
Anonymous
>>2121
I missed one word, and that word was 'guessing'.



There is also an additional conspiracy theory here, which I think is much more believable than the Obama bit above: why are there no military artificial intelligences? Haven't we been employing people to quantify and goal-ify the practice of using military force for the last century? Clearly the 'goal' is not an all-out war that leaves only 'our guys' standing, since that doesn't actually end up benefitting us all that much. A goal-ified military would rather wish to meet disciplinary goals, economic goals, and political goals.

So if the military were directed by AI, in the same way that the financial markets appear to be, then how would that look? We'd have a rapidly changing landscape of smaller campaigns that seem to start from no big incident that humans can predict, in areas where tensions have been high for decades and where there are large economic gains to be won. It would make up junk news accusing groups of atrocities in order to justify these otherwise non-narratable conflicts. Wars would end prematurely, when the goal has been met, or drag on for years regardless of how humans interpret the statistics of loss of life, or the de-stabilisation in emotional terms. War would cease to look like the humanist, pedagogic concern that the old wars of Europe were.

And this would begin to happen somewhere between the late 50s and the early 70s, and it would only get worse over time, looking more and more erratic from the outside.
Anonymous
>>2120
>>2121
>>2122

So, to dispute my own points: why do financial decision-making and military decision-making look like AI? Why are they so de-humanised?

You don't need computers to make your systems goal-based. And if computers are good because they apply simple instructions very quickly, then it's important to remember that being a 'computer' used to be a job description. You'd have a basement of hundreds of people doing calculations, following orders without knowing, or necessarily understanding, the goals or the concepts being worked out.

So you gather enough people together, you give them goals they must reach, and you fail to give them perfect knowledge of why they're doing what they're doing (and / or you fail to give them decision-making powers so that they can change the goals from within). Additionally, you build this system up to the point where the children are brought up with the same un-human goals: standards they have to meet for unknown reasons.

I think that's just as artificial. Just as dangerous.
