Google is reportedly trying to develop machines with common sense like people.
Actually, I hope Google’s newest project will be more about making machines with more common sense than the average person. Otherwise, they really haven’t set themselves a high bar, have they?
The project, according to The Guardian, involves complicated algorithms designed to encode thoughts.
Think “artificial intelligence.”
Think every science fiction story in which a computer takes over the world and targets humanity for extermination.
According to Professor Geoff Hinton, these algorithms will process “thought vectors,” which would allow them to approach human-like capacity for reasoning and logic.
I wish Hinton would develop a way to introduce more human beings to the notion of “thought vectors.” Then again, if he did, we’d probably have a lot less to talk about in the world, wouldn’t we?
Many of us like the idea of a computer that does what we want it to do, but also takes into account specific circumstances we might not have foreseen. Imagine a smart home that realizes a bad storm is coming and closes storm shutters — just to be safe — if the weather looks particularly severe. It makes a decision based on reasoning out various pieces of data at hand.
What if we’re inside that home, though, and there’s a medical emergency, but the computer won’t let us out because it concludes it’s too risky out there?
Isn’t this the kind of “thinking” a computer might one day have that all of us fear?
As long as the computer is subordinate to humans, and isn’t smart enough to prevent humans from keeping control, everything’s great.
But another mention in the article jumped out at me: a future scenario in which people will “chat with their computers” and see the computer as a “friend,” not a machine or gadget.
Do we really want our computers to be friends? Do we really want computers to have emotional influence over us, too?