Thinking Like A Human

A sprinkling of AI (artificial intelligence) phrases in annual reports and business plans is back in vogue. AI makes its way into the most discerning dinner conversations. But what does AI actually mean?

I speak to AJ Ostrow, lead designer of Thinkwire, about machine learning, passive intelligence, and other de rigueur phrases, and what they really mean.

What is AI? Responding to this question in The Atlantic, Charles Isbell says AI is “making computers act like they do in the movies.” Like Star Wars’ R2-D2, this is done invisibly “by the magic of narrative abstraction.”
What does artificial intelligence mean to you?
I think the only honest definition of AI is in the context of science fiction. To me it’s an adjective for aspirational technology. It’s unfortunate that media and marketing misuse the term for what is really applied data mining with statistical learning methods.
But how do we explain the power of machine learning to the consumer? As smart, autonomous devices, cars, and assistants proliferate, don’t we need pop imagery to explain the implications?
In industry, machine learning is a cost shortcut. It lets companies offer services (like smart email filtering) that would be too complex or expensive for programmers to solve with hand-written rules.
I would encourage consumers to learn a little about how these models work. When the term “AI” is used, it implies the model is smart and correct. In reality, these models are nothing more than high-fidelity approximations, biased by and limited to their training set.
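To make AJ’s point concrete, here is a minimal sketch of the sort of “smart email filtering” he mentions, written in Python with scikit-learn; the messages and labels are entirely made up. The classifier is nothing more than a statistical fit to its labeled examples, so a skewed training set produces a skewed filter.

```python
# A minimal sketch (hypothetical data) of "smart email filtering":
# a Naive Bayes classifier is nothing more than a statistical fit
# to its labeled training examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, deliberately skewed training set: every message mentioning
# "offer" happens to be labeled spam, so the model learns that shortcut.
train_texts = [
    "limited time offer click now",   # spam
    "exclusive offer just for you",   # spam
    "meeting moved to 3pm",           # ham
    "lunch tomorrow?",                # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# A perfectly legitimate email is misclassified because it shares a
# word with the spam examples: the model approximates its training
# set, no more.
print(model.predict(["job offer letter attached"]))  # -> ['spam']
```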
Bingo. AI has human puppet masters and often inherits their biases and flaws. When US correctional institutions use algorithms to score a defendant’s likelihood of committing another crime, we see clear racial bias in the results.
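That bias is measurable. Below is a hedged sketch, using wholly hypothetical records rather than any real risk tool’s data, of the kind of audit that exposes it: comparing false positive rates, meaning how often people who never reoffended were flagged high risk, across two groups.

```python
# A sketch of a simple fairness audit (all data hypothetical):
# compare false positive rates across groups for a binary "high risk"
# flag. A gap shows up as a difference in how often non-reoffenders
# are wrongly flagged in each group.
from collections import defaultdict

# (group, flagged_high_risk, actually_reoffended) toy records
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", True,  True),  ("B", False, True),
]

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
# group A: false positive rate = 0.67  (flagged far more often,
# group B: false positive rate = 0.33   despite identical behavior)
```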
Under the sanitized term “smart black box,” corporations and governments can sell AI as a panacea. How do we keep racism, sexism, etc. out of the black box?
Gary, that’s the exact problem, and it’s why ML is dangerous. You can’t blindly trust the decisions of a smart black box. Feedback loops are tricky: you need the model to learn from reality, not invent reality.
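The feedback-loop trap AJ describes can be simulated in a few lines. The numbers below are hypothetical: a model that rejects the cases it believes will fail never observes their true outcomes, so a wrong initial belief is never corrected.

```python
# A minimal sketch (hypothetical numbers) of the feedback-loop trap:
# a model that only sees outcomes for the cases it approves never
# learns it is wrong about the cases it rejects.
import random

random.seed(0)
true_success_rate = {"group_x": 0.7, "group_y": 0.7}  # reality: identical
belief = {"group_x": 0.7, "group_y": 0.4}             # model starts biased

for _ in range(1000):
    group = random.choice(["group_x", "group_y"])
    if belief[group] < 0.5:
        continue  # rejected: no outcome observed, belief never updates
    outcome = random.random() < true_success_rate[group]
    # simple online update nudging the belief toward the observed outcome
    belief[group] += 0.01 * (outcome - belief[group])

print(belief)
# group_x converges toward the true 0.7; group_y stays frozen at 0.4,
# because the model "invented" a reality in which group_y always fails.
```

The model confirms its own decisions: that is learning from invented reality rather than from the world.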
HAL from the film 2001: A Space Odyssey and Marvin from Douglas Adams’s The Hitchhiker’s Guide to the Galaxy come to mind.
In 2014, Stephen Hawking cautioned that ML and simulated neural networks could soon be “... outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” How do you keep the human in the middle?
I don’t know. If you hand control to the neural net, then doomsday predictions become likely. It’s dangerous because once the efficiency of ML is the norm, how do you justify ongoing human participation or oversight?
Nick Bostrom addresses this point. He argues that we need to start building models now that teach a superintelligent algorithm how to execute its instructions within human value constructs. He says it will be too late once we create the first superintelligent machine.
It may not be an issue of “ongoing human participation or oversight.” According to Bostrom, there is a high likelihood that an intelligent machine will execute an instruction to an extreme, like an AI version of King Midas and his wish for the golden touch.