AI is mostly hype. Be more worried about the humans behind it

A joke that has been circulating around the tech world for a while now is that the way to get VCs to eat out of your hand is to slap the words “AI,” “machine learning” and/or “blockchain” onto your pitch deck.

A1: Companies have learned that referring to “machine learning” as “AI” results in VCs firing the Money Cannon at your office. #cloudchat

— Corey Quinn (@QuinnyPig) December 21, 2017

As they say, the best jokes are indistinguishable from reality.

I’ve come to the conclusion that most of the exuberance about artificial intelligence is founded on sand. I’ve gotten there both from a closer look at what passes for “artificial intelligence” today and from a good understanding of how software itself works and, importantly, how it is developed within a corporate setting.

The vast majority of the AI hype out there today is pure attention-seeking nonsense. Generously, one might see it as yet another example of a certain genre of tech hucksterism (see: self-driving cars, internet-beaming drones, 3D printing). Less generously, it isn’t hard to see the lavish marketing machine around AI as a strategy to depict deliberate choices by dominant software platforms as technical inevitabilities.

“AI” is not something anyone needs to be worried about. A world mediated by unaccountable corporate software platforms is.

AI-washing

The term “AI” is increasingly stretched to apply to any new field of software – and even to quite old ones. This is a widely remarked-upon phenomenon in “AI” research: whatever technique actually ends up solving a given problem gets reclassified as its own field, and is thus no longer “really AI.”

Translation software was once considered by serious people to be “AI” – until it became easy. Optical pattern recognition, natural language processing, auto-navigation and chess all went the same way. To be sure, IBM’s Deep Blue and, later, Watson systems proved very adept at the tasks they were pointed at. But as we all later found out, those systems became significantly less impressive when directed at tasks even a few degrees removed from the ones they were specially designed for. They were simply very powerful tools, not “intelligence” in any true sense.

The problem with this trend is not that it waters down a term like “artificial intelligence,” whose definition is effectively half science fiction anyway. Rather, it recasts deliberate human choices as if they were derived dispassionately from data, thus imbuing them with some manner of impartial truth.

Amazon’s embarrassing experiment with an “AI” resume review system that systematically downgraded female applicants is a perfect example. The system produced “correct” results in the sense that they mostly conformed to the training data. That data, however, came from the real world, and was thus ineluctably marked by the exact biases the automated system was supposed to overcome. Thus, the results were totally useless. Garbage in, garbage out.

(It’s also worth mentioning that the approach Amazon used sounds like a fairly straightforward model-scoring analytic technique, which has been quite common in many industries for decades. Is model scoring “AI” now?)

In this way, the model could only be as good as the data used to train it. In the real world, that data is in no way impartial or free of bias. Yet casting this as a failure of “AI,” in the sense that the technology “just isn’t ready yet,” misses the real cause: that humans are affected by real-world inequities which inevitably influence human-built technologies. There is just no way to “product” our way around this.
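To make the point concrete, here is a minimal sketch of how this happens – toy resumes and labels invented for illustration, nothing from Amazon’s actual (non-public) system. An ordinary text classifier trained on historical hiring decisions that skew male assigns negative weight to tokens correlated with female applicants, while scoring perfectly “correctly” against its own training data:

```python
# Minimal sketch: a scoring model absorbs bias from its training labels.
# Toy data invented for illustration; not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of men's rugby team, software engineering intern",
    "men's chess club president, built compilers",
    "led women's coding society, software engineering intern",
    "women's chess club captain, built compilers",
]
hired = [1, 1, 0, 0]  # historical decisions -- the bias lives here, not in the math

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect what the model learned about each token.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"], weights["men"])  # negative weight vs. positive weight
```

The model has no notion of gender; it simply learned that “women” co-occurs with rejection in its training set. The math works exactly as designed – the design is the problem.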

How the software sausage is made

Global, monopolistic platforms like Google, Facebook and Amazon do not pursue “AI” as a science project. Nor do hospitals, insurance companies, banks, airlines or governments. They pursue it for specific, strategic purposes, which in corporate settings are aimed at generating new revenue. Like any technology choice, these purposes can sometimes be aligned with consumer benefit, but often are not; the determination always comes down to: it depends!

In many ways, the adoption of “AI” closely resembles that of other tech buzzwords like “big data” and “analytics.” Sometimes they’re literally the same thing: where banks once used “big data” to score mortgage applicants for estimated creditworthiness, now it’s called “AI” because it uses a form of automatic adjustment to statistical models, which we now call “machine learning.” Mysteriously, this has not eliminated demonstrable racial bias in mortgage lending or auto insurance, to pick just two of many such examples. The reason is not only that race is used as a factor in assigning risk (not unlike Amazon’s system picking “female” as a proxy for “inadequate”), but also that lenders are obviously incentivized to find new ways to make money on loans and insurance rates. And it turns out that racially discriminatory lending and coverage can be quite profitable – particularly when enabled with technological precision.
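To see why simply “excluding race” fails, consider this hedged sketch – entirely synthetic data and invented feature names, not any real lender’s model. The protected attribute is never given to the model, but a correlated proxy (a stand-in “zip-code group”) plus a real-world income gap lets it reproduce the historical disparity anyway:

```python
# Sketch of proxy discrimination on synthetic data. The protected
# attribute 'group' is never a model feature, yet approval rates diverge.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                    # protected attribute (hidden from the model)
zip_code = (group + (rng.random(n) < 0.1)) % 2   # proxy feature: ~90% correlated with group
income = rng.normal(50 + 10 * (group == 0), 10)  # synthetic real-world income gap

# Historical approvals were themselves biased against group 1.
approved = income + 15 * (group == 0) + rng.normal(0, 5, n) > 55

X = np.column_stack([income, zip_code])          # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
```

Dropping the sensitive column is not the same as removing the bias; the signal simply routes through whatever correlates with it.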

From banking and insurance to YouTube’s recommendations and Facebook’s News Feed, it has become extremely popular to point to the “AI” governing the critical decisions each system makes. One reason is that “AI” is easy to blame when things go wrong (“whoops, the system made a mistake!”), but it also conceals the deliberate human choices behind how those systems actually work. Facebook has built the “AI” behind its News Feed to maximize engagement, and YouTube has built its recommendations to (somewhat clumsily) match interests and keep you watching videos. This is why, as a 30-something white man, portals into the alt-right rathole literally follow me around these platforms. Blaming “AI” for these choices is rich theater by executives who don’t wish to be pressed on the negative externalities of their engagement-maximizing directives.
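None of this requires exotic machinery, either. As a deliberately crude sketch (invented names and numbers, not any platform’s real code), the “AI” at the heart of an engagement-maximizing feed can be thought of as little more than a sort key chosen by a product team:

```python
# Crude sketch: an engagement-maximizing feed is, at its core, a human-chosen
# objective. The ranking has no concept of whether the content is good for you.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # output of some upstream trained model

def rank_feed(items: list[Item]) -> list[Item]:
    # The deliberate choice is this sort key, not an emergent machine mind.
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)

feed = rank_feed([
    Item("calm, accurate explainer", 0.02),
    Item("outrage bait", 0.31),
    Item("conspiracy rabbit hole, part 1 of 40", 0.27),
])
print([item.title for item in feed])  # the incendiary items rise to the top
```

Swap the sort key and the feed changes character overnight – which is why “the algorithm did it” describes a choice of objective, not an act of God.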

Engineers and product managers behind these systems, whether they sit at Facebook, Google, Bank of America or Aetna, have specific revenue and/or engagement goals to hit, and design their products accordingly. Companies often have good reason not to discuss those goals publicly, and thus deploy stories about their investment in “AI” as both cover story and positive marketing. Politicians and the public at large still eat up much of this messaging without criticism, in part because “AI” sounds much less scary than “big data,” though they’re often effectively the same thing.

What are we even talking about?

No one has any idea what “artificial intelligence” even means.

One group still profoundly dubious about “AI” that you may not have heard much …
