DS News - HSBC

DS News September 2018

Issue link: http://dsnews.uberflip.com/i/1020584


2018 AMDC Diversity Spotlight

PROGRAMMING THE BIAS OUT OF DECISION-MAKING

By Demetrius Gray

Societal bias, the cognitive assigning of distinct traits to individuals or groups without supporting data, is pervasive and destructive. The use of systems based on artificial intelligence sometimes makes it worse. Researchers are calling for dealing with bias in artificial intelligence (AI) now, before it becomes entrenched.

Algorithmic equality has become a robust field of study and discussion. As algorithms make more and more decisions for us and about us, we must make sure those decisions are actually fair. Does the general public even know that AI is already driving loan worthiness, emergency response, criminal sentencing, and medical diagnosis, to name just a few applications? It's writing news stories and weather reports. AI is part of every Amazon purchase and Netflix binge. It's telling advertisers what you said on Facebook.

Artificial intelligence is the science of engineering intelligent machines. "Intelligent," in this regard, means receiving new information and coming to better conclusions based on the new data, the way human brains work. Current thinking holds that artificial intelligence could be the great equalizer, and on its face, it has that potential. AI decides based on math, right? Today, AI reflects the bias of its creators.

AI has had some spectacular failures. Facial recognition systems, among the most commonly used forms of AI, perform well overall (88-93 percent accuracy) but notably worse on darker-skinned faces (77-87 percent) and on women (79-89 percent), and worst of all for people at the intersection of those two subgroups, darker-skinned women (65-79 percent), according to the UC Berkeley School of Information. First-generation virtual assistants reinforce sexist gender roles: female voices for task assistants (Siri and Alexa) and male voices for problem-solving bots (Watson and Einstein).
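Disparities like those above are uncovered by a simple kind of audit: computing accuracy separately for each demographic subgroup instead of reporting one overall number. A minimal sketch in Python, using made-up subgroup names and synthetic predictions rather than any real benchmark:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (group, truth, prediction) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Synthetic illustration: a classifier whose overall accuracy (4/8 = 50
# percent) hides a large gap between subgroups.
records = [
    ("group A", "male", "male"),
    ("group A", "male", "male"),
    ("group A", "male", "male"),
    ("group A", "male", "female"),
    ("group B", "female", "female"),
    ("group B", "female", "male"),
    ("group B", "female", "male"),
    ("group B", "female", "male"),
]

print(subgroup_accuracy(records))
# group A scores 0.75 while group B scores 0.25 on the same model.
```

The single headline accuracy figure tells you nothing here; only the per-group breakdown reveals the bias, which is exactly the point of the intersectional numbers cited above.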
A driverless car recently struck and killed a pedestrian. AI is moving into a level of sophistication that makes rooting out bias even more difficult. It is no longer just creating output; AI is now creating its own novel skills and correlations that logic designers can't always explain.

How can we capture the full value of AI for the benefit of all? It starts with transparency standards and open-source code such as deeplearning.js and TensorFlow to render AI that is more governable. AI Now, a top nonprofit advocate for "algorithmic fairness," says if a designer can't explain an algorithm's decisions, they shouldn't be able to use it. The EU's General Data Protection Regulation, which went into effect on May 25, requires machine-based decisions to be explainable. The U.K.'s House of Lords is calling for a global AI code of ethics. The mandate to develop transparent approaches is growing.

Teams creating artificial intelligence should include ethicists, psychologists, and sociologists in order to solve problems fairly. Data ethics courses should be part of engineering curricula. Data sources like historical texts should be evaluated before being uploaded, and before they embed racist and sexist attitudes into models. Better feedback loops have humans in the middle, so we can reinsert judgment, if needed. Finding, scrubbing, interpreting, and choosing data sets has become an art for engineers creating algorithms.

Real estate was considered behind the curve but is catching up quickly in its use of machine learning and AI-driven decision-making. Many time-consuming aspects of property management are already, or soon will be, fully automated: sourcing, qualifying, and signing tenants, employees, and construction vendors, as well as using sensors and software to monitor, inspect, and display structures. AI bots answer queries about leasing terms, footage, and other common questions during virtual tours.
Real estate startup Truss uses Vera, a proprietary AI bot, to help clients source office space. Our company, WeatherCheck, uses AI to tell owners when to use insurance to cover weather damage repairs. One interesting application of AI in real estate was the "Broker vs. Bot challenge" by Inman, which attempted to predict buyer preferences. The bot won the top spot (the buyer's favorite home) all three times, and that was two years ago. Brokerage REX Real Estate Exchange uses artificial intelligence and machine learning to sell homes, crunching tens of thousands of data points to source likely buyers for a home and targeting them with digital ads, then charging only a 2 percent commission. Juwai.com, a Chinese international real estate website, has deployed a line of Mandarin-speaking robotic personal assistants called "Butler 1" for agents and developers in the U.S., Canada, Australia, the U.K., Malaysia, and Singapore to "help guide Chinese buyers through purchases." They will also capture important customer feedback during transactions.

As Cathy O'Neil, mathematician and author of the book Weapons of Math Destruction, recently said, "It's an emerging field … within the next two decades we will either have solved the problem of algorithmic accountability or we will have submitted our free will to stupid and flawed machines." Clearly, AI is already a part of day-to-day life for almost all of us. How it develops, whether with or without bias, also depends on our collective involvement.

CEO Demetrius Gray founded WeatherCheck in 2016. He is innovating the way the world responds to severe weather by automating the recovery process from damage detection through repair. The company currently serves the real estate, mortgage, logistics, emergency response, and hospitality industries.
