We were talking about machine rights and when machines will surpass humans.
We are exploring ways to help create AI with good intent and impact, and the move towards multiple versions of the same product.
The biggest issue in AI today
We are seeing a huge change in the dialogue about bias in the development and scaling of AI and ML technologies: it has become a public conversation. Only two years ago this was a concern debated mostly by researchers, but now, having opened the door (even if only slightly) to the negative consequences these systems might bring, it has become a much more widespread concern.
In discussing this issue, two main questions come up when talking about “the biggest issue in AI today”: How do we ensure good intent, and how do we end up with a positive impact, when developing new technologies?
These questions were discussed in many settings during the conference, but with one clear takeaway: we need to change our mindset from innovating new AI systems at all costs to innovating for the well-being of people. The technology is mature enough for us to use it properly.
Douglas Rushkoff, author of the book Team Human, expressed it in a blunt fashion: “Instead of creating technologies for people to use, we created technologies that use people.”
When I think about responsive machines it is not enough to remove the bias, we have to tell the machines why this bias is wrong. If we really aspire to build good robots we want that system to understand why something is wrong.
- Aleksandra Przegalinska, Assistant Professor at Kozminski University and Research Fellow at MIT Sloan School of Management
DSaaS - Data sets as a Service
One inherent issue in the bias discussion lies in the fact that a data point is nothing more than something someone thought was worth capturing. In this sense, all data contains some kind of subjectivity, and thereby it will contain bias.
What we see now is that large, freely and publicly available data sets, such as the emails from the Enron scandal, are being used to train models. If we want to create a model that communicates like a white male in his 40s with somewhat clouded judgement, that could be a good source of data. More likely, it will cause these algorithms to be biased and ultimately lead to morally questionable outcomes and harmful decisions (think, for example, of Amazon’s experimental recruitment tool that was found not to be gender-neutral).
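One way to make such skew visible before any training happens is a simple representation audit of the data set. The sketch below is a minimal illustration only; the toy records, the `author_gender` field and the function name are hypothetical stand-ins for whatever metadata a real corpus carries:

```python
from collections import Counter

def representation_report(records, group_key):
    """Return each group's share of the data set.

    A heavily skewed distribution here is a warning sign that a model
    trained on the data may inherit that skew as bias.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical toy corpus standing in for e.g. an email data set.
emails = [
    {"author_gender": "male"},
    {"author_gender": "male"},
    {"author_gender": "male"},
    {"author_gender": "female"},
]

print(representation_report(emails, "author_gender"))
# {'male': 0.75, 'female': 0.25}
```

An audit like this does not remove bias, but it turns the vague worry that “the data is skewed” into a number you can act on, for example by re-sampling or sourcing complementary data.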
Herein lies a challenge for which we will likely see (and would like to see) many new service offerings – Data Sets as a Service. Tiffany C. Li, a technology attorney and legal scholar, suggested a potential solution: some form of licensing or copyright law that lets data sets be provided as a product or service, with the ultimate goal of improving their quality. Or is the right way forward to introduce auditors who, as objective third parties, can examine both data sets and models for unwanted bias?
What is Humane?
Assuming we find a way to successfully address the bias issue, the next question to ask – if our goal is to develop “Humane AI” – is: what is humane? In our interactions with these systems, we are looking for two qualities: they need to be helpful and appropriate.
We have come quite far in making AI helpful. It can guide us through the streets, recognize patterns and cats in pictures, and help us predict deviations in production systems (and many things in between), but it is not nearly as good at knowing when to do these things.
If an AI system is something we are going to interact with, it needs to know whether I really want it to correct me when I tell my kids that you can get cramps if you swim right after a meal, or whether it should let it slip. Much of this is given away by our tone of voice, facial expressions and gestures, but we have not yet figured out how to interpret that input at scale.
We will see a great deal of experimentation here, and success will boil down to how much we trust these systems: the more input we are willing to share, the more accurate the output will become.
What is appropriate?
When we want AI to intervene was a question raised by many speakers. How do we get it to behave differently when I’m in my car alone versus when I’m with my kids?
One intriguing theme touched on by many speakers was that we are moving towards a time where products and services are no longer simply personalized, but personalized for every version of their user. Because ultimately, what might make a system feel humane is that it changes with you.
Just as we ourselves are wrong in everyday decisions and comments, we need to allow our AI to be wrong as well. Not only does this create a more dynamic interaction, it is also what helps your system get to know you.
In the discussions around automation and autonomous systems, much of the conversation centred on how and where to build collaboration between humans and machines. When AI was the outspoken topic, however, most speakers took a more speculative approach, reasoning about how to find this next level of understanding in these more advanced systems. Finding that level, and adjusting the systems accordingly, will be a huge theme moving forward.
SXSW is one of the biggest digital conferences in the world, and a global meeting place for the world’s most innovative technology companies and for people interested in how disruption can transform their business and everyday lives. The event takes place over 10 days each year, and this year Cartina had the chance to be part of it.
This series covers six global megatrends that business leaders, experts, innovators and disruptors talked about during the days in Austin.