May 22, 2019

Empathic Interactions – Trend #5 of 6 from SXSW 2019


In 2018...

We were discussing human senses going digital, and how we will be able to enhance our senses with the help of technology.

In 2019...

We speak about discovering opportunities to capture people's emotions at scale and use them to inform business decisions.


Adapting to values-based buying

Businesses are experiencing a significant move from rational buying behavior towards values-based buying, where customers gravitate to brands they can relate to and build a relationship with.

Gwyneth Paltrow shared her ideas on building a successful organization where your culture is your business plan. Culture, however, is an expanding term: what happens inside your organization now also influences your customers.

The definition of a customer is also experiencing a significant shift. Today, many companies are so eager to learn how to predict their customers that what is called the "Keanu Reeves role" has emerged, meaning that if you do not have an online personality you will be assigned one. Many are working so hard at designing online personas and optimizing towards them that the real person, the real customer they want to engage with, gets lost.

Emotions as KPIs

Whether it is in your internal culture, or in your communication with your customers, understanding what feelings and emotions arise can be the key to understanding the intrinsic motivation and intention behind certain actions.

Jared Feldman, CEO of Canvs AI, remarks that there are 42 main feelings that humans express on social media, and that the company's goal is to make it possible to capture, identify, and measure them.

What does this lead to? Brands can (and should) create content based on the set of emotions they want to trigger, ultimately allowing us to leave CPM* behind as a KPI for communication. Imagine defining your next marketing campaign, or a certain interaction point in your customer journey, as: 47% love, 30% passion, 13% goosebumps and 10% crazy.

* Cost per thousand impressions (technically, “Cost Per Mille”)
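Measured this way, an emotion mix becomes something you can track against a target, much like any other KPI. Below is a minimal sketch of that idea, assuming posts have already been tagged with an emotion label by some analytics service; the labels, target shares and function names are illustrative, not Canvs AI's actual taxonomy or API.

```python
from collections import Counter

# Hypothetical emotion labels already assigned to social posts about a campaign,
# e.g. by a third-party emotion-analytics service. Labels and targets are made up.
tagged_posts = ["love", "passion", "love", "goosebumps", "crazy", "love", "passion"]

# The emotion mix the campaign was designed to trigger (shares sum to 1.0).
target_mix = {"love": 0.47, "passion": 0.30, "goosebumps": 0.13, "crazy": 0.10}

def emotion_mix(labels):
    """Share of each emotion among all tagged posts."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {emotion: count / total for emotion, count in counts.items()}

def mix_gap(actual, target):
    """Gap per emotion between the achieved mix and the intended mix."""
    emotions = set(actual) | set(target)
    return {e: round(actual.get(e, 0.0) - target.get(e, 0.0), 3) for e in emotions}

actual = emotion_mix(tagged_posts)
print(actual)                    # e.g. {'love': 0.43, 'passion': 0.29, ...}
print(mix_gap(actual, target_mix))
```

Tracked over time, the gap per emotion tells you whether a campaign is triggering the feelings it was designed for, in the same way a CPM report would track reach.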


Image: clearchanneloutdoor.com

Clear Channel analyzed the mood in the Stockholm metro, digitally and through sensors, and responded with a matching digital art display.


The human body becomes the source of data

Consumers are not only using text to share their deepest thoughts about your business, but also their bodies. The signals our bodies give off in certain environments and situations can tell an incredibly rich story, and with new technologies that can retrieve these data points at scale, we are opening the door to understanding and interpreting people as never before.

By listening to the tone of our voice instead of what we are saying, or by measuring the size of our iris, companies are gaining a deeper understanding of the relationship between how our bodies react and the emotions we feel.

In China, we are already seeing supermarkets with cameras that interpret customers' behavior in-store. Soon we will be able to connect the shopper's emotional state to their consumption behavior, and of course present something perfect for just that occasion.

Our spaces will know more about us than we do and we will have a dynamic relationship with spaces where we work, train, heal and live

Poppy Crum, Chief Scientist at Dolby Laboratories and an Adjunct Professor at Stanford University

Experiential data 

Once we explore the feelings and emotions behind our customers’ actions, generating so-called experiential data points, we will want to relate these to more traditional business metrics.

Each experiential data point is likely to be connectable to at least one operational data point, making it possible to drill down from our financial targets to the feelings that drive a specific measure, and consequently to how we need to adjust our marketing, content, interactions or communication towards our customers.
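As a minimal sketch of what that drill-down could look like, assume an experiential table (the dominant emotion measured per journey touchpoint) and an operational table (a traditional metric per touchpoint) that share a key. All column names and figures below are illustrative, not a prescribed data model.

```python
import pandas as pd

# Hypothetical experiential data: the dominant emotion measured at each
# customer-journey touchpoint (column names are illustrative).
experiential = pd.DataFrame({
    "touchpoint": ["checkout", "support_chat", "onboarding"],
    "dominant_emotion": ["frustration", "relief", "excitement"],
    "emotion_intensity": [0.8, 0.6, 0.9],
})

# Hypothetical operational data: a traditional business metric per touchpoint.
operational = pd.DataFrame({
    "touchpoint": ["checkout", "support_chat", "onboarding"],
    "conversion_rate": [0.21, 0.55, 0.47],
})

# Join the two views on the shared touchpoint key, so a financial target
# (here: conversion) can be traced to the feeling measured at that step.
combined = experiential.merge(operational, on="touchpoint")

# Rank touchpoints by conversion to see which emotions accompany weak metrics.
print(combined.sort_values("conversion_rate"))
```

The point is not the tooling but the join itself: once a feeling and a business metric share a key, you can ask which emotion sits behind the number you are trying to move.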

The challenge here lies in whether your organization can be bold enough to become truly customer centric.

Building an empathetic enterprise

If we can now understand the true feelings of customers, we can also create a much more empathetic relationship with the actual person, not the digital avatar. Interacting with your customer where they are emotionally, at any given point, and building a more empathetic and values-based relationship was something various speakers presented as a great opportunity.

This exemplifies that empathy can become a point of differentiation for products, services, recruitment, innovation and revenue creation. Ultimately, it is something that makes your brand follow customers in their journey to more values-based buying decisions.

It will not take long before we see product tags with “Made in China” being changed to “Made with empathy”.


Enterprise Empathy

Image: volvocars.com

Volvo is introducing in-car cameras that read the driver's eyes and face to prevent falling asleep behind the wheel or driving under the influence.


The starting point 

To get to the point where you as an organization can interact with this new level of personal data, it is imperative to create a data-driven workplace. Representatives from Tableau presented their view of a data-driven organization as one that prioritizes data over intuition in decision making, and where developing a coherent governance structure is necessary for developing other core competencies within the organization. Whether you want to use more real-time data in your models or start using experiential data, the competencies to do so will evolve from a clearly defined governance structure.

The notion of being a data-driven organization has existed for a while, but in a survey by New Vantage Partners among senior C-level executives, 69% report that they have yet to become data driven and 72% still do not have a "data culture". What is reportedly missing in these cases is not the hardware, the software or even the data: in 93% of cases, the obstacle is identifying the right people and the right processes. Focusing on this, there is huge potential to unlock the value of better-informed interactions with your customers.



SXSW is one of the biggest digital conferences in the world, and a global meeting place for the world's most innovative technology companies and people interested in how disruption can transform their business and everyday lives. The event takes place over 10 days each year, and this year Cartina had the chance to be part of it.

This series consists of 6 global mega trends that business leaders, experts, innovators and disruptors talked about during the days in Austin.


May 9, 2019

Humane AI – Trend #4 of 6 from SXSW 2019


In 2018...

We were talking about machine rights and when machines will surpass humans.

In 2019...

We are exploring ways to help create AI with good intent and impact, and the move towards multiple versions of the same product.


The biggest issue in AI today 

We are seeing a huge change in the dialogue about the bias issue when it comes to the development and scaling of AI and ML technologies. It is in fact now a public dialogue. Only two years ago this was a concern mostly debated by researchers, but now, having opened the door (even if only very slightly) to the negative consequences that might evolve from these systems, it has become a more widespread concern.

In discussing this issue, there are two main aspects that are being considered when talking about “the biggest issue in AI today”: How do we ensure we have good intent and how do we end up with a positive impact when developing new technologies?

These questions were discussed in many settings during the conference, but with one clear takeaway: we need to change our mindset from innovating new AI systems at all costs to innovating for the benefit of people. The technology is mature enough for us to use it properly.

Douglas Rushkoff, author of the book Team Human, expressed it in a blunt fashion: “Instead of creating technologies for people to use, we created technologies that use people.”

When I think about responsive machines it is not enough to remove the bias, we have to tell the machines why this bias is wrong. If we really aspire to build good robots we want that system to understand why something is wrong.

Aleksandra Przegalinska, Assistant Professor at Kozminski University and Research Fellow at MIT Sloan School of Management

DSaaS - Data Sets as a Service

One inherent issue in the bias discussion lies in the fact that a data point is nothing more than something that someone thought was worth capturing. In this sense, all data contains some kind of subjectivity and will thereby contain bias.

What we see now is that large data sets that are publicly available for free, such as the emails from the Enron scandal, are being used to train models. If we want to create a model that communicates like a white male in his 40s with somewhat clouded judgement, that could be a good source of data. More likely, it will cause these algorithms to be biased and ultimately lead to morally questionable outcomes and harmful decisions (think, for example, of Amazon's experiment with a recruitment tool that was found not to be gender-neutral).

Herein lies a challenge for which we will likely see (and would like to see) many new service offerings – Data Sets as a Service. Tiffany C. Li, a technology attorney and legal scholar, suggested a potential solution in allowing some sort of licensing or copyright law for data sets provided as a product or service, with the ultimate goal of improving their quality. Or is the right way to go to introduce auditing parties that, as objective third parties, can check both data sets and models for potential unwanted bias?
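Whoever does the auditing, a first pass can be fairly mundane: check how a sensitive attribute is represented in the training data and whether the labels skew per group. The sketch below illustrates only that first step, on made-up records; the fields, labels and helper functions are hypothetical, and a real audit would go far beyond simple counts.

```python
from collections import Counter

# Hypothetical training examples for a recruitment model; the records and the
# "gender" field are illustrative, not any real data set.
training_examples = [
    {"text": "10 years of engineering experience", "gender": "male", "label": "hire"},
    {"text": "led a team of five analysts", "gender": "female", "label": "hire"},
    {"text": "captain of the chess club", "gender": "male", "label": "hire"},
    {"text": "organised a coding bootcamp", "gender": "female", "label": "no_hire"},
]

def representation(examples, attribute):
    """Share of each value of a sensitive attribute in the data set."""
    counts = Counter(e[attribute] for e in examples)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def positive_rate(examples, attribute, positive_label="hire"):
    """Share of positive labels per attribute value - a crude proxy for label skew."""
    groups = {}
    for e in examples:
        groups.setdefault(e[attribute], []).append(e["label"] == positive_label)
    return {value: sum(flags) / len(flags) for value, flags in groups.items()}

print(representation(training_examples, "gender"))  # how balanced is the data?
print(positive_rate(training_examples, "gender"))   # do the labels skew per group?
```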

What is Humane?

Granted that we find a way to successfully address the bias issue, the next question to ask, if our goal is to develop "Humane AI", is: what is humane? In our interaction with these types of systems, we are looking for two aspects: they need to be helpful and appropriate.

We have come quite far in making AI helpful. It can guide us through the streets, it can recognize patterns and cats in pictures, and it can help us predict deviations in production systems (and of course many things in between), but it is not nearly as good at knowing when to do so.

If an AI system is something we are going to interact with, it needs to know whether I really want it to correct me when I tell my kids that you can get cramps if you go swimming right after a meal, or whether it should "let it slip". A lot of this is conveyed by the tone of our voice, by our facial expressions and gestures, but this input is not something we have figured out how to interpret at scale.

What we will see is a great deal of experimentation on this and the success will all boil down to how much trust we have in these systems. The more we are willing to share in terms of input, the more accurate the output will become.


What is appropriate? 

Image: 9to5mac.com

Thinking about when we want AI to intervene was a question for many speakers. How do we get it to behave differently when I’m in my car alone versus when I’m with my kids?


Beyond personalization

One intriguing topic touched on by many speakers was that we are moving towards a time where products and services, today personalized in a simple way, become personalized for every version of their user. Because ultimately, what might make a system feel humane is that it changes with you.

As much as we ourselves are wrong in everyday decisions and comments, we need to allow our AI to be wrong as well. Not only does it create a more dynamic interaction but it will also be what helps your system get to know you.

Compared to the discussion around automation and autonomous systems, where much of the conversation centered on how and where we should build collaboration between humans and machines, most speakers took a more speculative approach when AI was the explicit topic, reasoning about how to find this next level of understanding in these more advanced systems. Finding this level and adjusting the systems accordingly will be a huge thing moving forward.



SXSW is one of the biggest digital conferences in the world, and a global meeting place for the world's most innovative technology companies and people interested in how disruption can transform their business and everyday lives. The event takes place over 10 days each year, and this year Cartina had the chance to be part of it.

This series consists of 6 global mega trends that business leaders, experts, innovators and disruptors talked about during the days in Austin.


May 9, 2019

The rules for automation – Trend #3 of 6 from SXSW 2019


In 2018...

We talked about data integrity and how we need to protect ourselves from big players using or selling our personal data.

In 2019...

We were talking about data for automation, discussing how best to approach automation in a way that fosters human value and robot-to-human collaboration.


Automating our own value 

The fear of automation greatly disrupting labor markets has become increasingly tangible, and was something many speakers chose to approach in various ways – some with concern, some with the sense that "we are all being promoted". From the more politically engaged speakers, a case was made that we currently define ourselves by the economic value we create, and that we are now making human labor less and less essential to the economy.

Google's Chief Decision Scientist Cassie Kozyrkov firmly disagreed, arguing that it would be foolish and meaningless to create machines and artificial intelligence that compete with human skills, human values and human needs. Instead, we need to find how technology can complement us, so that we can do what we do best and technology can do the same. What we need is not more competition; what we need is more tools to leverage our human skills.

The chatbot of our dreams 

As the use of automation tools becomes more widespread, people start to adjust to it. For example, if you use a feature that "optimizes" the time at which your weekly newsletter emails are sent, they might just arrive in your recipients' mailboxes at the same time as every other impersonal email. Instantly, they might get categorized as unimportant and moved to the trash folder.

For some information or experiences to reach all the way through, we might want to amplify the human touch. In this example, pressing the send-button yourself at an irregular time can increase the chances of your email being read. In some cases, it may be the opposite.

One study presented by Aleksandra Przegalinska, a philosopher and researcher at MIT, found that a simple text bot with no human resemblance provokes almost no emotions in the human it is interacting with. However, one with clear human traits (think Sophia the robot) evoked a lot of emotions. But they were negative ones; people felt much more unease interacting with the more human chatbot.

What does this tell us? That we want to carefully select what we decide to automate and consider what we want to alleviate in the interaction with customers or users.


Alex - The robotic news reporter

Image: bbc.com

Russian news channel Rossiya 24 has created a robot – Alex – that reads some of its news bulletins.
The question is: which emotions does he create for the viewers?


The state of play

As users of, or friends to, automated systems, it seems we want to know when we are interacting with one. The separation between human-to-human contact and human-to-machine contact makes the experience different, and the experience also differs with the context or mood we are in.

A representative from Slack shared their approach to finding out in what situations their users enjoy interacting with machines. It turns out it is when we are in a so-called state of play. When someone adds their 23rd reaction to a certain message in Slack, you can be rather sure that they are not busy doing important work – this has proven to be a great point at which to introduce a machine that initiates contact with the user.

Identifying situations where users are more receptive and open to machine interactions is an important part of developing a good system design.

Slack interacting with its users in a state of play
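The Slack example implies a simple receptivity heuristic: if a user has fired off many reactions in a short window, treat them as being in a state of play and safe to approach with an automated prompt. The sketch below is only an illustration of that kind of rule, under assumed thresholds and a made-up event stream; it is not Slack's actual logic.

```python
from datetime import datetime, timedelta

# Hypothetical stream of emoji-reaction timestamps for one user.
reaction_times = [
    datetime(2019, 5, 22, 14, 0, 5),
    datetime(2019, 5, 22, 14, 0, 40),
    datetime(2019, 5, 22, 14, 1, 10),
    datetime(2019, 5, 22, 14, 1, 30),
    datetime(2019, 5, 22, 14, 2, 0),
]

def in_state_of_play(events, now, window=timedelta(minutes=5), threshold=5):
    """Treat the user as playful if they reacted at least `threshold` times
    in the last `window` - a moment when a bot prompt is less intrusive."""
    recent = [t for t in events if now - window <= t <= now]
    return len(recent) >= threshold

now = datetime(2019, 5, 22, 14, 2, 30)
if in_state_of_play(reaction_times, now):
    print("Good moment for the bot to initiate contact.")
else:
    print("User seems busy - hold off on automated interaction.")
```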

The human-machine collaboration

Whether it is in the context of autonomous vehicles, chatbots or any other automated system, we are facing some tricky but important challenges to create a system that is helpful.

One is the need to consider how these systems learn and adapt over time. They are in many ways adaptive to their environments, just like humans, and if your company decides to employ a chatbot in customer service, you need to think not only about the technology but also about how to ensure that it stays true to corporate values and retains some integrity in its interaction with customers and users.

A second is that when building large-scale autonomous systems, human interaction with these systems is likely to make them deviate from optimal functioning. Increasing numbers of autonomous cars are being introduced on the roads, and one issue is how they are to collaborate with human drivers. This will evolve beyond our roads and into our organizations as well, which is why we need to evaluate both where to take advantage of automation and how to design this interaction.

If our employees and/or customers are interacting with a system, there are multiple dimensions that affect the outcome, positively and negatively. Being aware of this when structuring our systems, we should consider whether we want a fully automated process or one with human touch points, as this greatly affects the optimal design. A better design will increase the level of trust in a system, and ultimately, with more trust, we can be comfortable releasing more data to it and subsequently improving its ability.


According to Aleksandra Przegalinska there are three important dimensions in building a trustful collaboration between a human and a robot:

  • Transparency -- Honesty: The agent is what it is and does not pretend to be something else. It does not deny its status.
  • Predictability -- Integrity: A factor associated with credibility, concerning the trustor's expectation that an object of trust will act consistently, in line with past experiences. If the user perceives the chatbot as predictable, this may lead to a feeling of trust in the chatbot.
  • Control -- Benevolence: The degree to which the motivations and intents of the trustee are in line with those of the trustor.


SXSW is one of the biggest digital conferences in the world, and a global meeting place for the world's most innovative technology companies and people interested in how disruption can transform their business and everyday lives. The event takes place over 10 days each year, and this year Cartina had the chance to be part of it.

This series consists of 6 global mega trends that business leaders, experts, innovators and disruptors talked about during the days in Austin.


May 3, 2019

Damaged Trust – Trend #1 of 6 from SXSW 2019

In 2019 we were talking a lot about trust, both on an individual level as consumers and in terms of how companies can use it as a differentiator.


July 31, 2018

Blurring realities – Trend #8 of 8 from SXSW

Nonny - the Godmother of VR/AR

Through technologies such as virtual reality, we are able to experience real situations in a virtual world. By using this technology, we can convey stronger and more realistic stories. Nonny de la Peña, founder and CEO of Emblematic Group, is called the Godmother of VR and AR.

Emblematic Group is a digital media company focused on immersive virtual, mixed and augmented reality. Since 2004, de la Peña has experimented with different kinds of virtual reality and is one of the greatest contributors to the genre of immersive journalism.

You don't experience the world as flat, you experience the world as volume – why shouldn't media be that way?

- Nonny de la Peña, Founder & CEO of Emblematic Group

VR as fictional storytelling conveying important messages

De la Peña has chosen to focus on certain events that she finds important to communicate to the public. The effects of global warming, a real-life beating, the war in Syria and discrimination against certain groups are some examples. De la Peña and her team create stories with the help of real audio recordings, interviews with witnesses or people involved, and photographs stitched together into 360-degree video.

Nonny de la Peña at her seminar at SXSW, 2018

Real life beating...

One example of her projects is the recreation of a real-life beating, using real audio recordings from the event, in which a refugee was beaten to death by a group of policemen. The virtual reality was built with the help of people who saw the event and could communicate how it was, how it felt and what actually happened – creating a realistic scenario of the horrifying incident.

...global warming...

Besides telling stories like this one, VR can also be used to convey a certain message. One example is a virtual world where you sit in a helicopter flying over Greenland. You get to see the glaciers, and a 360-degree video lets you look out the window and down, so you really feel like you are flying. Time lapses of the glacier retreat are a powerful way of showing the severe effects of global warming.

--


DID YOU KNOW?

Immersive journalism can be described as the production of news in a form where people can gain first-person experiences of the event or situation described in the news. The fundamental idea is to allow the participant, typically represented as a digital avatar, to actually enter a virtually recreated scenario.

--


...discrimination...

Another example is a virtual situation created to communicate the serious conditions of homelessness in the LGBTQ community. A young man, Daniel, has been thrown out by his family as a result of coming out about his sexuality. The scene puts the audience in the middle of a moment when he feels physically vulnerable and is surrounded by people who hate him because of his sexuality. It is a very strong scene that makes the participant connect to Daniel, due to the threatening surroundings and his inability to defend himself.

...and going to prison

Nonny de la Peña has also created a virtual reality world of a prison, where you can meet a former prisoner and be in the cell with him. You can see his physical state and experience his emotional condition. The digital sense of presence is powerful, and it is built on real-life components, real stories from victims and interviews with the people involved.

Our minds do not know the difference between VR and real life 

Conveying stories through VR technology raises certain questions regarding ethics and how these impressions affect people's minds. De la Peña argues that the ethical issues are much the same as in other journalism, and that we therefore need critical thinking and to teach everyone to look at the source. Some events can, however, be both unpleasant and scary – and under certain circumstances the mind cannot tell whether it is a fictive event or a real one.

--


SXSW is one of the biggest digital conferences in the world, and a global meeting place for the world's most innovative technology companies and people interested in how disruption can transform their business and everyday lives. The event takes place over 10 days each year, and this year Cartina had the chance to be part of it.

This series consists of 8 global mega trends that business leaders, experts, innovators and disruptors talked about during the days in Austin. If you want to read the full report, click the button above and we will email it to you.



Cartina has since 2013 helped both multinationals and startups translate digital opportunities into lasting and profitable business. We have mainly worked with management services since the start, but are now expanding our offering with tech & design.

With a desire to develop ourselves, our clients and our colleagues, our team of senior digital experts takes pride in delivering sustainable solutions that matter for our clients and society.
Cartina is founded and owned by the investment firm Acacia Asset Management AB together with partners in the firm.


Contact

Cartina
Hamngatan 15, SE-111 47
Stockholm, Sweden
Tel: +46 (0)8 703 25 10
info@cartina.se