Is your product’s AI annoying people?

Artificial intelligence is allowing us all to consider surprising new ways to simplify the lives of our customers. As a product developer, your central focus is always on the customer. But new problems can arise when the specific solution under development helps one customer while alienating others.

We tend to think of AI as an incredible dream assistant for our lives and business operations, but that’s not always the case. Designers of new AI services should consider in what ways, and for whom, these services might be annoying, burdensome or problematic, whether for the direct customer or for others intertwined with that customer. When an AI service makes a task easier for our customer but ends up making things harder for the people around them, that outcome can do real harm to our brand perception.

Let’s consider one personal example taken from my own use of Amy.ai, a service from x.ai that provides AI assistants named Amy and Andrew Ingram to help schedule meetings for up to four people. The service solves the very relatable problem of scheduling meetings over email, at least for the person doing the scheduling.

After all, who doesn’t want a personal assistant to whom you can simply say, “Amy, please find a time next week to meet with Tom, Mary, Anushya and Shiveesh.” That way, you don’t have to arrange a meeting room, send the email, and go back and forth managing everyone’s replies. My own experience showed that while it was easier for me to use Amy to find a good time to meet with my four colleagues, it quickly became a headache for those four people, who came to resent me after being bombarded by countless emails trying to find a mutually agreeable time and place for everyone involved.
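
The irony is that the computation a scheduler like Amy automates is the easy part: finding a common opening is a simple interval-intersection problem. The sketch below is illustrative only, not x.ai’s implementation; the attendee names and busy calendars are hypothetical, and a real assistant works over email threads and live calendar APIs rather than hard-coded data.

```python
from datetime import datetime, timedelta

# Hypothetical busy calendars for the four attendees, as (start, end) pairs.
BUSY = {
    "Tom":      [(datetime(2019, 11, 25, 9),  datetime(2019, 11, 25, 12))],
    "Mary":     [(datetime(2019, 11, 25, 13), datetime(2019, 11, 25, 15))],
    "Anushya":  [(datetime(2019, 11, 25, 9),  datetime(2019, 11, 25, 10))],
    "Shiveesh": [(datetime(2019, 11, 25, 16), datetime(2019, 11, 25, 17))],
}

def first_common_slot(day_start, day_end, duration):
    """Scan the workday in 30-minute steps and return the first
    window that overlaps no attendee's busy intervals."""
    slot = day_start
    while slot + duration <= day_end:
        end = slot + duration
        conflict = any(
            slot < busy_end and end > busy_start
            for intervals in BUSY.values()
            for busy_start, busy_end in intervals
        )
        if not conflict:
            return slot
        slot += timedelta(minutes=30)
    return None  # no shared opening; a human now has to negotiate

print(first_common_slot(datetime(2019, 11, 25, 9),
                        datetime(2019, 11, 25, 17),
                        timedelta(hours=1)))  # -> 2019-11-25 12:00:00
```

The friction Amy created for my colleagues didn’t come from this calculation; it came from gathering those busy intervals by email from four people who never opted in to the tool.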

Automotive designers are another group that’s incorporating all kinds of new AI systems to enhance the driving experience. For instance, Tesla recently updated its autopilot software to allow a car to change lanes automatically when it sees fit, presumably when the system interprets that the next lane’s traffic is going faster.

In concept, the idea seems advantageous: the driver can make a safe entrance into faster traffic while being relieved of the cognitive burden of changing lanes manually. Letting the Tesla system change lanes also takes away the urge to play Speed Racer or edge toward the competitiveness one can feel on the highway.

However, drivers in other lanes who are forced to react to the Tesla autopilot may be annoyed if the Tesla jerks, slows down or behaves outside the normal realm of what people expect on the freeway. The annoyance is worse if the autopilot fails to recognize how fast the target lane is moving before it commits to the change. We can all relate to driving 75 mph in the fast lane, only to have someone suddenly pull in front of us at 70 as if they were clueless that the lane was moving at 75.

For two-lane highways that are not busy, the Tesla software might work reasonably well. However, in my experience driving around the congested freeways of the Bay Area, the system performed horribly whenever it changed lanes in crowded traffic, and I knew it was angering other drivers most of the time. Even without knowing those irate drivers personally, I care enough about driving etiquette to change lanes politely rather than getting the finger for doing so.


Another example from the internet world involves Google Duplex, a clever feature for Android phone users that allows AI to make restaurant reservations. From the consumer point of view, having an automated system to make a dinner reservation on one’s behalf sounds excellent. It is advantageous to the person making the reservation because, theoretically, it will save the burden of calling when the restaurant is open and the hassle of dealing with busy signals and callbacks.

However, this tool is also potentially problematic for the restaurant worker who answers the phone. Even though the system may introduce itself as artificial, the burden shifts to the restaurant employee to adapt and master a new and more limited interaction to achieve the same goal — making a simple reservation.

On the one hand, Duplex is bringing customers to the restaurant, but on the other hand, the system is narrowing the scope of interaction between the restaurant and its customer. The restaurant may have other tables on different days, or it may be able to squeeze you in if you leave early, but the system might not handle exceptions like this. Even the idea of an AI bot bothering the host who answers the phone doesn’t seem quite right.

As you think about making the lives of your customers easier, consider how the assistance you are dreaming about might be more of a nightmare for everyone else associated with your primary customer. If there is any question about the negative experience of anyone touched by your AI product, explore that experience further to determine whether there is a better way to delight your customer without angering their neighbors.

From a user-experience perspective, developing a customer journey map can be a helpful way to explore the actions, thoughts and emotional experiences of your primary customer or “buyer persona.” Identify the touchpoints at which your system interacts with innocent bystanders who are not your direct customers. For those people, who may be entirely unaware of your product, map out their interactions with your buyer persona, paying particular attention to their emotional experience.

An aspirational goal should be to delight this adjacent group of people enough that they move toward becoming prospects and, eventually, customers as well. You can also use participant ethnography to analyze the innocent bystander in relation to your product: a research method that combines participating in a process with observing how people interact with that process and the product.

A guiding design inspiration for this research could be, “How can our AI system behave in such a way that everyone who might come into contact with our product is enchanted and wants to know more?”

That’s just human intelligence, and it’s not artificial.




Spotify’s podcasting app Anchor now helps you make trailers

Spotify’s simple podcasting suite, Anchor, is today introducing a new feature designed to help creators promote their podcast: trailers. On the Anchor app for iOS and Android, podcasters will now be able to create a dedicated trailer for their podcast that combines an introduction and some background music, then turns it into an animated video that can be shared across social media and the wider web.

The trailer will also be made available within the podcast’s RSS feed, where it’s marked with the “trailer” episode type.
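
For the technically curious, trailers surface in the feed through the standard episode-type tag that podcast directories read. A minimal sketch of spotting one in a feed follows; the feed URL is a hypothetical placeholder, and the namespace is the standard itunes podcast namespace.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical feed URL; point this at any podcast RSS feed.
FEED_URL = "https://anchor.fm/s/example/podcast/rss"
ITUNES = {"itunes": "http://www.itunes.com/dtds/podcast-1.0.dtd"}

with urllib.request.urlopen(FEED_URL) as resp:
    root = ET.parse(resp).getroot()

# Episodes default to the "full" type when the tag is absent.
for item in root.iter("item"):
    ep_type = item.findtext("itunes:episodeType", default="full",
                            namespaces=ITUNES)
    if ep_type.strip().lower() == "trailer":
        print("Trailer:", item.findtext("title"))
```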

Anchor had already offered a way for users to mark episodes of their podcast as a trailer within the app, but the new feature makes it simpler to create a trailer through a more integrated experience.

For example, when you push the button to record, you have one minute to introduce your podcast — and a warning will flash when that minute is about to be up. When you’re satisfied with the recording, you can then browse through Anchor’s free library of background music, which is organized by mood — adventurous, calm, dramatic, cheerful, energetic, funky, chill and so on. Or you can opt to go without music, if you prefer.

And if you already have a voice recording saved elsewhere, you can import it into Anchor to use as your trailer.

There are other options today for creating podcast trailers, such as Wavve, Headliner and Audiogram. But Anchor’s goal is to be the one-stop shop for everything a new podcaster needs to get started, and that includes promotional tools like this.

However, many professional podcasters still view Anchor as a sort of entry-level product and turn to more advanced audio editing suites to craft their shows. Over time, though, these extra, handy features could help Anchor earn a place in podcasters’ workflows, even if it’s not their end-to-end solution.

Podcasting has become an important vertical for Anchor’s parent company Spotify, which led to it acquiring both Anchor and Gimlet earlier this year for $340 million. And its investments in podcasts, which have also included the acquisition of podcast network Parcast, have been starting to pay off.

The company reported in July its podcast audience had doubled in size since last year. In October, it said the number of podcast listeners on its service grew 40% from the prior quarter, and it now had 500,000 titles hosted on its platform.

Spotify can monetize podcasts in two ways, as with music — through ads and by pushing people into premium subscriptions. It now has 113 million paying customers and 248 million monthly actives. And once Spotify’s users are subscribed to a number of podcast shows, they’re more likely to stay with the service. In addition, podcasts don’t come with the licensing costs associated with record label deals, which Spotify also surely likes.

Anchor’s new trailers feature is live now on both iOS and Android.

 


Ghost wants to retrofit your car so it can drive itself on highways in 2020

A new autonomous vehicle company is on the streets — and unbeknownst to most, has been since 2017. Unlike the majority in this burgeoning industry, this new entrant isn’t trying to launch a robotaxi service or sell a self-driving system to suppliers and automakers. It’s not aiming for autonomous delivery, either.

Ghost Locomotion, which emerged Thursday from stealth with $63.7 million in investment from Keith Rabois at Founders Fund, Vinod Khosla at Khosla Ventures and Mike Speiser at Sutter Hill Ventures, is targeting your vehicle.

Ghost is developing a kit that will allow privately owned passenger vehicles to drive autonomously on highways. And the company says it will deliver in 2020. A price has not been set, but the company says it will be less than what Tesla charges for its Autopilot package that includes “full self-driving” or FSD. FSD currently costs $7,000.

This kit isn’t meant to give a vehicle a superior advanced driver assistance system. It will let human drivers hand control of their vehicle over to a computer, freeing them to do other things, such as look at their phone or even doze off.

The idea might sound similar to what Comma.ai is working on, what Tesla hopes to achieve or even the early business model of Cruise. But Ghost CEO and co-founder John Hayes says what they’re doing is different.

A different approach

The biggest players in the industry — companies like Waymo, Cruise, Zoox and Argo AI — are trying to solve a really hard problem, which is driving in urban areas, Hayes told TechCrunch in a recent interview.

“It didn’t seem like anyone was actually trying to solve driving on the highways,” said Hayes, who previously founded Pure Storage in 2009. “At the time, we were told that this is so easy that surely the automakers will solve this any day now. And that really hasn’t happened.”

Hayes noted that automakers have continued to make progress in advanced driver assistance systems. The more advanced versions of these systems provide what the SAE describes as Level 2 automation, meaning two primary control functions are automated. Tesla’s Autopilot system is a good example: when engaged, it automatically steers and has traffic-aware cruise control, which maintains the car’s speed in relation to surrounding traffic. But as with all Level 2 systems, the driver is still in the loop.

Ghost wants to take the human out of the loop when they’re driving on highways.

“We’re taking, in some ways, a classic startup attitude to this, which is ‘what is the simplest product that we can perfect, that will put self driving in the hands of ordinary consumers?’ ” Hayes said. “And so we take people’s existing cars and we make them self-driving cars.”

The kit

Ghost is tackling that challenge with software and hardware.

The kit involves hardware like sensors and a computer that is installed in the trunk and connected to the controller area network (CAN) of the vehicle. The CAN bus is essentially the nervous system of the car and allows various parts to communicate with each other.

Vehicles must have a CAN bus and electronic steering to be able to use the kit.
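
As a rough illustration of what “connected to the CAN” means in practice, here is a minimal, passive sketch using the python-can library on a Linux SocketCAN interface. The arbitration ID is hypothetical; frame IDs and payload layouts differ per make and model, which is part of why a retrofit kit has to be adapted vehicle by vehicle.

```python
import can  # pip install python-can

# Open a passive connection to a Linux SocketCAN interface.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

STEERING_FRAME_ID = 0x25  # hypothetical arbitration ID

for msg in bus:  # a Bus is iterable and yields received frames
    if msg.arbitration_id == STEERING_FRAME_ID:
        # Decoding the payload requires the manufacturer's signal
        # definitions (e.g. a DBC file); here we just dump raw bytes.
        print("steering frame:", msg.data.hex())
```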

The camera sensors are distributed throughout the vehicle. One set is integrated into what looks like a license plate holder at the back of the vehicle, and another set is embedded behind the rearview mirror.

A third device with cameras is attached to the frame around the door’s window.

Initially, this kit will be an aftermarket product; the company is starting with the 20 most popular car brands and will expand from there.

Ghost intends to set up retail spaces where a car owner can see the product and have it installed. But eventually, Hayes said, he believes the kit will become part of the vehicle itself, much like GPS or satellite radio has evolved.

While hardware is the most visible piece of Ghost, the company’s 75 employees have dedicated much of their time to the driving algorithm. It’s here, Hayes says, that Ghost stands apart.

How Ghost is building a driver

Ghost is not testing its self-driving system on public roads, unlike nearly every other AV company. There are 63 companies in California that have received permits from the Department of Motor Vehicles to test autonomous vehicle technology on public roads, always with a human safety driver behind the wheel.

Ghost’s entire approach is based on the axiom that the human driver is fundamentally correct. It begins by collecting massive amounts of video data from kits installed on the cars of high-mileage drivers. Ghost then uses models to figure out what’s going on in each scene and combines that with other data, including how the person is driving, measured by the actions they take.

It doesn’t take long, or much data, to model ordinary driving: actions like staying in a lane, braking and changing lanes on a highway. But that doesn’t “solve” self-driving on highways, because the hard part is building a driver that can handle odd occurrences, such as another car swerving, and correct for that bad behavior.

Ghost’s system uses machine learning to find more interesting scenarios in the reams of data it collects and builds training models based on them.
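
Ghost has not published its pipeline, but a toy version of outlier-based scenario mining on a single speed trace might look like the sketch below. The sampling rate, threshold and synthetic data are illustrative only; a real pipeline would also use lateral motion, video and far richer models.

```python
import numpy as np

def flag_interesting(speed_mps, hz=10, z_thresh=4.0):
    """Toy scenario miner: flag moments where longitudinal
    acceleration is a statistical outlier (e.g. hard braking)."""
    accel = np.gradient(speed_mps) * hz               # m/s^2
    z = (accel - accel.mean()) / (accel.std() + 1e-9)  # z-scores
    return np.flatnonzero(np.abs(z) > z_thresh) / hz   # seconds into log

# Synthetic trace: steady 30 m/s cruise with one hard brake to 22 m/s.
trace = np.full(600, 30.0)
trace[300:320] = np.linspace(30.0, 22.0, 20)
trace[320:] = 22.0
print(flag_interesting(trace))  # flags the braking event around t = 30 s
```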

The company’s kits are already installed on the cars of high-mileage drivers, such as Uber and Lyft drivers and commuters. Ghost has recruited dozens of drivers and plans to have its kits in hundreds of cars by the end of the year. By next year, Hayes says, the kits will be in thousands of cars, all for the purpose of collecting data.

The background of the executive team, including co-founder and CTO Volkmar Uhlig, as well as the rest of their employees, provides some hints as to how they’re approaching the software and its integration with hardware.

Employees are data scientists and engineers, not roboticists. A dive into their resumes on LinkedIn shows that not one comes from another autonomous vehicle company, which is unusual in this era of talent poaching.

For instance, Uhlig, who started his career at IBM Watson Research, co-founded Adello and was the architect behind the company’s programmatic media trading platform. Before that, he built Teza Technologies, a high-frequency trading platform. While earning his PhD in computer science he was part of a team that architected the L4 Pistachio microkernel, which is commercially deployed in more than 3 billion mobile Apple and Android devices.

If Ghost is able to validate its system — which Hayes says is baked into its entire approach — privately owned self-driving cars could be on the highways by next year. While the National Highway Traffic Safety Administration could potentially step in, Ghost’s approach, like Tesla’s, hits a sweet spot of non-regulation. It’s a space, Hayes notes, where the government has not yet chosen to regulate.


Google will now pay up to $1.5 million for very specific Android exploits

When Google first introduced its bug bounty program for Android, the biggest reward you could get for finding and reporting a potential exploit was $38,000.

The cap grew over time, as Android grew in popularity, more security researchers got on board and more vulnerabilities were unearthed. This morning, Google is bumping its top reward up to $1.5 million.

They’re not going to pay out a million+ for just any bug, of course.

For this new reward category, Google is looking for a “full chain remote code execution exploit with persistence which compromises the Titan M secure element on Pixel devices.” In other words, they’re looking for an exploit that, without the attacker having physical access to the device, can execute code that persists even after the device is reset and that compromises the dedicated security chip built into the Pixels.

Reporting an exploit that fits that bill will get researchers up to $1 million. If they can do it on “specific developer preview versions” of Android, there’s a 50% bonus, bumping the maximum prize up to $1.5 million.

Google first introduced the Titan M security chip with the Pixel 3. As Google outlines here, the chip’s job is essentially to supervise; it double-checks boot conditions, verifies firmware signatures, handles lock screen passcodes and tries to keep malicious apps from forcing your device to roll back to “older, potentially vulnerable” builds of Android. The same chip can be found in the Pixel 4 lineup.
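
Titan M’s actual firmware and key management are not public, but the two checks described above, signature verification and rollback refusal, can be sketched conceptually. Everything below is illustrative: the function, key handling and version scheme are assumptions, not Google’s implementation.

```python
# Conceptual sketch of a verified-boot gate; requires the
# 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def allow_boot(image: bytes, signature: bytes, image_version: int,
               vendor_pubkey: bytes, min_allowed_version: int) -> bool:
    # 1. The firmware image must carry a valid vendor signature.
    pub = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        pub.verify(signature, image)
    except InvalidSignature:
        return False
    # 2. Anti-rollback: refuse any image older than the stored floor,
    #    so an attacker can't downgrade to a vulnerable build.
    return image_version >= min_allowed_version
```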

Indeed, $1.5 million for a single exploit sounds like a lot… and it is. It’s roughly what Google paid out for all bug bounties in the last 12 months. The top reward this year, the company says, was $161,337 for a “1-click remote code execution exploit chain on the Pixel 3 device.” The average payout, meanwhile, was about $3,800 per finding. Given the potential severity of persistently busting through the security chip on what’s meant to be the flagship form of Android, though, a wild payout makes sense.
