
AI photo editor FaceApp goes viral again on iOS, raises questions about photo library access

Raghav Jain


FaceApp has gone viral again, roughly two years after its first wave. The effect has gotten better in the interim, but apps like this, as with many one-off viral hits, tend to come and go in waves driven by influencer networks or paid promotion. We first covered this AI photo editor, built by a team of Russian developers, about two years ago.

This time, the surge is driven by a feature that edits a person’s face to make it appear older or younger. You may remember that at one point the app drew fire for enabling what amounted to digital blackface with a filter that changed a person from one ethnicity to another.

In this current wave of virality, some new questions are floating around about FaceApp. The first is whether it uploads your camera roll in the background. We found no evidence of this, and neither did security researcher and Guardian app CEO Will Strafach nor researcher Baptiste Robert.

The second is how it allows you to pick photos without ever being granted access to your photo library.

While the app does indeed let you pick a single photo without being granted access to your photo library, this is fully allowed by an Apple API introduced in iOS 11. It lets a developer present a system dialog from which the user picks one single photo for the app to work on, without granting any broader access.
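As a rough illustration of how simple this is on the developer’s side, here is a minimal Swift sketch using UIImagePickerController, the standard system picker. Since iOS 11 the picker runs out of process, so the app itself needs no photo library permission; the editing handoff at the end is a hypothetical step.

```swift
import UIKit

// Minimal sketch: the system photo picker runs out of process (iOS 11+),
// so the app never needs photo library permission. It receives only the
// single image the user explicitly taps.
class PickerViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func pickSinglePhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // The app is handed just this one image, nothing else in the library.
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        _ = image // hypothetical: pass it along to the app's editing pipeline
    }
}
```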


Because the user has to tap on one photo, this provides something Apple holds dear: user intent. You have explicitly tapped it, so it’s OK to send that one photo. This behavior is actually a net good in my opinion. It allows you to give an app one photo instead of your entire library. It can’t see any of your photos until you tap one. This is far better than committing your entire library to a jokey meme app.

Unfortunately, there is still some cognitive dissonance here, because Apple allows an app to call this API even if a user has set the Photo Access setting to Never in Settings. In my opinion, if you have it set to Never, you should have to change that before any photo can enter the app from your library, no matter what inconvenience that causes. Never is not a default; it is an explicit choice, and that standing user intent should overrule the one-off user intent of the new photo picker.
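To make the mismatch concrete, here is a minimal sketch using Apple’s Photos framework: an app can read the library permission, see that it has been explicitly denied, and still present the single-photo picker, because the picker never consults this setting.

```swift
import Photos

// .denied is the status when the user has set Photos access to "Never".
// Nothing stops the app from presenting the system picker anyway.
func photoAccessExplicitlyDenied() -> Bool {
    return PHPhotoLibrary.authorizationStatus() == .denied
}
```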

I believe Apple should find a way to rectify this in the future, either by making the behavior clearer or by disallowing the picker when people have explicitly opted out of sharing photos with an app.


One good idea: an equivalent of the “only once” location option being added in the upcoming iOS 13 might be appropriate here.

One thing that FaceApp does do, however, is upload your photo to the cloud for processing. It does not process images on device the way Apple’s first-party app does, and the way Apple enables third-party apps to do through its ML libraries and routines. This is not made clear to the user.

I have asked FaceApp why they don’t alert the user that the photo is processed in the cloud. I’ve also asked them whether they retain the photos.

Given how many screenshots people take of sensitive information like banking details, photo access is a bigger security risk than ever these days. With a scraper and optical character recognition tech, you could automatically turn up a huge amount of info that goes way beyond “photos of people.”
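To give a sense of how low that bar is, here is a hedged sketch using Apple’s Vision framework (VNRecognizeTextRequest, available from iOS 13): a handful of lines is enough to lift the text out of any screenshot an app holds.

```swift
import Vision

// Sketch: run on-device OCR over an image and collect the recognized lines.
func extractText(from cgImage: CGImage, completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Take the top candidate string for each detected text region.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```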

So, overall, I think it is important that we think carefully about the safeguards that protect our photo archives, and about the motives and methods of the apps we give access to them.



Spotify’s podcasting app Anchor now helps you make trailers

Raghav Jain


Spotify’s simple podcasting suite, Anchor, is today introducing a new feature designed to help creators promote their podcast: trailers. On the Anchor app for iOS and Android, podcasters will now be able to create a dedicated trailer for their podcast that combines an introduction and some background music, then turns it into an animated video that can be shared across social media and the wider web.

The trailer will also be made available within the podcast’s RSS feed, where it’s marked with the “trailer” episode type.
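For the curious, the “trailer” episode type is part of Apple’s podcast RSS extensions; the relevant markup looks roughly like this (fields abbreviated, values hypothetical):

```xml
<item>
  <title>My Podcast: Trailer</title>
  <itunes:episodeType>trailer</itunes:episodeType>
  <enclosure url="https://example.com/trailer.mp3" type="audio/mpeg" length="960000"/>
</item>
```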

Anchor had already offered a way for users to mark episodes of their podcast as a trailer within the app, but the new feature makes it simpler to create a trailer through a more integrated experience.

For example, when you push the button to record, you have one minute to introduce your podcast, and a warning will flash when that minute is about to be up. When you’re satisfied with the recording, you can then browse through Anchor’s free library of background music, which is organized by mood: adventurous, calm, dramatic, cheerful, energetic, funky, chill and so on. Or you can opt to go without music, if you prefer.

And if you already have a voice recording saved elsewhere, you can import it into Anchor to use as your trailer.

There are other options today for creating podcast trailers, such as those from services like Wavve, Headliner and Audiogram. But Anchor’s goal is to be the one-stop shop for everything a new podcaster needs to get started, and that includes promotional tools like this.

However, many professional podcasters still view Anchor as a sort of entry-level product and turn to more advanced audio editing suites to craft their shows. Over time, though, these extra, handy features could help Anchor earn a place in podcasters’ workflows, even if it’s not their end-to-end solution.

Podcasting has become an important vertical for Anchor’s parent company Spotify, which led to it acquiring both Anchor and Gimlet earlier this year for $340 million. And its investments in podcasts, which have also included the acquisition of podcast network Parcast, have been starting to pay off.

The company reported in July its podcast audience had doubled in size since last year. In October, it said the number of podcast listeners on its service grew 40% from the prior quarter, and it now had 500,000 titles hosted on its platform.

Spotify can monetize podcasts in two ways, as with music — through ads and by pushing people into premium subscriptions. It now has 113 million paying customers and 248 million monthly actives. And once Spotify’s users are subscribed to a number of podcast shows, they’re more likely to stay with the service. In addition, podcasts don’t come with the licensing costs associated with record label deals, which Spotify also surely likes.

Anchor’s new trailers feature is live now on both iOS and Android.

 



Ghost wants to retrofit your car so it can drive itself on highways in 2020

Raghav Jain


A new autonomous vehicle company is on the streets — and unbeknownst to most, has been since 2017. Unlike the majority in this burgeoning industry, this new entrant isn’t trying to launch a robotaxi service or sell a self-driving system to suppliers and automakers. It’s not aiming for autonomous delivery, either.

Ghost Locomotion, which emerged Thursday from stealth with $63.7 million in investment from Keith Rabois at Founders Fund, Vinod Khosla at Khosla Ventures and Mike Speiser at Sutter Hill Ventures, is targeting your vehicle.

Ghost is developing a kit that will allow privately owned passenger vehicles to drive autonomously on highways. And the company says it will deliver in 2020. A price has not been set, but the company says it will be less than what Tesla charges for its Autopilot package that includes “full self-driving” or FSD. FSD currently costs $7,000.

This kit isn’t going to give a vehicle a superior advanced driver assistance system. It will let human drivers hand control of their vehicle over to a computer, allowing them to do other things, such as looking at their phone or even dozing off.

The idea might sound similar to what Comma.ai is working on, what Tesla hopes to achieve, or even the early business model of Cruise. But Ghost CEO and co-founder John Hayes says what they’re doing is different.

A different approach

The biggest players in the industry — companies like Waymo, Cruise, Zoox and Argo AI — are trying to solve a really hard problem, which is driving in urban areas, Hayes told TechCrunch in a recent interview.

“It didn’t seem like anyone was actually trying to solve driving on the highways,” said Hayes, who previously founded Pure Storage in 2009. “At the time, we were told that this is so easy that surely the automakers will solve this any day now. And that really hasn’t happened.”

Hayes noted that automakers have continued to make progress in advanced driver assistance systems. The more advanced versions of these systems provide what the SAE describes as Level 2 automation, which means two primary control functions are automated. Tesla’s Autopilot system is a good example of this; when engaged, it automatically steers and has traffic-aware cruise control, which maintains the car’s speed in relation to surrounding traffic. But like all Level 2 systems, the driver is still in the loop.

Ghost wants to take the human out of the loop when they’re driving on highways.

“We’re taking, in some ways, a classic startup attitude to this, which is ‘what is the simplest product that we can perfect, that will put self driving in the hands of ordinary consumers?’ ” Hayes said. “And so we take people’s existing cars and we make them self-driving cars.”

The kit

Ghost is tackling that challenge with software and hardware.

The kit involves hardware like sensors and a computer that is installed in the trunk and connected to the controller area network (CAN) of the vehicle. The CAN bus is essentially the nervous system of the car and allows various parts to communicate with each other.

Vehicles must have a CAN bus and electronic steering to be able to use the kit.
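For a sense of what “connected to the CAN bus” means in practice, here is an illustrative Swift sketch, not Ghost’s protocol: a classic CAN frame carries an 11-bit message identifier and up to eight data bytes, and an add-on computer on the bus reads and writes frames of this shape. The steering message ID and payload below are invented for illustration.

```swift
// Illustrative only: the shape of a classic CAN 2.0A frame.
struct CANFrame {
    let id: UInt16      // 11-bit arbitration ID, 0x000...0x7FF
    let data: [UInt8]   // 0 to 8 payload bytes

    var isValid: Bool {
        return id <= 0x7FF && data.count <= 8
    }
}

// Hypothetical steering-angle command; real IDs and scaling vary by vehicle.
let steeringCommand = CANFrame(id: 0x2E4, data: [0x01, 0x5A])
```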

The camera sensors are distributed throughout the vehicle. One set of cameras is integrated into what looks like a license plate holder at the back of the vehicle; another set is embedded behind the rearview mirror.

A third device with cameras is attached to the frame around the window of the door.

Initially, this kit will be an aftermarket product; the company is starting with the 20 most popular car brands and will expand from there.

Ghost intends to set up retail spaces where a car owner can see the product and have it installed. But eventually, Hayes said, he believes the kit will become part of the vehicle itself, much like GPS or satellite radio has evolved.

While hardware is the most visible piece of Ghost, the company’s 75 employees have dedicated much of their time to the driving algorithm. It’s here, Hayes says, where Ghost stands apart.

How Ghost is building a driver

Ghost is not testing its self-driving system on public roads, unlike nearly every other AV company. There are 63 companies in California that have received permits from the Department of Motor Vehicles to test autonomous vehicle technology (always with a human safety driver behind the wheel) on public roads.

Ghost’s entire approach is based on an axiom that the human driver is fundamentally correct. It begins by collecting massive amounts of video data from kits installed on the cars of high-mileage drivers. Ghost then uses models to figure out what’s going on in each scene and combines that with other data, including how the person is driving, measured by the actions they take.

It doesn’t take long, or much data, to model ordinary driving: actions like staying in a lane, braking and changing lanes on a highway. But that doesn’t “solve” self-driving on highways, because the hard part is building a driver that can handle odd occurrences, such as swerving vehicles, and correct for those bad behaviors.

Ghost’s system uses machine learning to find more interesting scenarios in the reams of data it collects and builds training models based on them.
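A hedged sketch of that selection idea, not Ghost’s actual code: if the human is axiomatically correct, the interesting moments are the ones where the human’s action deviates sharply from what a model of ordinary driving would predict.

```swift
// Illustrative: flag log samples where the human's steering diverges from
// the ordinary-driving model's prediction; those become training material.
struct DrivingSample {
    let predictedSteering: Double // what the ordinary-driving model expects
    let humanSteering: Double     // what the (assumed-correct) human did
}

func interestingSamples(in log: [DrivingSample],
                        threshold: Double = 0.2) -> [DrivingSample] {
    return log.filter { abs($0.humanSteering - $0.predictedSteering) > threshold }
}
```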

The company’s kits are already installed on the cars of high-mileage drivers, such as those who drive for Uber and Lyft, as well as commuters. Ghost has recruited dozens of drivers and plans to have its kits in hundreds of cars by the end of the year. By next year, Hayes says, the kits will be in thousands of cars, all for the purpose of collecting data.

The background of the executive team, including co-founder and CTO Volkmar Uhlig, as well as the rest of their employees, provides some hints as to how they’re approaching the software and its integration with hardware.

Its employees are data scientists and engineers, not roboticists. A dive into their resumes on LinkedIn shows that not one comes from another autonomous vehicle company, which is unusual in this era of talent poaching.

For instance, Uhlig, who started his career at IBM Watson Research, co-founded Adello and was the architect behind the company’s programmatic media trading platform. Before that, he built Teza Technologies, a high-frequency trading platform. While earning his PhD in computer science, he was part of a team that architected the L4 Pistachio microkernel, which is commercially deployed in more than 3 billion mobile Apple and Android devices.

If Ghost is able to validate its system, which Hayes says is baked into its entire approach, privately owned self-driving cars could be on the highways by next year. While the National Highway Traffic Safety Administration could potentially step in, Ghost’s approach, like Tesla’s, hits a sweet spot of non-regulation. It’s a space, Hayes notes, that the government has not yet chosen to regulate.



Google will now pay up to $1.5 million for very specific Android exploits

Raghav Jain


When Google first introduced its bug bounty program for Android, the biggest reward you could get for finding and reporting a potential exploit was $38,000.

The cap grew over time, as Android grew in popularity, more security researchers got on board and more vulnerabilities were unearthed. This morning, Google is bumping up its top reward to $1.5 million.

They’re not going to pay out a million+ for just any bug, of course.

For this new reward category, Google is looking for a “full chain remote code execution exploit with persistence which compromises the Titan M secure element on Pixel devices.” In other words, it wants an exploit that, without the attacker having physical access to the device, can execute code that persists even after the device is reset, and that breaks into the dedicated security chip built into the Pixels.

Reporting an exploit that fits that bill will get researchers up to $1 million. If they can do it on “specific developer preview versions” of Android, there’s a 50% bonus, bumping the maximum prize up to $1.5 million.

Google first introduced the Titan M security chip with the Pixel 3. As Google outlines, the chip’s job is essentially to supervise: it double-checks boot conditions, verifies firmware signatures, handles lock screen passcodes and tries to keep malicious apps from forcing your device to roll back to “older, potentially vulnerable” builds of Android. The same chip can be found in the Pixel 4 lineup.
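The rollback protection it describes is a well-known technique: the secure element keeps a monotonic minimum-version counter and refuses anything older. An illustrative sketch of the idea, not Google’s implementation:

```swift
// Illustrative anti-rollback logic: reject unsigned firmware or anything
// older than the highest version the chip has already accepted.
struct SecureElementState {
    private(set) var minAllowedVersion: UInt32 = 0

    mutating func verifyBoot(firmwareVersion: UInt32, signatureValid: Bool) -> Bool {
        guard signatureValid, firmwareVersion >= minAllowedVersion else {
            return false
        }
        // Ratchet forward so older builds can't boot later.
        minAllowedVersion = max(minAllowedVersion, firmwareVersion)
        return true
    }
}
```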

Indeed, $1.5 million for a single exploit sounds like a lot… and it is. It’s roughly what Google paid out for all bug bounties in the last 12 months. The top reward this year, the company says, was $161,337 for a “1-click remote code execution exploit chain on the Pixel 3 device.” The average payout, meanwhile, was about $3,800 per finding. Given the potential severity of persistently busting through the security chip on what’s meant to be the flagship form of Android, though, a wild payout makes sense.
