Adrienne Felt

Leading the push for a more secure Internet.

The next time you open up Google’s Chrome Web browser, take a look at the little lock icon that appears at the left end of the URL bar whenever you’re on a secure website. When it’s green, it signals that the website you’re on is encrypting data as it flows between you and the site. But not everyone knows what it is or what it represents, and that’s where Adrienne Felt comes in.

As a software engineer for Chrome, Felt has taken on the task of making the Internet more secure and helping users of the world’s most popular browser make smart, informed choices about their safety and privacy while online. This includes heading a years-long push to convince the world’s websites, which traditionally used the unencrypted HTTP to send data from one point to another, to switch to the secure version, HTTPS.

Why is it tricky to come up with online security measures that work for all kinds of people?

Part of it is that security measures generally stop people from doing things. The way we keep you safe is by telling you no. But this has very real costs. You can scare people … you can keep people from using the Internet at all. On the other hand, if you don’t do anything you put people and their data at very real risk. So you have to figure out how to strike just the right balance. And with multiple billion users it’s very difficult to find a balance that makes everyone happy.

One way you are trying to make people safer while they’re online is by encouraging websites to use HTTPS. What makes this a complicated process?

Think about a site like the Washington Post. When you go to the Washington Post’s home page, there’s going to be 100 different [assets from various websites] that are loaded. All of those have to support HTTPS before the Washington Post itself can do it. Sites need to make sure there’s no revenue hit, they need to make sure there’s no [search] ranking hit, they need to make sure there’s no performance hit. And then they can switch. All these things can be done. Sites are transitioning very successfully at scale now. But it is work.
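The point about assets is the crux of the migration: a page served over HTTPS that still pulls images or scripts over plain HTTP triggers mixed-content warnings. Below is a minimal sketch of the kind of check a site might run before switching; the tags scanned and the example URLs are illustrative, not a real audit tool.

```python
# Scan an HTML page for subresources still referenced over plain
# http:// -- these would break a switch to HTTPS as mixed content.
from html.parser import HTMLParser

class InsecureAssetFinder(HTMLParser):
    # Attributes that trigger subresource loads on common tags.
    ASSET_ATTRS = {"src", "href"}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in self.ASSET_ATTRS and value and value.startswith("http://"):
                self.insecure.append((tag, value))

def find_insecure_assets(html):
    finder = InsecureAssetFinder()
    finder.feed(html)
    return finder.insecure

page = """
<img src="http://cdn.example.com/logo.png">
<script src="https://cdn.example.com/app.js"></script>
<link rel="stylesheet" href="http://ads.example.com/style.css">
"""
print(find_insecure_assets(page))
```

A real audit would also follow scripts that inject further resources at runtime, which is part of why the transition is work even for well-run sites.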

Now that many of the biggest websites have made the switch from HTTP to HTTPS, what are you focusing on?

The long tail is a big problem. There are lots and lots of sites that are out there. Some that are barely maintained, some that are run by your dentist, your hairdresser, a teacher at a local elementary school, and I don’t see them rushing to add support for HTTPS. The question is now, “Okay, we’ve hit all the really popular sites, we’re starting to get to the medium sites—what do we do for the rest of the Internet?” I don’t want to get in a state where oh, great, you’re secure if you go to a big company but not if you go to a small, independent site. Because I still want people to feel like they can go everywhere on the Web. 

Jenna Wiens

Her computational models identify patients who are most at risk of a deadly infection.

A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived.

Among the most lethal of these is Clostridium difficile. The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.

Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.

“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.

Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.

“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
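The “throw everything in” idea can be sketched as a model that scores a patient over arbitrarily many features rather than a curated handful. The feature names and weights below are invented for illustration; they are not from Wiens’s actual model.

```python
# Toy risk scorer: a logistic function over however many binary
# features the records supply, each with a learned weight.
import math

def risk_score(patient_features, weights, bias=0.0):
    """Return a score in [0, 1] from arbitrarily many binary features."""
    z = bias + sum(weights.get(f, 0.0) for f in patient_features)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights such a model might learn from one hospital's data.
weights = {
    "recent_antibiotics": 1.2,
    "proton_pump_inhibitor": 0.6,
    "age_over_65": 0.8,
    "prior_cdiff_infection": 2.0,
    "icu_stay": 0.9,
}

patient = {"recent_antibiotics", "age_over_65", "icu_stay"}
print(round(risk_score(patient, weights, bias=-3.0), 3))  # ~0.475
```

Because the scorer never hard-codes which features exist, the same code adapts to whatever a given hospital’s records contain, which matches the adaptability Wiens describes.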

Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile—the infections come on fast so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.

At a time when health-care costs are rising exponentially, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.

“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”

Franziska Roesner

Preparing for the security and privacy threats that augmented reality will bring.

What would hacks of augmented reality look like? Imagine a see-through AR display on your car helping you navigate—now imagine a hacker adding images of virtual dogs or pedestrians in the street.

Franzi Roesner, 31, recognized this challenge early on and is leading the thinking into what security and privacy provisions AR devices will need to protect them, and ourselves. Her research group at the University of Washington created a prototype AR platform that can, for example, block a windshield app from hiding any signs or people in the real world while a car is in motion.

“I’ve been asking the question, ‘What could a buggy or malicious application do?’” she says.

Lorenz Meier

An open-source autopilot for drones.

Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.

So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.

Greg Brockman

Trying to make sure that AI benefits humanity.

Human-like artificial intelligence is still a long way off, but Greg Brockman believes the time to start thinking about its safety is now. That’s why, after helping to build the online-payments firm Stripe, he cofounded OpenAI along with Elon Musk and others. The nonprofit research group focuses on making sure AI continues to benefit humanity even as it increases in sophistication. Brockman plays many roles at the firm, from recruiting to helping researchers test new learning algorithms. In the long term, he says, a general AI system will need something akin to a sense of shame to prevent it from misbehaving. “It’s going to be the most important technology that humans ever create,” he says, “so getting that right seems pretty important.”

Volodymyr Mnih

The first system to play Atari games as well as a human can.

Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Mnih’s system was the first to combine reinforcement learning, which learns through trial and error, with deep learning, which mirrors the way the human brain processes information by learning from examples. His software learned to play the games much as a human would, through playful trial and error, using the game score as the measurement by which to hone and perfect its technique for each game.
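The trial-and-error loop driven by a score can be shown without the deep network. This is a minimal *tabular* Q-learning sketch on a toy one-dimensional game; Mnih’s system replaced the lookup table with a deep network reading raw pixels, which this deliberately omits.

```python
# Tabular Q-learning on a toy "game": walk right along 5 positions to
# score a point. The agent learns purely from trial and error, using
# the score as its only feedback.
import random

random.seed(0)
N_STATES = 5          # positions 0..4; reaching 4 scores +1 and ends the game
ACTIONS = [-1, +1]    # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally, otherwise exploit what has been learned.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0  # the "score"
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The same loop, with a convolutional network estimating Q-values from screen images, is what let one algorithm master dozens of different games.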

Joshua Browder

Using chatbots to help people avoid legal fees.

Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.

“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.”

Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.

Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
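Mechanically, this kind of “robot lawyer” is a template filled in from a short questionnaire. The letter text and field names below are invented for illustration; they are not DoNotPay’s actual wording.

```python
# Generate a contest letter from the few facts a chatbot would collect.
def contest_letter(state, ticket_date, ticket_number, reason):
    return (
        f"To the {state} Parking Adjudication Office:\n\n"
        f"I am writing to contest ticket {ticket_number}, issued on "
        f"{ticket_date}. {reason} I respectfully request that the "
        f"charges be dropped.\n"
    )

letter = contest_letter(
    state="New York",
    ticket_date="2017-06-12",
    ticket_number="NY-48211",
    reason="The posted signage at the location was obscured.",
)
print(letter)
```

The simplicity is the point: for routine filings, the hard part is knowing which facts matter, not drafting the prose.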

In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.

Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.

But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.

Neha Narkhede

Helping companies make sense of all the data.

The business world is drowning in data, but Neha Narkhede is teaching companies to swim. As an engineer at LinkedIn, Narkhede helped invent an open-source software platform called Apache Kafka to quickly process the site’s torrent of incoming data from things like user clicks and profile updates. Sensing a big opportunity, she cofounded Confluent, a startup that builds Apache Kafka tools for companies, in 2014. She’s been the driving force behind the platform’s wide adoption—Goldman Sachs uses it to help deliver information to traders in real time, Netflix to collect data for its video recommendations, and Uber to analyze data for its surge-pricing system. Confluent’s products allow companies to use the platform to, for example, sync information across multiple data centers and monitor activity through a central console.
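Kafka’s core abstraction is an append-only log: producers append events like clicks and profile updates, and each consumer reads from its own offset, so many systems can process the same stream independently. This is a minimal in-memory sketch of that model only; Kafka itself is distributed, partitioned, and persistent.

```python
# An append-only event log with offset-based reads, the abstraction
# at the heart of Kafka (greatly simplified).
class Log:
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def read(self, offset):
        """Return events at and after `offset`, plus the next offset."""
        return self.events[offset:], len(self.events)

log = Log()
log.append({"type": "click", "user": "alice"})
log.append({"type": "profile_update", "user": "bob"})

# Two independent consumers can each track their own position.
batch, next_offset = log.read(0)
print(len(batch), next_offset)
```

Because the log never mutates past events, a recommendations system and a fraud detector can replay the same stream at different speeds without coordinating with each other.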

“We view our technology as a central nervous system for companies that aggregates data and makes sense of it within milliseconds, at scale,” she says. “We think virtually every company would benefit from that and we plan to bring it to them.”

Kathy Gong

Developing new models for entrepreneurship in China.

Kathy Gong became a chess master at 13, and four years later she boarded a plane with a one-way ticket to New York City to attend Columbia University. She knew little English at the time but learned as she studied, and after graduation she returned to China, where she soon became a standout among a rising class of fearless young technology entrepreneurs. Gong has launched a series of companies in different industries. One is Law.ai, a machine-learning company that created both a robotic divorce lawyer called Lily and a robotic visa and immigration lawyer called Mike. Now Gong and her team have founded a new company called Wafa Games that’s aiming to test the Middle East market, which Gong says most other game companies are ignoring.

Austin Russell

Better sensors for safer automated driving.

Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.”

Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
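The three-second figure follows from simple arithmetic. Assuming detection range roughly doubles from about 100 m to about 200 m (illustrative numbers; the article gives only the factor of two), the extra warning time at 70 mph works out to about three seconds:

```python
# Extra warning time gained from extra detection range at highway speed.
MPH_TO_MS = 1609.344 / 3600          # miles per hour -> meters per second

speed = 70 * MPH_TO_MS               # ~31.3 m/s
extra_range_m = 200 - 100            # assumed doubling of detection range
extra_warning_s = extra_range_m / speed
print(round(extra_warning_s, 1))     # ~3.2 seconds
```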