Janet Yellen Will Consider Limiting the Use of Cryptocurrency

During her confirmation hearing, the Treasury nominee said that blockchain-based financial networks are “a particular concern.”

Cryptocurrencies could come under renewed regulatory scrutiny over the next four years if Janet Yellen, Joe Biden's pick to lead the Treasury Department, gets her way. During Yellen's confirmation hearing on Tuesday before the Senate Finance Committee, Senator Maggie Hassan (D-New Hampshire) asked Yellen about the use of cryptocurrency by terrorists and other criminals. "Cryptocurrencies are a particular concern," Yellen responded. "I think many are used—at least in a transactions sense—mainly for illicit financing." She said she wanted to "examine ways in which we can curtail their use and make sure that [money laundering] doesn't occur through those channels."

Blockchain-based financial networks are attractive to criminals because they do not require users to identify themselves—as the law requires most conventional financial networks to do. Because no individual or organization controls these networks, there's no easy way for governments to force them to comply with money-laundering laws. So instead of trying to force the networks themselves to comply, regulators in the US—and many other jurisdictions—have focused on regulating bitcoin exchanges that help users trade between dollars and cryptocurrencies. Once a bitcoin exchange identifies who initially received a particular bitcoin payment, law enforcement can often trace subsequent payments through a blockchain network's open payment ledger.

In December, Trump's outgoing team at the Financial Crimes Enforcement Network—a unit of the Treasury Department focused on money laundering—proposed a new set of rules to tighten the screws on cryptocurrency-based money laundering. Under the new rules, cryptocurrency exchanges would need to file transaction reports with FinCEN any time a customer made a cryptocurrency transaction worth more than $10,000.
This would mirror existing rules requiring conventional banks to report when customers make cash withdrawals or deposits worth more than $10,000.

Even more controversial in the cryptocurrency world, FinCEN wants to impose new record-keeping requirements for transactions involving users who manage their own private keys—dubbed "unhosted wallets" by FinCEN. Under FinCEN's proposal, if a cryptocurrency exchange's customer sends more than $3,000 to an unhosted wallet, the exchange would be required to keep a record of the transaction, including the identity of the customer who initiated the payment.

These new rules didn't take effect before Trump left office, so the incoming Biden team will need to decide what to do with them. The Biden administration could sign off on the existing rules, rewrite them, or scrap them altogether. Yellen's comments on Tuesday suggest that she is unlikely to scrap the rules. If anything, the Treasury Department is likely to consider additional regulations of the blockchain economy over the next four years.

This story originally appeared on Ars Technica.
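For illustration only, the two proposed thresholds described above can be sketched as a simple rule check. This is a hypothetical Python sketch of the article's description, not legal guidance or FinCEN's actual logic; the function name and structure are invented.

```python
def fincen_obligations(amount_usd, to_unhosted_wallet):
    """Sketch of the proposed FinCEN thresholds as described in the article:
    transactions over $10,000 would trigger a transaction report, and
    transfers over $3,000 to a self-hosted ("unhosted") wallet would
    require the exchange to record the transaction and the customer's
    identity. Illustrative only."""
    obligations = []
    if amount_usd > 10_000:
        obligations.append("file transaction report with FinCEN")
    if to_unhosted_wallet and amount_usd > 3_000:
        obligations.append("record transaction and customer identity")
    return obligations

# A $12,500 payment to an unhosted wallet would exceed both thresholds.
print(fincen_obligations(12_500, to_unhosted_wallet=True))
```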

The Art and Science of Boarding an Airplane in a Pandemic

Researchers and airlines that obsessed over efficiency have spent the past year worrying about safety too.

Jason Steffen studies planets in other solar systems. His most famous work—OK, second-most famous work—was with NASA’s Kepler Mission, a survey of planetary systems. But you’re more likely to have heard of Steffen, a professor at the University of Nevada at Las Vegas, in a very different context: as a student of the airplane boarding process. Years ago, after waiting in yet another line on a jam-packed jetway, the physicist thought to himself, “There has to be a better way than this.”

Airlines are invested in boarding times—and to a lesser extent, offboarding—because time equals money. Flying people around the world is a low-margin business, and the faster you can get a flight loaded, into the air, and then emptied on the ground, the faster you can get the next round of paying customers into the air.

In 2008, Steffen published a paper detailing his way, which has become known as the Steffen method. Forget the point-counters in business class. Forget the smug airline-branded credit card wielders with priority boarding. Forget even the first-class passengers—the complimentary champagne can wait. The fastest way to board an airplane, he concluded, is to allow many people to do many boarding tasks at once. Start with the person in the window seat in the last row on the right side. The person in the third-to-last window seat goes next, allowing time to swing items into the overhead bin. Then the person in the fifth-to-last window seat, and so on until the right side fills up. Then the left side. Then the same pattern for middle seats. Then the aisle. Yeah, a little complicated. It’s been over a decade, and maybe it won’t surprise you to learn that no airlines have fully gone for the Steffen method.
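The ordering described above can be sketched in a few lines of code. This is a rough Python illustration of the scheme as the article describes it (window seats first, back to front in alternating rows, one side of the aisle at a time, then middles, then aisles); the cabin dimensions and seat labels are hypothetical, not taken from Steffen's paper.

```python
def steffen_order(rows, seats_per_side=3):
    """Generate a boarding order following the Steffen-style scheme
    described above. Returns a list of (row, side, seat) tuples in
    boarding order, where row 1 is the front of the cabin."""
    seat_names = ("window", "middle", "aisle")
    order = []
    # Window seats board first, then middles, then aisles.
    for seat in range(seats_per_side):
        # One side of the aisle fills up before the other starts.
        for side in ("right", "left"):
            # First pass: last row, third-to-last, fifth-to-last, ...
            # Second pass: the rows skipped on the first pass.
            for parity in (0, 1):
                for row in range(rows, 0, -1):
                    if (rows - row) % 2 == parity:
                        order.append((row, side, seat_names[seat]))
    return order

# Example: a hypothetical 6-row cabin. The first passengers to board
# are the window seats in alternating back rows on the right side.
for passenger in steffen_order(6)[:4]:
    print(passenger)
```

The alternating-row spacing is the point of the method: it gives each boarding passenger an empty row behind their neighbor in which to stow luggage, so many people load bins simultaneously instead of queuing in the aisle.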
In fact, there’s a subgenre of global researchers—engineers, physicists, computer scientists, cyberneticists, and economists—who search for better ways to cram crowds onto flying metal tubes. They’ve devised at least 20 methods to get people onto planes. But for many reasons—airline finances, airport infrastructure, technological shortcomings—their research has mostly fallen on deaf ears. In 2013, the Dutch airline KLM experimented with a modified Steffen method boarding process, but the company later said the trial had no “tangible additional benefit.”

Now a global pandemic has done the seemingly impossible: shaken up airplane boarding procedures. Along with requiring masks, providing hand sanitizer, and, in some cases, banning passengers from middle seats, many airlines have created boarding and deboarding processes that try to avoid packing flyers too closely together. Delta, which previously boarded passengers according to ticket classes and mileage club memberships, is loading the airplane back to front, so that flyers don’t pass by others as they make their way to their seats. After preboarding families and passengers that need extra time, United is going back to front too. Even Southwest, famous for letting passengers choose their seats, is only letting 10 passengers on at a time, instead of the usual 30. The process is certainly slower, but Southwest and other airlines have far fewer passengers these days.

Researchers pushing for smarter approaches to getting on airplanes are hoping for more change. Big changes in aviation tend to happen only when people die or get hurt, says Michael Schultz, who studies air transportation at Technische Universität Dresden. The airlines “try to learn what's going wrong, and then they try to improve,” he says. With that in mind, Schultz has been working since last spring with colleagues around the world to identify and simulate the fastest—and safest—way to get people onto and off airplanes right now.
He hopes the pandemic pushes airlines to update their technology, so that they’re able to board passengers dynamically, pushing an alert to a passenger’s smartphone when it is their turn to board. He thinks a connected aircraft cabin filled with sensors could help crews direct flyers through often-hectic deboardings too. “Airlines are dealing with a very precious balancing act,” says Martin Rottler, an aviation veteran who now runs his own consultancy. “They need to balance efficiency with customer satisfaction, and now they need to add on safety.”

Another team of researchers, divided between Bucharest, Romania, and Potsdam, New York, think they’ve hit on the perfect mix. They call it the “WilMA back-to-front offset-2,” and it boards back to front, by rows, with the window seats first. The method might occasionally see a passenger on her way back briefly pass someone already sitting at the window. But it threads the needle, the researchers say, between safety and efficiency.

(Video caption: Using simulations, researchers devised a boarding method that balances airline efficiency with passenger safety during a pandemic. Video: Liviu-Adrian Cotfas/Bucharest University of Economic Studies)

In fact, the boarding process is a little like what lots of airlines are doing now. “They’re just not quite fine-tuning the method” to make it even easier, says John Milne, an engineering management professor at Clarkson University who worked on the research. It’s high time, in other words, for the academic plane-boarding obsessives, not the business people, to be in charge for a change.

Alphabet Is Grounding Loon—but Won't Call It a Failure

Plus: The moonshot’s launch, health care for the maskless, and a new era’s inaugural meme.

Hi, everybody. Do you detect something missing? Like that unease in your gut that everything has gone to hell? Wonder why that is.

Oh, and for those of you who are reading this newsletter on WIRED’s website, here’s some news. First, we’re sending this out early because we have a scoop timed to tonight. And now for some bad news: Aside from an occasional episode shared out of the goodness of my heart, Plaintext will hereafter be available only for subscribers. Don’t look shocked—I’ve been warning you about this for a year! The good news is that you can subscribe for an initial offer of five lousy bucks and get Plaintext, unlimited WIRED online access, and a luscious print magazine that, properly cared for, will last longer than you do. Take that, Substack!

The Plain View

Silicon Valley loves crazy. VCs will tell you that the crazy ideas are the ones that turn their investments into billions. Steve Jobs recited a prose-poem about how they push the world forward. And Alphabet—whose founders were told that it was crazy to capture all of the web to tackle the already-solved problem of search—has a whole division, dubbed X, devoted to nurturing crazy ideas. They call it their Moonshot Factory, but the original Apollo moonshot was fanatical about avoiding failure. Better to call this one a crazy factory.

Even with that mission, one project was so out there that they called it Loon. It involved circling the globe with packs of balloons that would beam internet access to underserved earthlings. Launched in 2013, it confounded skeptics by soaring toward viability. Loon’s technology kept improving, with balloons that stayed aloft longer and sent bits straight to cell phones. In 2018, Loon “graduated” from X and became a division of its own, known as one of Alphabet’s Other Bets.
It got its own CEO and, eventually, some outside funding to augment the many millions of dollars the company had already spent. (Though it won’t say how many millions.) It helped send data to Peruvians after an earthquake and to Puerto Ricans post-hurricane. Last year, in a pilot project in Kenya, the division successfully delivered bandwidth to customers. Loon refused to give Alphabet a reason to kill it.

Until now. Tonight, Alphabet is announcing that it is grounding Loon. Astro Teller, who heads X and was also the chair of the Loon board, recommended that Alphabet no longer fund it, effectively letting the air out of the division’s balloon. “No one wanted to pick up the mantle,” he says.

The interesting thing is how far Loon got before Alphabet pulled the plug. When Teller first heard the idea, he says, he gave it about a 1 or 2 percent chance of succeeding. By the time of its launch in 2013—which I traveled to New Zealand to attend, following some of its first internet-bearing balloons—it had gotten to around 10 percent. By the 2018 graduation, Teller thought it was 50–50. But in the last six months, the odds reset, like some grim-reaper-ish version of the New York Times needle.

Loon had two challenges: the technological leap to deliver internet by balloon, and making the business case that people would pay for it. While the tech side was solving its problems, the commercial environment became less favorable. In the last decade, much of the underserved world became connected—internet availability rose from 75 percent of the world to 93 percent. The remaining areas are primarily populated by people who can’t afford the 4G phones that receive Loon signals, or who aren’t convinced that the internet—which in some cases has little content in their own language—is worth the effort. Teller came to realize that Loon was unlikely ever to contribute to Alphabet’s profits. And so the bet was lost.

Loon does leave a legacy.
Probably no one has ever spent more money and brainpower on balloon technology, and Loon constantly set records for keeping balloons aloft. It broke ground in using sophisticated algorithms, as well as weather data from the US government's National Oceanic and Atmospheric Administration, to figure out how to ride wind currents and navigate the skies at 60,000 feet. Just last month, Loon engineers had a paper in Nature describing how their technology pioneered deep learning techniques to help their balloons autonomously form networks that thrived in a challenging environment. Another Loon breakthrough—sending high-speed data via beams of light (like fiber optics without the fiber)—kicked off a separate X project, Taara.

The fall of Loon is a good occasion to take a look at X’s accomplishments. Last year, the Moonshot Factory celebrated its first decade. In that time, it has pioneered autonomous driving, which is now the basis of the Other Bet called Waymo; another project, Google Brain, now powers much of Google’s technology with deep learning; and Alphabet still has high hopes for X graduates like its medical bet Verily and its drone delivery company, Wing. And still inside X are projects involving robots and food. But it has also populated a boneyard of costly failures, now including Loon.

But Teller won’t call it a failure. Loon, he says, was “a successful experiment.” Considering that he just killed a costly high-profile enterprise, I asked him what an unsuccessful experiment might look like. “Real failure is when the data tells you what you’re doing isn’t the right thing, and you do it anyway.” Loon was a success, he says, because once it was clear that it would never become a viable business, or solve internet connectivity, he called it quits. Crazy? That’s the X way. “We can't get access to these really exceptional opportunities unless we’re willing to be wrong a decent amount of the time,” says Teller. His bosses are cool with that.
He gets regular reviews from Alphabet CEO Sundar Pichai and CFO Ruth Porat, and says both continue to be supportive. How does Teller himself rate the performance of X? “Eight out of 10,” he says.

Still, it’s never fun to end a project. “We wanted Loon to be a beautiful solution to a seemingly unsolvable problem,” says Teller. Now, someone else will have to solve the problem. A few dozen Loon balloons are still aloft. Over the next nine months, remaining Loonatics will painstakingly recover them as they sink into the sunset. X will be on to its next crazy project, and Alphabet will keep the money flowing. “You can't make Loons,” Teller says, “unless you're willing, when they don't work out, to just say, ‘OK, let's start over and do something else.’”

Time Travel

In 2013, I wrote about the launch of Loon, describing how its cofounder Rich DeVaul helped shape the concept. (DeVaul, a leader of the X rapid evaluation team, left the company in 2018 because of a sexual harassment issue.)

Teller gave DeVaul some ideas to kill. One of them was a concept to deliver wireless Internet access via balloons in the stratosphere. CEO Larry Page had often spoken of this, and Teller knew that favorite topics of the cofounders had a leg up in funding decisions. But there was a big problem: Balloons are hostages to wind. If you try to keep a balloon in a fixed location, you must apply Sisyphean efforts to resist that wind. It almost always ends badly. Lockheed Martin recently tried to beat the odds with a giant solar-powered dirigible. But in its maiden test in 2011, Lockheed's High Altitude Long Endurance-Demonstrator prototype failed to reach altitude and was forced to abort, landing in a Pennsylvania forest. Lockheed has no plans for another test. As DeVaul began spreadsheeting the possibilities, he came up with another concept.
Rather than a behemoth that required massive amounts of energy to fight stratospheric winds to stay in place, he found himself drawn to the idea of smaller, cheaper weather balloons that sometimes stay aloft for 40 days or more, circling the globe. "I thought, why not have a bunch of these things, covering a whole area? How crazy would that be?" he says.

Ask Me One Thing

Penny asks, “Do you think people who don’t wear a mask or take other precautions should sign a waiver, voluntarily giving up their option for ER care and just take care of themselves at home if they contract the virus?”

That’s easy, Penny: No! I’m no doctor, but if I were one I would have taken a Hippocratic oath that actually compels me to treat such people, dumb or stubborn as they are. Oath or not, it’s the right thing to do. Don’t get me wrong—we all should be wearing masks outside of our homes. Businesses should refuse to let those with naked faces into their establishments. Socially shun these people! And toss out any craven elected official who encourages otherwise! But when it comes to the emergency room or the clinic, sick people must be treated, regardless of what culpability they may have had in contracting their disease. Penny, when you wear a mask, you are not only protecting yourself but others, including folks you don’t know and maybe don’t even like. You are saying, “We’re all in this together.” Even if some won’t understand the sentiment until their oximeter reading drops to the 80s, let’s hold on to that sentiment. And don’t forget to wash your hands.

You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.

End Times Chronicle

We got rid of the president, and a new era has begun—but all the internet can talk about is Bernie Sanders and his mittens.

Last but Not Least

After “indefinitely” banning Donald Trump, Mark Zuckerberg asked Facebook’s independent Oversight Board to decide whether the former president can return.
Didn’t Trump once say, “It’s not my responsibility!” During Covid, schools have had to make tough choices. One Florida school made some good ones. Here’s how Palantir kept its all-seeing eye on Afghanistan. And here’s the headquarters for China’s plan for global AI domination.

Did you like this newsletter? Don’t say goodbye. Subscribe here and I’ll greet you every Friday with another Plaintext.

Facebook’s Oversight Board to Decide on Trump Ban

Facebook has to decide whether to give the ex-president his bullhorn back. It won’t make that call itself.

In the wake of this month’s violent insurrection at the US Capitol, Facebook suspended the account of the inciter-in-chief, Donald Trump. On January 7, citing the danger that the president’s posts might incite further violence, CEO Mark Zuckerberg said that the company would keep Trump off the platform “indefinitely and for at least the next two weeks until the peaceful transition of power is complete.” (Twitter has banned Trump permanently.) Now that that transition is official, Facebook has a decision to make—whether or not to allow the ex-president his bullhorn. But today, Facebook is announcing that it won’t make that decision itself. Earlier this morning, it asked the Oversight Board it created to make the call on whether Trump is welcome back on Facebook.

The Oversight Board is an independent entity funded by Facebook. Three years in the making, it has yet to issue its first decision—but Facebook is expecting rulings in six cases that the board has been mulling over since it finally began operating this fall. (The Trump decision will be the seventh ruling.) Cases generally will come to the board through user appeals of Facebook’s content decisions, but the bylaws allow Facebook itself to present cases for the board’s consideration. Those cases include the toughest calls Facebook must make, and none will be tougher than this.

A lot rides on the decision. Without Trump on Twitter and Facebook, much of the world is enjoying better blood pressure readings. More seriously, there is less of a chance that Trump can incite the kind of violence that led to his suspension. Meanwhile, Trump supporters—and others simply concerned about Facebook’s power—are crying censorship.
Facebook had the option of asking the board to make an expedited decision for a quick turnaround, but considering the gravity of the case, it chose to allow the board to take its usual 90 days to process it. (Of course, the board could issue its judgment sooner.) One of the four cochairs of the 20-person board will assign the case to a panel, which typically consists of five people (one must be in the North American region), and they will consider whether, as a major political player, Trump will be welcomed back. The former president will have an opportunity to present his side of the argument to the board. Meanwhile, Trump’s indefinite suspension will hold.

Is Facebook passing the buck on this most critical of decisions? Absolutely. In a press release explaining the move, Facebook’s VP of Global Affairs, Nick Clegg, says, “There are decisions that we cannot duck.” Sending them to the board allows Facebook to do just that. But the company says that’s why it started the board in the first place—to provide accountability. Mark Zuckerberg has always said he does not want to be the global arbiter of what people can say or not say on Facebook. But he did make a decision—to send this case to the board he created, in part to take the heat off Facebook. It’s far from certain that this will take Facebook off the hook for whether Donald Trump can use its platform. For now, the much-touted Oversight Board finds itself holding a very hot potato.

This Chinese Lab Is Aiming for Big AI Breakthroughs

China produces as many artificial intelligence researchers as the US, but it lags in key fields like machine learning. The government hopes to make up ground.

In a low-rise building overlooking a busy intersection in Beijing, Ji Rong Wen, a middle-aged scientist with thin-rimmed glasses and a mop of black hair, excitedly describes a project that could advance one of the hottest areas of artificial intelligence. Wen leads a team at the Beijing Academy of Artificial Intelligence (BAAI), a government-sponsored research lab that’s testing a powerful new language algorithm—something similar to GPT-3, a program revealed in June by researchers at OpenAI that digests large amounts of text and can generate remarkably coherent, free-flowing language. “This is a big project,” Wen says with a big grin. “It takes a lot of computing infrastructure and money.”

Wen, a professor at Renmin University in Beijing recruited to work part-time at BAAI, hopes to create an algorithm that is even cleverer than GPT-3. He plans to combine machine learning with databases of facts, and to feed the algorithm images and video as well as text, in hope of creating a richer understanding of the physical world—that the words cat and fur don’t just often appear in the same sentence, but are associated with one another visually. Other top AI labs, including OpenAI, are doing similar work. One thing that drew Wen to BAAI is its impressive computational resources. “The BAAI has received stellar support from the government and has strong data and computing power,” he says.

His language model is one of many BAAI projects aimed at fundamental advances in AI, reflecting a new era for Chinese technology. Despite considerable hype and hand-wringing over China’s technological ascent, the country has so far primarily excelled at taking innovations from elsewhere and deploying them in new ways.
This is particularly evident in AI, an area Chinese leaders consider crucial to their aspirations of becoming a true superpower. Some breakthroughs at BAAI could benefit the government directly. Wen says his language system could serve as an intelligent assistant to help citizens perform civic tasks online, like obtaining a visa, a driver’s license, or a business permit. Instead of spending days filling out paperwork and waiting in line, as is the norm, a clever helper could guide citizens through the red tape. Zhanliang Liu, project lead for the effort and previously an engineer at Baidu, China’s top web search company, says his team has built a prototype for Beijing’s Department of Motor Vehicles. “It is a really tough challenge,” he says.

The government might, of course, benefit in other ways. More sophisticated AI language systems could prove useful for scanning social media for questionable comments or for scouring phone call transcripts. The Chinese state has embraced AI as a tool of governance, including for censorship and surveillance, particularly of Muslims in the western region of Xinjiang. There’s no evidence of BAAI’s work feeding into policing or intelligence, but it is being released openly for anyone to commercialize or apply.

At the same time, officials are wary about the potential for AI to erode the power of the state. Several projects at the institute aim to set guardrails for commercial use of AI, to head off ethical challenges and curb the power of big tech companies. “The Chinese government's trying to get on top of this, to make sure that they're properly in control, and I think that's actually not proving altogether straightforward,” says Nigel Inkster, author of The Great Decoupling, a recent book about the fracturing relationship between China and America.

An Ambitious Plan for AI

The government made its AI ambitions clear in a sweeping plan released in 2017.
It set AI researchers the goal of making “fundamental breakthroughs by 2025” and called for the country to be “the world’s primary innovation center by 2030.” BAAI opened a year later, in Zhongguancun, a neighborhood of Beijing designed to replicate US innovation hubs such as Boston and Silicon Valley. It is home to a few big tech companies modeled on Western successes, like the PC maker Lenovo and the search engine Sogou, as well as countless cheap electronics stores. In recent years, the electronics stores have begun disappearing, and dozens of startups have sprung up, many focused on finding lucrative uses for AI—in manufacturing, robotics, logistics, education, finance, and other fields. BAAI will move into a new building not far from the current offices later this year. The location is both symbolic and practical, within walking distance of China’s two most prestigious universities, Tsinghua and Peking, as well as the Zhongguancun Integrated Circuit Park, opened by the government last year to attract home-grown microchip businesses.

The pandemic has interrupted visits to China. I’ve met some academics working at BAAI before, and talked to others there over Zoom. An administrative assistant gave me a guided tour over WeChat video. Through the tiny screen, I saw engineers and support staff seated in an open-plan office between lush-looking potted plants. Plaques on the wall of the reception area identify the academy’s departments, including Intelligent Information Processing and Face Structured Analysis. A large sign lays out the principles that guide the center: Academic thinking. Basic theory. Top talents. Enterprise innovation. Development policy.

One group at BAAI is exploring the mathematical principles underpinning machine-learning algorithms, an endeavor that may help improve upon them. Another group is focused on drawing insights from neuroscience to build better AI programs.
The most celebrated machine-learning approach today—deep learning—is loosely inspired by the way neurons and synapses in the human brain learn from input. A better understanding of the biological processes behind animal and human cognition could lead to a new generation of smarter machines. A third group at the academy is focused on designing and developing microchips to run AI applications more efficiently.

“Innovation by its very nature is sort of uncertain, and perhaps nowhere more so than in AI.” Noam Yuchtman, London School of Economics

Many BAAI-affiliated researchers are doing cutting-edge work. One works on ways to make deep learning algorithms more efficient and compact. Another studies “neuromorphic” computer chips that could fundamentally change the way computers work by mirroring biological processes.

China boasts some top academic AI talent, but it still has fewer leading experts than the US, Canada, or some European countries. A study of AI research papers by the Paulson Institute released in June found that China and the US produce about the same number of AI researchers each year, but the vast majority of them end up working in the US. The issue has become more urgent of late, after the Trump administration imposed sanctions that capitalize on China’s inability to manufacture the most advanced microchips. The US has most prominently targeted Huawei, which it accuses of funneling data to the government, including for espionage, cutting off its supplies of the chips needed to make high-end smartphones. In 2019, the US broadened Chinese sanctions to ban US firms from doing business with several AI firms, accusing them of supplying technology for state surveillance. President Biden may take a different approach than Trump, but he is unlikely to ignore China’s technological threat.

Tiejun Huang, director of BAAI, speaks carefully, after a long pause to collect and translate his thoughts.
He says the center is modeled on Western institutions that bring together different disciplines to advance AI. Despite difficult US-China relations, he says, it is crucial for the academy to build ties with such institutions. It has sent researchers to visit MILA in Canada and the Turing Institute in the UK, two of the world’s top centers of AI expertise. AI scientists from US institutions including Princeton and UC Berkeley serve on the academy’s advisory committee. The Chinese government is not alone in investing in AI. The US Defense Advanced Research Projects Agency backs research with potential military uses. Yet many in the West are wary of how the Chinese state could use technology to further its interests and values—for example, tying digital technologies to the Belt and Road Initiative, which builds economic and infrastructure links to neighboring countries. With clear ties to the Chinese government, it isn’t hard to see a broader agenda in BAAI’s work. Research at BAAI could perhaps serve as tools of soft power, through technical standards, for example. Some Western students of China see the government’s efforts to define standards as a way to favor domestic companies and to shape perceptions and norms of a technology. Chinese firms have been active in setting technical standards for advanced 5G mobile networks. A research group at BAAI is focused on technical standards for AI, releasing proposed notation for machine-learning articles in July. Some Western researchers say some of what China is doing is not exceptional. Danit Gal, a researcher at Cambridge University’s Leverhulme Center for the Future of Intelligence who specializes in AI ethics and was previously a technology adviser to the UN, was studying at Peking University when the academy opened and has attended several meetings there. She says it is unfair to focus on the controversies when the academy is doing earnest research. 
“What China is doing, you know the surveillance part, is not unique to China,” she says. “I'm from Israel, and Israeli surveillance and borders are powered by Microsoft.” (Microsoft invested in AnyVision, an Israeli company providing facial-recognition software used at West Bank checkpoints, but it said in March 2020 that it would divest its stake.) Huang and others at BAAI say international researchers should engage with the institute as a way to indirectly influence the Chinese government. “The BAAI is a platform to put together people with different answers, different backgrounds, different views, and from different countries so they can talk to each other and know each other,” Huang says. Glenn Tiffert, who focuses on China at the Hoover Institution, says engagement makes sense, but it is important to appreciate the broader context. “I am absolutely not in favor of decoupling,” he says. “They may be honorable people, people of good faith,” he says of the staff and researchers at the academy. “But it’s important to remember there is a commissar behind the curtain.”

Curbing the Private Sector

In the summer of 2019, before the pandemic, I visited a researcher at the Institute of Automation in Beijing who is now a key member of BAAI. The Institute of Automation is also located in Zhongguancun. Its entrance bears testament to the Chinese Communist Party’s longstanding interest in technological innovation: Black-and-white photographs show Mao Zedong meeting with scientists there, alongside color ones showing Xi Jinping, China’s current leader, doing the same. Yi Zeng, a fresh-faced researcher at the Automation Institute, is also director of BAAI’s Research Center for AI Ethics and Safety. His group produced a code of ethics covering uses of AI on behalf of the Beijing city government.
The code, which is voluntary for now, says that AI should not discriminate, should not be used in ways that pose safety risks, and that end users should be able to opt out if AI systems misbehave. Zeng showed me a chart of 47 AI ethics codes drawn up by companies and governments in different countries. He said that his group wants to talk to researchers from around the world about issues such as AI bias and privacy protection, but he sidestepped questions about government surveillance. Some students of China believe the Chinese Communist Party is in fact wrestling with the ethical implications of AI algorithms—at least those used by private industry—just as much as Western governments are. In November, government regulators blocked Ant Group, a financial tech spin-off of Alibaba, from completing its planned IPO in Hong Kong and Shanghai. The government also said it would investigate Alibaba for possible antitrust abuses. Inkster, the author of The Great Decoupling, says the government is “making strenuous efforts to remind the private sector in China they exist at the government's pleasure.” The Chinese government is preparing a major new privacy law that will limit what data companies can collect and use—but also reinforces the state’s access to data for law enforcement and surveillance. Some work underway at BAAI reflects this new era. In response to the pandemic, a team at BAAI developed a Bluetooth Covid contact-tracing app that can alert people to possible exposure without collecting identifying information. A BAAI spokeswoman says this has been tested at several offices around Zhongguancun. Noam Yuchtman, a professor at the London School of Economics, has published work that uses evidence from China to suggest that AI benefits uniquely from state intervention, because algorithms are so hungry for data and computing power that governments have access to. But he adds that such a fast-moving and unpredictable technology may also pose problems for governments.
“Innovation by its very nature is sort of uncertain, and perhaps nowhere more so than in AI,” he says. Updated 1-22-21, 9:48 pm EST: This story has been updated to correct Tiejun Huang's title. He is director of BAAI, not codirector as previously stated.

Palantir’s God’s-Eye View of Afghanistan

The company’s software can sift through enormous amounts of data, and those metrics can be used to make life-or-death decisions. This is an excerpt from the book First Platoon, by Annie Jacobsen, about the US Defense Department’s quest to build the most powerful biometrics database in the world: a system that can tag, track, and locate suspected terrorists in a war zone. But as the world continues to battle a deadly pandemic, these big-data surveillance systems are playing an increasingly intrusive role in our daily lives. In a steel container inside an American outpost in southern Afghanistan, an aerostat operator named Kevin was watching a man defecate in a farmer’s field. The man was wearing a purple hat. It was 2012, and Kevin was serving as mission director for the Persistent Ground Surveillance System team stationed here at Combat Outpost Siah Choy, located in the heart of volatile Zhari District. The PGSS (pronounced pee-jiss) team spent 24 hours a day, seven days a week, watching an area that included 20 or so small Afghan villages. Their job was twofold. On the one hand, they watched four individual platoons of American soldiers who’d been deployed to this area, including the approximately 30 young men who made up First Platoon. Whenever one of these platoons stepped off base to conduct a patrol, the PGSS team “covered down” on the soldiers, keeping an eye out for indicators of a pending attack. The rest of the time, the team observed locals under suspicion of terrorist activity, which is why Kevin was watching the man in the purple hat. The question at hand: Was he squatting down to go to the bathroom, or to bury an IED? An aerostat is a giant surveillance balloon. Its onboard cameras and sensors suck up vast amounts of data on what’s happening on the ground. That raw data gets processed, organized, and aggregated into an army intelligence product thanks to software developed by Palantir Technologies.
Launched almost two decades ago with seed money from the CIA, the Silicon Valley startup had managed to solve a problem plaguing the Pentagon: After years of accumulating surveillance video captured by drones, airships, and aircraft flying over Iraq, the armed forces had, quite literally, millions of hours of footage sitting in archives taking up space. “We’re going to find ourselves in the not too distant future swimming in sensors and drowning in data,” Lieutenant General David Deptula warned colleagues in 2009. In one single year, the Air Force alone had collected more video footage in Iraq than a person could watch 24 hours a day, seven days a week, over the course of 24 continuous years. What to do with all that information? Palantir’s software could sift through volumes of raw, or unstructured, data, then organize and structure it in a way that made search and discovery features possible. Search for, and discovery of, say, a man in a purple hat. “I could see everything,” Kevin says, referring to the aerostat’s technology-enabled omniscience, sometimes called the God’s-eye view. “The only way I didn’t see something was if I wasn’t looking at it.” Kevin is an expert in what’s called pattern-of-life analysis, an esoteric discipline that involves establishing a person’s identity based on his or her cumulative habits, much of which is captured from overhead surveillance. The man going to the bathroom was deemed a person of interest, and Kevin was working to establish his pattern of life in pursuit of a new methodology called activity-based intelligence, or ABI. The first, fundamental premise of activity-based intelligence: You are what you do. The PGSS aerostat that Kevin was in charge of was a 72-foot-long balloon called a 22M (for meters) in contractor parlance. It was not a dirigible, meaning it was not steerable and did not navigate through the air on its own power. 
The 22M was tethered to a mooring station inside the combat outpost at Siah Choy, attached by a 2,700-foot cable made of fiber optics, rubber, and Kevlar wrap. The flatbed surface on the mooring station reminded Kevin of a merry‑go‑round because it could rotate 360 degrees. “It could swivel back and forth to allow for wind relief, [which] mattered in the summer months, when the 120 Days of Wind kicked in,” he said, referring to Afghanistan’s strong seasonal winds. (He would later say they reminded him of the Santa Anas in Southern California, where he grew up.) The equipment attached to the balloon afforded Kevin a clear view of the soldiers, their outpost, called Strong Point Payenzai, and its environs. For the most part, First Platoon’s soldiers were unaware that an aerostat with a suite of electro-optical-infrared high-definition sensors and cameras was able to watch them as they walked around their area of operations—through grape rows, down footpaths, and into the villages on their twice-daily patrols. “The idea was, do not let anyone know we exist,” Kevin says. “Occasionally one of the Joes”—contractor vernacular for soldiers—“an NCO usually, would use us as a way of saying ‘We are watching you.’ And we’d say, ‘No, no, no, don’t do that.’ We’d end up with some villager at our combat outpost saying, ‘I know you’re watching us. Tell me who stole my goat.’ This actually happened.” The imaging system, dubbed the MX-15, attached to the underbelly of the aerostat was roughly the size of a beach ball. It weighed 100 pounds and carried an array of cameras for intelligence, surveillance, and reconnaissance purposes. Its ability to see people and objects on the ground was extraordinary; it could make out an unusual modification on the buttstock of an AK-47 from 2 miles away, Kevin recalls. This image feed was livestreamed to several 40-inch monitors inside the steel container where the PGSS team worked. Separately, the data went to Palantir’s database for processing. 
Working as a pattern-of-life expert at Siah Choy, Kevin spent hours a day, seven days a week, watching all kinds of people go about their lives, with the goal of separating out the insurgents from the civilians. Pattern-of-life analysis means watching seemingly innocent behavior hour after hour, eyes focused for when the behavior of a person of interest might take a unique turn. Machines can’t yet perform this function; only humans have conscious reasoning. The PGSS team had reason to believe the person of interest in the purple hat was a terrorist. Because purple was an uncommon color for a hat in Zhari District, it had become a unique marker for the man. After watching him for weeks, they’d determined he was a bomb emplacer, meaning he buried IEDs for the Taliban. The team had established his bed-down location: He lived across the Arghandab River, on the south side. Like many of the people Kevin and his team tracked, this individual insurgent was still an anonymous person to them. “He would get up every morning, turn on an IED belt, a series of IEDs strung together by det [detonation] cord, to protect himself and his perimeter,” Kevin recalls. “We elevated him to 429 status through his actions.” 429 status is what happens when a person of interest completes three “interactions with the ground.” These are actions that allow that individual to be moved out of civilian status and into insurgent status—to be targeted and killed legally according to army rules of engagement. The three interactions with the ground were specific: “If I see him interacting with the ground, and then I see the pressure tank going in, or [if] I see the charge going in, and him stringing the lamp cord out to install his pressure plate or his battery connection … That’s one, two, and three for me,” Kevin says. This is activity-based intelligence acquired through persistent surveillance from above. But activity-based intelligence as a methodology is predicated on a more radical idea.
“By understanding [a person’s] pattern of life, analysts can build models of potential outcomes and anticipate what may happen,” explains Patrick Biltgen, a senior mission engineer who worked on the persistent surveillance system ARGUS-IS (shorthand for Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System), glorified by some for its mind-blowing technological achievements and criticized by others as paving the way for an all-seeing, digital panopticon. Activity-based intelligence began in the war theater with the presumption You are what you do, but it is now being pushed into a new realm, says Biltgen. One that asserts: Because we know what you did, we think we know what you are going to do next. “Just like [the film] Minority Report predicted in 2002.” Today, some of these same methodologies are being used by the US federal government, including the Department of Health and Human Services, to tag, track, and locate people who might carry the coronavirus. This raises concerns among legal scholars about a burgeoning biometric cybersurveillance state. In Afghanistan in 2012, when not covering down on a specific mission, the PGSS team at Combat Outpost Siah Choy would watch persons of interest like the man in the purple hat, waiting for three key interactions with the earth that would allow for 429 status to be assigned to the person of interest. As soon as the criteria were met, the PGSS team would notify the army’s S2 intelligence officer, a lower-echelon intelligence officer working in the tactical operations center, but with profound, direct influence over what might happen next. The S2 would monitor the situation by watching the full-motion video feed. At the same time, one of the aerostat’s flight engineers would begin reviewing the feed from minutes before.
“Rolling it back in time,” Kevin explains, “in order to take snapshots of the three interaction-with-the-earth events.” The 429 package, which allows an insurgent to be killed in an air strike, must meet the legal requirements. The full-motion video gets snapshotted as evidence. While this is going on, the PGSS operator quickly generates a PowerPoint containing all the data, which goes to the S2. The S2 quickly reviews that, then sends the information to the battle captain. “He takes that info,” Kevin explains, “and he washes it through Palantir.” Although Kevin carries a top secret clearance, as a PGSS operator he would not be able to access Palantir’s database. “That’s an S2 function,” he explains, meaning the classified data being aggregated by Palantir is proprietary. The job of a pattern-of-life expert is to find out “who is who” and “who is doing what.” As defense contractors, PGSS operators do not have the legal authority to decide who gets to kill whom. “The military application of Palantir is awesome,” Kevin says. Palantir is capable of mining and aggregating data on individual people in a manner that would astonish almost anyone. But he thinks the growing movement among law enforcement agencies in the United States to use Palantir’s software programs domestically is cause for alarm. For example, in 2017, a joint effort by the Department of Homeland Security, Immigration and Customs Enforcement, and Health and Human Services relied on Palantir Technologies to tag, track, locate, and arrest 400 people in an operation targeting family members and caregivers of unaccompanied migrant children. Human rights organizations and privacy advocates cried foul. “The fact that there’s other moves afoot to actually use Palantir in the United States, I think that’s very, very bad, because of the type of 360 [degree] metrics that are collected,” Kevin warns. “I’m not kind of saying, ‘Hey, I’m scared of Big Brother.’ That’s not my view. 
But that is exactly what Palantir is capable of.” In Afghanistan in 2012, there was a geospatial intelligence gap as far as biometrics was concerned. PGSS operators were able to physically locate and track individual persons of interest who were still anonymous—meaning they were fighters whose identities were not yet known. These individuals were being watched because of what they did. Separately, the Defense Department maintained its Automated Biometric Identification System, or ABIS, database, which contained the biometric profiles of millions of individuals in Afghanistan, some of whom had already been classified as most-wanted terrorists. Biometrics meaning fingerprints, facial images, iris scans, and DNA—body measurements taken wittingly, by American soldiers on presence patrols, as well as unwittingly, having been lifted off captured bomb parts and weapons caches. In 2012, there was no technology-based way to bridge this gap. Meaning the MX-15 camera on the aerostat could not cross-reference what it saw with biometric information from the ABIS database. On occasion Kevin would participate in a go-around. “I would get a slice of data from Palantir [via S2] saying, ‘Hey, this is this guy we’re interested in.’ The request would be ‘Try and locate him.’” Included in the slice of data from Palantir would be an image of the man’s face. “I’d get a picture of him,” Kevin says. “I’d also get, maybe, one or two degrees of people that he knows, and areas that he’s known to travel in.” When Kevin says “degrees of people,” he means individuals the person of interest is linked to, associates with, or has been determined to know. The power of Palantir lies in the connections it can make between people, searching vast amounts of data, analyzing patterns, and making connections that would take humans a huge amount of time to figure out.
Because Palantir’s algorithms can gather data about a person’s activities in the past, in 2012, the machines were “learning” how to make predictions about this same person’s activities in the future. In addition to the images of the associates, Kevin would often get predictions about “a general area where [the person] could be traveling.” Once the PGSS team located who they thought was the actual person of interest, “we’d kind of do a self-check, to follow him.” Meaning the initial hunt began with a computer, but it was now fact-checked by a human. “This is basically what I do. I follow his bed-down location. I track every building that he walks to. I determine his daily pattern of life. When does he pray? When does he eat? When does he go to the bathroom? When does he wake up? When does he sleep? The data cuts from Palantir are like a bread-crumb trail for me to go down. At the same time, if I see something, then that’s me generating a report. And that becomes data in Palantir.” Once an individual is determined to be a known IED emplacer, like the man in the purple hat, and he has been designated a “429 package,” then one of two things happens. “If there is an asset available, if CAS,” close air support like attack helicopters and drones “is in the vicinity, then it is time to take the target out.” If there’s not air support available, then the person of interest remains marked for death in the system. “The moment there is a target of opportunity to take him out, I call it in. I don’t have to go back through the approving process,” Kevin says. “The 429 package stands. That’s why it’s called a Target of Opportunity. When you have the opportunity, you strike the target.” You kill the man. One morning, Kevin came into the ops center. The overnight team was excited. One of them said, “We’re about to kill the man in the purple hat.” Kevin had personally watched this man bury IEDs and train others how to emplace bombs. He leaned in close to the screen. 
“Where is he?” he asked his colleague. The colleague pointed to the screen. “Here,” he said, “talking to this other farmer,” and he pointed to a man seated on a tractor. Kevin examined the image feed. The man on the tractor was talking to an old man, who appeared to be another farmer. Kevin stared at the man in the purple hat. “That’s a Massey Ferguson tractor he’s sitting on,” Kevin said, pointing at the screen. “Yep,” the colleague agreed. Kevin explains what went through his mind in 2012. “I’d burned a lot of time and effort trying to locate and kill this guy, because he was a terrorist cell leader. I knew his face. I knew his gait. I knew his build. I knew what he looked like, and I knew he wore a purple hat. I knew he wore white and black man-jams [traditional outfit]. I knew the color of his shawl, his little body wrap, and I knew where he lived.” Standing in the C2 shelter at Siah Choy, in front of the video screens, the colleague spoke, “We’re getting ready to hit him now,” he said. “CAS is on the way.” “That isn’t him,” Kevin said. “That is absolutely not him.” Kevin was certain of this. “I thought, wow, that looks like him. But something just gave me a tickle that that wasn’t him. For a lot of different reasons. Number one, he’s not a worker. He’s a bad guy. Bad guys don’t tool around on tractors and play farmer. They are bad guys.” The tractor was a legitimate and expensive tractor, one only a farmer would have. “Why is he on a tractor?” Kevin asked himself. “Why is he talking to this old man in this field?” The more Kevin looked at the man in the purple hat, the more he realized something was wrong. “I became confused. I said to myself, ‘Well, I mean, fuck, it looks like him, but I don’t think it is him.’” Then he became very stressed out, he recalls. “Hands-down, I wanted the man in the purple hat dead. I still do to this day. But we’re talking about killing someone.” Metaphorically, he says, he had his finger on the button. 
“If that kills an innocent civilian? I don’t want that.” Kevin ran out of the operations center, across the outpost and into the tactical operations center. “I told the S2 they had to call off the air strike. It’s not him,” Kevin told the battle captain. The tactical operations center spun into action. One of the S2 intelligence officers confirmed that Brigade Headquarters, located a few miles north at Forward Operating Base Pasab, had already authorized the air strike. That close air support was on the way. “I said, ‘I’m certain it’s not him.’” Kevin remembers. The battle captain said to him, “Well, you’ve got five minutes to figure that out and prove me wrong.” Kevin said that’s what he’d do. Kevin ran back to the C2 shelter. “I [moved] the camera over to his actual bed-down location. He lived right across the river. I waited and waited. It felt like half an hour. It was probably more like a few minutes. Finally he came out. I recognized him right away.” Kevin was looking at the man with the purple hat. The insurgent whose pattern of life he’d been tracking for hundreds of hours. “He walked out of where he slept to go to the bathroom, wash his hands, stretch. I had visual positive identification on him.” S2 called off the air strike. “Had a computer done the algorithm on the guy on the tractor, as far as the computer was concerned, that was him. The insurgent in the purple hat,” Kevin says. “But because I had already been watching this guy for months, I knew that it wasn’t.” Humans are still the ultimate recognizers. “We humans have the ability to recognize faces. It’s part of our genetics. Of however many thousands of years of being a hunter-gatherer. Of being able to spot recognizable features. I knew his face. I doubted the computer. I was right.” How was the farmer on the tractor misrecognized as the cell leader in the purple hat in the first place? 
After the air strike was called off, and the man was spared execution, the PGSS operators rolled back the videotape to review what had happened. To see what they could learn. “It was his hat,” Kevin explains. “There’s a window of time, around dawn, as the sun comes up,” he says, when colors are “read differently” by the imaging system than how it sees them during the day. In this window of time, the farmer’s hat was misidentified as purple, setting off a series of linkages that were based on information that was erroneous to begin with. But what if the S2 shop had killed the farmer in the purple hat in error? And what if, out of fear of backlash over yet another civilian casualty, the data that showed otherwise was deleted so that it would never become known? This invites the question: Who has control over Palantir’s Save or Delete buttons? “Not me,” says Kevin. “That’s an S2 function.” Who controls what data gets saved as potential evidence, and what data gets deleted—including data that could potentially support a defense? What happens to the rule of law when individual citizens are persistently surveilled without knowledge of, or access to, the information that is being collected on them? The Department of Defense won’t answer these questions on the grounds that its war-fighting systems are classified. But persistent surveillance systems similar to the PGSS are now being used to watch and collect data on Americans back home, always under the premise of rule of law. Privacy issues regarding persistent surveillance are being debated in the courts at a snail’s pace, while advances in machine learning are moving forward at science-fiction-like speed. Palantir cofounder and CEO Alex Karp sees that as an existential challenge for Palantir. “The present and the future ability to control the rule of law and its application will be determined by our ability to harness and master artificial intelligence and its precursor, machine learning,” Karp says.
The global pandemic has pushed the use of military-grade surveillance technologies on American citizens, and to an alarming degree: On April 10, 2020, the US Department of Health and Human Services (HHS) entered into a no-bid contract with Palantir Technologies to track the spread of the coronavirus. The goal of the HHS Protect Now program, explains former CIA officer Christopher Burgess, is to “bring disparate data sets together and provide better visibility to HHS on the spread of Covid.” HHS confirmed that the data that Palantir is now mining includes “diagnostic testing data, geographic testing data, [and] demographic statistics,” meaning information about individual American citizens’ health, location, family, and tribe. The initial HHS announcement said Palantir would have access to 187 data sets. That number has since grown to 225. Unknowns abound: What data is going into the Palantir system, how is it shared, with whom, and for how long? What safeguards are in place to prevent HHS from sharing identifiable personal data with its federal law enforcement partners—just as it did in 2017, with ICE? “Given how tight-lipped both HHS and Palantir have been over the program, we don’t fully know,” says Lauren Zabierek, executive director of the Cyber Project at Harvard Kennedy School’s Belfer Center. Zabierek is a former Air Force officer who also served as a civilian analyst with the National Geospatial-Intelligence Agency (NGA) in three war zones, including in Kandahar in 2012. “I sincerely hope that HHS Protect Now will do nothing resembling finding and fixing certain entities,” she says, using military nomenclature for locating and killing IED emplacers in the war zone. “I hope that the data sets will only be used to understand the spread of the virus in the aggregate.” But of course how could we ever be sure of that? Machines make mistakes, the implications of which are both known and unknown. Just ask the man in the purple hat. 
Adapted from First Platoon: A Story of Modern Warfare in the Age of Identity Dominance, by Annie Jacobsen. Copyright © 2021 by Annie Jacobsen. Published by arrangement with Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.