On February 27 the La Voz de Michoacán website ran a story about the announcement of Pokémon Sword and Shield. The story made no mention of Pokémon Gun, the joke game that does not exist, but it did include a retweeted version of @Sir5000’s image.
It was an odd choice of tweet to include. Then the print version of the paper, published on March 2, made an even odder choice. The story, written by Mario Plancarte, contained no mention of Pokémon Gun. The accompanying art, however, included all three logos, each corresponding to one of the game’s starters. The glorious Scorbunny and weeping Sobble are Shield and Sword respectively, while grass monkey Grookey looks like he’s reaching for a pistol.
Twitter user Cin Ceja brought the article to light on March 3 with the message, “My local newspaper, always doing research before publishing anything.”
According to Plancarte, the print article’s author, he was only responsible for the written portion of the page. The department responsible for formatting the article appears to have combined official Nintendo art of the starters with the two real logos and the one fake. It looks like the paper even scrubbed @Sir5000’s watermark from the Gun logo. La Voz de Michoacán issued an apology for the silly image earlier today.
We’re just disappointed that the newspaper wasn’t reporting a huge scoop. We’d really like Nintendo to make a game called Pokémon Gun.
Password managers like LastPass exist because (A) you should be using strong, unique, difficult-to-guess passwords for every important account, and (B) remembering several or even dozens of passwords is no easy task. Fortunately, the web is moving toward an easier and more secure way of doing things thanks to WebAuthn.
WebAuthn, which is short for Web Authentication, is an API that allows for secure, passwordless logins. The API itself is not new, but until now it was not an official standard. The World Wide Web Consortium (W3C) changed that today by declaring WebAuthn an official web standard, thus paving the road for more widespread adoption.
“Now is the time for web services and businesses to adopt WebAuthn to move beyond vulnerable passwords and help web users improve the security of their online experiences,” said Jeff Jaffe, W3C CEO. “W3C’s recommendation establishes web-wide interoperability guidance, setting consistent expectations for web users and the sites they visit. W3C is working to implement this best practice on its own site.”
The WebAuthn API allows users to log into websites using biometric security measures, such as fingerprint scanning or facial recognition. It can also be used with FIDO security keys that plug into USB ports, and mobile devices such as smartphones to verify a user’s identity.
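For developers, the browser side of WebAuthn comes down to a couple of calls on navigator.credentials. Here is a minimal sketch of what a sign-in might look like, assuming the server has already registered the user’s authenticator and supplies the random challenge (the function name and parameter values are illustrative, not from any particular library):

```typescript
// Minimal WebAuthn sign-in sketch. In a real site, the challenge comes from
// the server and the signed assertion is sent back for verification.
async function signIn(challengeFromServer: ArrayBuffer) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer, // server-issued random bytes
      timeout: 60000,                 // give the user a minute to respond
      userVerification: "required",   // e.g., fingerprint, face, or PIN
    },
  });
  // The authenticator signs the challenge with a private key that never
  // leaves the device; the server checks the signature against the stored
  // public key. No password is ever transmitted.
  return assertion;
}
```

A matching navigator.credentials.create() call handles the initial registration, where the authenticator generates the key pair in the first place.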
Having a standard is one thing, but actually utilizing it is another. One thing that works in the API’s favor is that it is already supported in Windows 10 and Android, and by every major web browser. That includes Chrome, Firefox, Edge (which is being rebuilt around the same engine as Chrome), and Safari.
That said, support is not automatic—websites have to specifically code support for the API. It’s W3C’s hope that this will happen sooner rather than later.
“Web services and apps can—and should—turn on this functionality to give their users the option to log in more easily via biometrics, mobile devices and/or FIDO security keys, and with much higher security over passwords alone,” W3C says.
For now, you’ll still have to remember your passwords, or offload the task to a password manager. Perhaps not for long, though.
Following last month’s update, which brought tweaks to the About menu, and coinciding with the new ability to schedule update restarts, the March security patch is now rolling out to Pixel devices. In addition to vulnerability fixes, this version makes several improvements to the Pixel 3 and Pixel 3 XL.
As usual, Google notes that there are no reports of customers being affected by these security issues. In its 2017 Android security year in review, the company said 30% more devices received patches than in the prior year. As in February, the Pixel C from 2015 is still receiving Android 8.1 patches.
The March security patch resolves 27 issues dated 2019-03-01 and 18 dated 2019-03-05. The vulnerabilities range from high to critical in severity, with the most severe affecting the media framework, where a remote attacker could potentially execute arbitrary code through a crafted file.
Google has temporarily disabled Android TV photo sharing after reports of a brutal bug popped up on Twitter. Through the Google Home app, Android TV has a “linked accounts” feature that lets several people (like, say, you and your spouse) share photos from multiple Google Photos accounts to the TV. Apparently something went horribly wrong with this feature earlier, as pictures and videos show hundreds of accounts from strangers listed under “linked accounts.”
When I access my Vu Android TV through the @Google Home app, and check the linked accounts, it basically lists what I imagine is every single person who owns this television. This is shocking incompetence. pic.twitter.com/5DGwrArsco
Twitter user Prashanth posted two videos of the “Linked Accounts” feature going haywire. After the above video scrolling through linked accounts, a second shows him enabling random accounts. “Oh my god,” Prashanth tweeted. “Private @googlephotos of strangers are being shown to me in the ambient mode screensaver. SERIOUSLY WHAT THE F–K?!” While the main profile photos of strangers were appearing under his account, Prashanth noted that the display of entire photo collections did not work.
Prashanth speculated earlier that the issue was a problem with his Vu TV, since his other Android TV device, a Xiaomi Mi Box 3, did not show random accounts. A Vu spokesperson who spoke to Gadgets 360 blamed the issue on Google, though. “We were recently notified that there was a malfunction of Google Home App in some of the Android TVs,” the spokesperson said. “After verifying the incident we have informed our customers that it was not an issue of Vu Television but it was software malfunction of the Google Home App. We take your privacy very seriously. Vu has a long-standing commitment to protecting the privacy of the personal information that our customers entrusts to us.”
The issue is clearly not happening on every Android TV device, just some subset of devices. Another user reported seeing a similar issue on an iFFalcon Android TV device. Prashanth noted that his device was running Android 7.0 Nougat and “hasn’t got a security patch since 2017.”
After a back and forth with the @madebygoogle account on Twitter, Google disabled photo sharing temporarily. In a statement to Ars and others, a Google spokesperson said, “We take our users’ privacy extremely seriously. While we investigate this bug, we have disabled the ability to remotely cast via the Google Assistant or view photos from Google Photos on Android TV devices.”
Update: We updated this to clarify that Prashanth only saw strangers’ profile pictures in the app, and couldn’t cast whole photo collections to his TV.
I remember walking to my local GameStop to pick up the system when it hit the West in February of 2012, and man, it was a hell of a honeymoon period. The Vita launched with a ton of decent games, with the amazing Gravity Rush (this was before the console port was even a twinkle in Sony’s eye) coming shortly after. The system felt great, looked fantastic, and offered a ton of backward compatibility through PSOne Classics. I pretty much never ran out of things to play on it from day one.
Yet the portable was pretty much doomed from the start by Sony’s incredibly questionable decision to force people into buying proprietary, completely price-jacked memory cards: a harrowing prospect in the face of the cheaper 3DS, which takes standard SD cards. From there things got worse, as many third parties pulled out of the system, with the exception of Atlus, which remained until the Vita’s dying breath. After just three years and some change, Sony announced that it was ceasing production of AAA titles for the system. It even pulled direct support for some of its apps and functionality from the Vita near the end.
Despite all of those missteps, my launch model Vita remains plugged in and constantly charging in my room. I go back to it time and time again to replay various PlayStation classics and niche games that were never ported to the PS4. Good night, sweet prince.
Nearly two years ago, former Twitter for iOS tech lead Ben Sandofsky and ex-Apple designer Sebastiaan de With launched Halide, the beautiful pro camera app for iPhone that’s proven both popular and powerful. Today the duo launches its second app, a companion camera called Spectre that focuses on intelligent long exposure photography.
Spectre lets you erase moving tourists from busy locations or capture light trails and flowing water with the camera on your iPhone:
MAKE CROWDS DISAPPEAR: Spectre helps take photos of crowded, touristy places. Simply set a medium or long duration and take a shot to erase the crowd from a location. Capture those special moments like you remember them.
RIVERS OF LIGHT: At night, AI scene detection automatically switches modes to draw beautiful light trails, for stunning nighttime city photography or light painting.
GORGEOUS WATER: Capture waterfalls, ocean waves, fountains and more with beautiful ghostly water.
Spectre includes a visual indicator of how stable you’re holding the iPhone while capturing a long exposure shot to help you steady your hand. If you shoot with an iPhone 8 or later, Spectre uses Apple’s Neural Engine to drive AI-based stabilization features to improve shots.
The app is very approachable, too, with few settings to tweak. Just aim, hold steady, and shoot from your iPhone. A slider toggle on the bottom right of the app lets you select between 3-, 5-, and 9-second capture durations.
You can also tap to change the focal point, see and adjust brightness with a vertical slider by tapping the top right corner, and adjust light trail capture mode between auto/on/off.
Spectre also supports Live Photos for long exposure shots so you can capture motion and not just still shots. You can also use Siri Shortcuts to create voice commands for capturing long exposure shots with Spectre hands-free.
Find out more about Spectre at spectre.cam and download the new camera from the App Store. Spectre is available at a limited-time launch price of $1.99; the price will increase after launch.
During the Ebola epidemic in West Africa in 2014, medical workers collected hundreds of thousands of samples of blood from victims and those presumed to be infected, in an effort to stem an epidemic that eventually took more than 11,000 lives.
After that outbreak subsided, most of the samples were believed to have been destroyed. But recent reporting by The Telegraph in London revealed that thousands of samples were not destroyed but, rather, shipped out of West Africa. The samples’ location isn’t clear—The Telegraph’s freedom of information request was turned back by the UK government—but they are believed to be in the custody of national health agencies, and possibly pharmaceutical companies, in Western Europe and the United States.
That those samples passed out of the countries where they originated is a scandal in the making, because if they provide the raw material for diagnostics or remedies made by Western companies, those products may be unaffordable to the countries where the samples originated.
Developing nations have long insisted that richer countries and their corporations should compensate them for their biological resources. They consider it colonialism for the bioprospecting age: Instead of stripping the developing world of its precious metals, timber, or minerals, the nations of the West are mining for microbes and other biological source materials. Often the developing world’s objections go nowhere. But in a few cases, bolstered by an evolving body of international rules, countries that feel their patrimony has been stolen have fought back and won.
In 2007, for instance, Indonesia refused to share samples of the avian flu strain H5N1—which at that point had killed more than half of those who came down with it—with the international network of labs that monitored the virus’ movement and evolution. The country’s health minister held them back in protest after she learned that an Australian company had obtained some of Indonesia’s flu viruses and developed a test vaccine from them; she feared Indonesia would not get access to the vaccine or be able to afford it.
A second clash over flu vaccine showed that her concerns had been reasonable. A different flu strain, H1N1, caused a worldwide epidemic in 2009. As before, vaccine-making swung into gear. But it soon emerged that the Pacific Rim countries where vaccine strains originate would have little chance to buy the vaccine, because the affluent northern countries where manufacturers are based had placed advance orders that would use up the new supply.
In the aftermath of those linked crises, the World Health Organization developed its Pandemic Influenza Preparedness Framework, which commits member countries to equitably sharing viruses and the vaccines made from them. More practically, it drew vaccine manufacturers into an agreement in which they would return a small portion of profits to countries where strains originated, so the countries’ own pandemic defenses could be strengthened.
That agreement, which only covers flu, could serve as a global model of sorts. But it doesn’t address the broader question of a poor country losing control of its biological resources to a rich one. The international covenant that could address that situation, known as the Nagoya Protocol, hasn’t received anywhere near as much support.
The protocol—formally, the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization—became effective in 2014. It’s a subsidiary agreement to the Convention on Biological Diversity, in force since 1993. Among the things the convention covers is fair treatment when it comes to biological resource extraction; the protocol goes further by defining enforcement. Most of the countries in the world have signed and ratified the convention, and a little more than 100 have signed and ratified the protocol. Those numbers include most of the big vaccine-making nations, but notably not the United States, which signed but never ratified the convention, and thus has not endorsed the protocol.
The protocol commits signers to sharing the benefits of commercializing “any material of plant, animal, microbial or other origin containing functional units of heredity” with the country the material came from. That seems straightforward: Develop a product out of a naturally occurring substance, and some of the benefits and revenue should flow back to the place where the substance was collected or bred.
But it’s not obvious that the protocol applies to pathogens like Ebola, given that the underlying convention was intended to protect biodiversity and plant and animal resources. It didn’t anticipate pandemics; at the time, there hadn’t been a worldwide flu epidemic since 1968, or an Ebola outbreak since 1976.
Today the fear is that if it does extend to viruses and bacteria, the protocol will impose paperwork and slow down disease response. Last fall, China stopped sharing new flu strain samples with the United States, and several researchers have argued that the lapse is due to the protocol’s cumbersome requirements.
The World Health Organization’s own FAQ on the protocol seems cautiously negative; it says: “Implementation of the Nagoya Protocol may impact the sharing of pathogens if, for example, it entails multiple, complex and/or time-consuming processes.”
The protocol’s shortcomings aren’t an excuse to keep kicking the issue down the road, however. Until it is solved, health authorities risk re-creating the distrust and abuse that people in the developing world reasonably experience when they feel they have been made a test bed for the West. One lingering, powerful example: the Trovan trial, an exploitative 1996 test of a novel antibiotic set up in Nigeria in the midst of a meningitis epidemic. Almost a decade later, the reaction to that episode—which was the basis of the novel and film The Constant Gardener—was still so potent that it helped fuel northern Nigeria’s 2003 rejection of the polio vaccine, which reseeded the disease across sub-Saharan Africa.
There is a further risk here as well. If attention is not paid to the developing world’s assertions of sovereignty over its biological resources now, it is possible that frustration could boil over into a wider claim for recompense over biological resources plundered in the past.
Consider how much of the industrialized world’s pharmacopeia originated in natural products brought from elsewhere. In the 1960s, it was routine for pharma companies to ask diplomats and missionaries to prospect for them, a search that yielded the antibiotics chloramphenicol (compost, Venezuela); vancomycin (mud, Borneo); and daptomycin (dirt, Turkey). Vincristine and vinblastine, the foundation of mid-century cancer chemotherapy, were extracted from a periwinkle found in Madagascar. Quinine, the foundation of antimalarial drugs for a century, comes from a tree once native to the Andes.
The protocol doesn’t provide a mechanism for developing countries to reach back into the past and claim reparations for plundered resources. That doesn’t mean they might not be willing to try. In 1995, the government of India forced US companies to abandon patents on compounds derived from the herb turmeric and the neem tree, and in 2016 French Guiana compelled the government of France to share the credit and profits for a new antimalarial sourced from an indigenous shrub.
It’s crucial that outbreak detection and response not be compromised. But in the aftermath of outbreaks, it’s equally important that the public health establishment honor the contributions of the countries that were victims of outbreaks—whether those contributions are their blood and tissue or their minerals and plants. Uncompensated resource extraction is a sin of the past, and it should stay there.
In a 1987 voyage to the Antarctic, the paleoceanographer James Kennett and his crew dropped anchor in the Weddell Sea, drilled into the seabed, and extracted a vertical cylinder of sediment. In an inch-thick layer of plankton fossils and other detritus buried more than 500 feet deep, they found a disturbing clue about the planet’s past that could spell disaster for the future.
Lower in the sediment core, fossils abounded from 60 plankton species. But in that thin cross-section from about 56 million years ago, the number of species dropped to 17. And the plankton’s oxygen and carbon isotope compositions had dramatically changed. Kennett and his student Lowell Stott deduced from the anomalous isotopes that carbon dioxide had flooded the air, causing the ocean to rapidly acidify and heat up, in a process similar to what we are seeing today.
While those 17 kinds of plankton were sinking through the warming waters and settling on the Antarctic seabed, a tapir-like creature died in what is now Wyoming, depositing a tooth in a bright-red layer of sedimentary rock coursing through the badlands of the Bighorn Basin. In 1992, the finder of the tooth fossil, Phil Gingerich, and collaborators Jim Zachos and Paul Koch reported the same isotope anomalies in its enamel that Kennett and Stott had presented in their ocean findings a year earlier. The prehistoric mammal had also been breathing CO2-flooded air.
More data points surfaced in China, then Europe, then all over. A picture emerged of a brief, cataclysmic hot spell 56 million years ago, now known as the Paleocene-Eocene Thermal Maximum (PETM). After heat-trapping carbon leaked into the sky from an unknown source, the planet, which was already several degrees Celsius hotter than it is today, gained an additional 6 degrees. The ocean turned jacuzzi-hot near the equator and experienced mass extinctions worldwide. On land, primitive monkeys, horses and other early mammals marched northward, following vegetation to higher latitudes. The mammals also miniaturized over generations, as leaves became less nutritious in the carbonaceous air. Violent storms ravaged the planet; the geologic record indicates flash floods and protracted droughts. As Kennett put it, “Earth was triggered, and all hell broke loose.”
The PETM doesn’t only provide a past example of CO2-driven climate change; scientists say it also points to an unknown factor that has an outsize influence on Earth’s climate. When the planet got hot, it got really hot. Ancient warming episodes like the PETM were always far more extreme than theoretical models of the climate suggest they should have been. Even after accounting for differences in geography, ocean currents and vegetation during these past episodes, paleoclimatologists find that something big appears to be missing from their models—an X-factor whose wild swings leave no trace in the fossil record.
Evidence is mounting in favor of the answer that experts have long suspected but have only recently been capable of exploring in detail. “It’s quite clear at this point that the answer is clouds,” said Matt Huber, a paleoclimate modeler at Purdue University.
Clouds currently cover about two-thirds of the planet at any moment. But computer simulations of clouds have begun to suggest that as the Earth warms, clouds become scarcer. With fewer white surfaces reflecting sunlight back to space, the Earth gets even warmer, leading to more cloud loss. This feedback loop causes warming to spiral out of control.
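To get a feel for how such a loop amplifies warming, consider a toy energy-balance sketch. Every constant and relationship below is invented for illustration; none of it comes from a real climate model:

```typescript
// Toy energy-balance sketch of the cloud-albedo feedback (invented numbers).
const S = 342;         // mean incoming sunlight, W/m^2 (solar constant / 4)
const SIGMA = 5.67e-8; // Stefan-Boltzmann constant, W/m^2/K^4

// Hypothetical rule: each degree of warming above 288 K removes 2% of clouds.
const cloudCover = (t: number) =>
  Math.min(1, Math.max(0, 0.66 - 0.02 * (t - 288)));
const albedo = (clouds: number) => 0.15 + 0.25 * clouds; // surface + clouds

// Find the temperature where emitted heat balances absorbed sunlight,
// letting cloud cover respond to the temperature at every step.
function equilibrium(emissivity: number): number {
  let t = 288; // kelvin
  for (let i = 0; i < 200; i++) {
    const absorbed = S * (1 - albedo(cloudCover(t)));
    t = Math.pow(absorbed / (emissivity * SIGMA), 0.25);
  }
  return t;
}

const today = equilibrium(0.605); // stand-in for the current greenhouse effect
const warmer = equilibrium(0.58); // stronger greenhouse, i.e., more CO2
console.log(`warming with cloud feedback: ${(warmer - today).toFixed(1)} K`);
```

Because the clouds respond to the very warming they help set, the final warming comes out well above what the same greenhouse change would produce with cloud cover held fixed; a true runaway is the extreme case in which the loop fails to settle at all.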
For decades, rough calculations have suggested that cloud loss could significantly impact climate, but this concern remained speculative until the last few years, when observations and simulations of clouds improved to the point where researchers could amass convincing evidence.
Now, new findings reported this week in the journal Nature Geoscience make the case that the effects of cloud loss are dramatic enough to explain ancient warming episodes like the PETM—and to precipitate future disaster. Climate physicists at the California Institute of Technology performed a state-of-the-art simulation of stratocumulus clouds, the low-lying, blankety kind that have by far the largest cooling effect on the planet. The simulation revealed a tipping point: a level of warming at which stratocumulus clouds break up altogether. The disappearance occurs when the concentration of CO2 in the simulated atmosphere reaches 1,200 parts per million—a level that fossil fuel burning could push us past in about a century, under “business-as-usual” emissions scenarios. In the simulation, when the tipping point is breached, Earth’s temperature soars 8 degrees Celsius, in addition to the 4 degrees of warming or more caused by the CO2 directly.
Once clouds go away, the simulated climate “goes over a cliff,” said Kerry Emanuel, a climate scientist at the Massachusetts Institute of Technology. A leading authority on atmospheric physics, Emanuel called the new findings “very plausible,” though, as he noted, scientists must now make an effort to independently replicate the work.
To imagine 12 degrees of warming, think of crocodiles swimming in the Arctic and of the scorched, mostly lifeless equatorial regions during the PETM. If carbon emissions aren’t curbed quickly enough and the tipping point is breached, “that would be truly devastating climate change,” said Caltech’s Tapio Schneider, who performed the new simulation with Colleen Kaul and Kyle Pressel.
Huber said the stratocumulus tipping point helps explain the volatility that’s evident in the paleoclimate record. He thinks it might be one of many unknown instabilities in Earth’s climate. “Schneider and co-authors have cracked open Pandora’s box of potential climate surprises,” he said, adding that, as the mechanisms behind vanishing clouds become clear, “all of a sudden this enormous sensitivity that is apparent from past climates isn’t something that’s just in the past. It becomes a vision of the future.”
The Cloud Question
Clouds come in diverse shapes—sky-filling stratus, popcorn-puff cumulus, wispy cirrus, anvil-shaped nimbus and hybrids thereof—and span many physical scales. Made of microscopic droplets, they measure miles across and, collectively, cover most of the Earth’s surface. By blocking sunlight from reaching the surface, clouds cool the planet by several crucial degrees. And yet, they are insubstantial, woven into greatness by complicated physics. If the planet’s patchy white veil of clouds descended to the ground, it would make a watery sheen no thicker than a hair.
Clouds seem simple at first: They form when warm, humid air rises and cools. The water vapor in the air condenses around dust grains, sea salt or other particles, forming droplets of liquid water or ice—“cloud droplets.” But the picture grows increasingly complicated as heat, evaporation, turbulence, radiation, wind, geography and myriad other factors come into play.
Physicists have struggled since the 1960s to understand how global warming will affect the many different kinds of clouds, and how that will influence global warming in turn. For decades, clouds have been seen as by far the biggest source of uncertainty over how severe global warming will be—other than what society will do to reduce carbon emissions.
Kate Marvel contemplates the cloud question at the NASA Goddard Institute for Space Studies in New York City. Last spring, in her office several floors above Tom’s Restaurant on the Upper West Side, Marvel, wearing a cloud-patterned scarf, pointed to a plot showing the range of predictions made by different global climate models. The 30 or so models, run by climate research centers around the world, program in all the known factors to predict how much Earth’s temperature will increase as the CO2 level ticks up.
Each climate model solves a set of equations on a spherical grid representing Earth’s atmosphere. A supercomputer is used to evolve the grid of solutions forward in time, indicating how air and heat flow through each of the grid cells and circulate around the planet. By adding carbon dioxide and other heat-trapping greenhouse gases to the simulated atmosphere and seeing what happens, scientists can predict Earth’s climate response. All the climate models include Earth’s ocean and wind currents and incorporate most of the important climate feedback loops, like the melting of the polar ice caps and the rise in humidity, which both exacerbate global warming. The models agree about most factors but differ greatly in how they try to represent clouds.
The least sensitive climate models, which predict the mildest reaction to increasing CO2, find that Earth will warm 2 degrees Celsius if the atmospheric CO2 concentration doubles relative to preindustrial times, which is currently on track to happen by about 2050. (The CO2 concentration was 280 parts per million before fossil fuel burning began, and it’s above 410 ppm now. So far, the average global temperature has risen 1 degree Celsius.) But the 2-degree prediction is the best-case scenario. “The thing that really freaks people out is this upper end here,” Marvel said, indicating projections of 4 or 5 degrees of warming in response to the doubling of CO2. “To put that in context, the difference between now and the last ice age was 4.5 degrees.”
The huge range in the models’ predictions chiefly comes down to whether they see clouds blocking more or less sunlight in the future. As Marvel put it, “You can fairly confidently say that the model spread in climate sensitivity is basically just a model spread in what clouds are going to do.”
The problem is that, in computer simulations of the global climate, today’s supercomputers cannot resolve grid cells that are smaller than about 100 kilometers by 100 kilometers in area. But clouds are often no more than a few kilometers across. Physicists therefore have to simplify or “parameterize” clouds in their global models, assigning an overall level of cloudiness to each grid cell based on other properties, like temperature and humidity.
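In code terms, a parameterization is just a function from a grid cell’s bulk properties to an assumed cloudiness. The sketch below is a made-up stand-in, far cruder than anything an actual model uses, meant only to show the shape of the idea:

```typescript
// Made-up cloud parameterization: assign a whole ~100 km grid cell a single
// cloud fraction based on its average state, since the kilometer-scale
// physics inside the cell cannot be resolved directly.
interface GridCell {
  temperatureK: number;     // mean air temperature of the cell, in kelvin
  relativeHumidity: number; // 0..1
}

function parameterizedCloudFraction(cell: GridCell): number {
  // In this toy rule, clouds only form above ~60% relative humidity...
  const humidityTerm = Math.max(0, (cell.relativeHumidity - 0.6) / 0.4);
  // ...and extra warmth nudges the cell toward fewer clouds.
  const warmthPenalty = 0.01 * Math.max(0, cell.temperatureK - 288);
  return Math.min(1, Math.max(0, humidityTerm - warmthPenalty));
}

// A humid, slightly warm cell gets a partial cloud deck under this scheme.
console.log(parameterizedCloudFraction({ temperatureK: 290, relativeHumidity: 0.8 }));
```

How a scheme like this trades humidity against warmth as the planet heats up is exactly the kind of choice on which the global models part ways.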
But clouds involve the interplay of so many mechanisms that it’s not obvious how best to parameterize them. The warming of the Earth and sky strengthens some mechanisms involved in cloud formation, while also fueling other forces that break clouds up. Global climate models that predict 2 degrees of warming in response to doubling CO2 generally also see little or no change in cloudiness. Models that project a rise of 4 or more degrees forecast fewer clouds in the coming decades.
The climatologist Michael Mann, director of the Earth System Science Center at Pennsylvania State University, said that even 2 degrees of warming will cause “considerable loss of life and suffering.” He said it will kill coral reefs whose fish feed millions, while also elevating the risk of damaging floods, wildfires, droughts, heat waves, and hurricanes and causing “several feet of sea-level rise and threats to the world’s low-lying island nations and coastal cities.”
At the 4-degree end of the range, we would see not only “the destruction of the world’s coral reefs, massive loss of animal species, and catastrophic extreme weather events,” Mann said, but also “meters of sea-level rise that would challenge our capacity for adaptation. It would mean the end of human civilization in its current form.”
It is difficult to imagine what might happen if, a century or more from now, stratocumulus clouds were to suddenly disappear altogether, initiating something like an 8-degree jump on top of the warming that will already have occurred. “I hope we’ll never get there,” Tapio Schneider said in his Pasadena office last year.
The Simulated Sky
In the last decade, advances in supercomputing power and new observations of actual clouds have attracted dozens of researchers like Schneider to the problem of global warming’s X-factor. Researchers are now able to model cloud dynamics at high resolution, generating patches of simulated clouds that closely match real ones. This has allowed them to see what happens when they crank up the CO2.
First, physicists came to grips with high clouds—the icy, wispy ones like cirrus clouds that are miles high. By 2010, work by Mark Zelinka of Lawrence Livermore National Laboratory and others convincingly showed that as Earth warms, high clouds will move higher in the sky and also shift toward higher latitudes, where they won’t block as much direct sunlight as they do nearer the equator. This is expected to slightly exacerbate warming, and all global climate models have integrated this effect.
But vastly more important and more challenging than high clouds are the low, thick, turbulent ones — especially the stratocumulus variety. Bright-white sheets of stratocumulus cover a quarter of the ocean, reflecting 30 to 70 percent of the sunlight that would otherwise be absorbed by the dark waves below. Simulating stratocumulus clouds requires immense computing power because they contain turbulent eddies of all sizes.
Chris Bretherton, an atmospheric scientist and mathematician at the University of Washington, performed some of the first simulations of these clouds combined with idealized climate models in 2013 and 2014. He and his collaborators modeled a small patch of stratocumulus and found that as the sea surface below it warmed under the influence of CO2, the cloud became thinner. That work and other findings—such as NASA satellite data indicating that warmer years are less cloudy than colder years—began to suggest that the least sensitive global climate models, the ones predicting little change in cloud cover and only 2 degrees of warming, probably aren’t right.
Bretherton, whom Schneider calls “the smartest person we have in this area,” doesn’t only develop some of the best simulations of stratocumulus clouds; he and his team also fly through the actual clouds, dangling instruments from airplane wings to measure atmospheric conditions and bounce lasers off of cloud droplets.
In the Socrates mission last winter, Bretherton hopped on a government research plane and flew through stratocumulus clouds over the Southern Ocean between Tasmania and Antarctica. Global climate models tend to greatly underestimate the cloudiness of this region, and this makes the models relatively insensitive to possible changes in cloudiness.
Bretherton and his team set out to investigate why Southern Ocean clouds are so abundant. Their data indicate that the clouds consist primarily of supercooled water droplets rather than ice particles, as climate modelers had long assumed. Liquid-water droplets stick around longer than ice droplets (which are bigger and more likely to fall as rain), and this seems to be why the region is cloudier than global climate models predict. Adjusting the models to reflect the findings will make them more sensitive to cloud loss in this region as the planet heats up. This is one of several lines of evidence, Bretherton said, “that would favor the range of predictions that’s 3 to 5 degrees, not the 2- to 3-degree range.”
Schneider’s new simulation with Kaul and Pressel improved on Bretherton’s earlier work primarily by connecting what happens in a small patch of stratocumulus cloud to a simple model of the rest of Earth’s climate. This allowed them to investigate for the first time how these clouds not only respond to, but also affect, the global temperature, in a potential feedback loop.
Their simulation, which ran for 2 million core-hours on supercomputers in Switzerland and California, modeled a roughly 5-by-5-kilometer patch of stratocumulus cloud much like the clouds off the California coast. As the CO2 level ratchets up in the simulated sky and the sea surface heats up, the dynamics of the cloud evolve. The researchers found that the tipping point occurs, and stratocumulus clouds suddenly disappear, because of two dominant factors that work against their formation. First, when higher CO2 levels make Earth’s surface and sky hotter, the extra heat drives stronger turbulence inside the clouds. The turbulence mixes moist air near the top of the cloud, pushing it up and out through an important boundary layer that caps stratocumulus clouds, while drawing dry air in from above. Entrainment, as this is called, works to break up the cloud.
Second, as the greenhouse effect makes the upper atmosphere warmer and thus more humid, the cooling of the tops of stratocumulus clouds from above becomes less efficient. This cooling is essential, because it causes globs of cold, moist air at the top of the cloud to sink, making room for warm, moist air near Earth’s surface to rise into the cloud and replenish it. When cooling gets less effective, stratocumulus clouds grow thin.
Countervailing forces and effects eventually get overpowered; when the CO2 level reaches about 1,200 parts per million in the simulation—which could happen in 100 to 150 years, if emissions aren’t curbed—more entrainment and less cooling conspire to break up the stratocumulus cloud altogether.
To see how the loss of clouds would affect the global temperature, Schneider and colleagues inverted the approach of global climate models, simulating their cloud patch at high resolution and parameterizing the rest of the world outside that box. They found that, when the stratocumulus clouds disappeared in the simulation, the enormous amount of extra heat absorbed into the ocean increased its temperature and rate of evaporation. Water vapor has a greenhouse effect much like CO2, so more water vapor in the sky means that more heat will be trapped at the planet’s surface. Extrapolated to the entire globe, the loss of low clouds and rise in water vapor leads to runaway warming—the dreaded 8-degree jump. After the climate has made this transition and water vapor saturates the air, ratcheting down the CO2 won’t bring the clouds back. “There’s hysteresis,” Schneider said, where the state of the system depends on its history. “You need to reduce CO2 to concentrations around present day, even slightly below, before you form stratocumulus clouds again.”
Paleoclimatologists said this hysteresis might explain other puzzles about the paleoclimate record. During the Pliocene, 3 million years ago, the atmospheric CO2 level was 400 ppm, similar to today, but Earth was 4 degrees hotter. This might be because we were cooling down from a much warmer, perhaps largely cloudless period, and stratocumulus clouds hadn’t yet come back.
Past, Present, and Future
Schneider emphasized an important caveat to the study, which will need to be addressed by future work: The simplified climate model he and his colleagues created assumed that global wind currents would stay as they are now. However, there is some evidence that these circulations might weaken in a way that would make stratocumulus clouds more robust, raising the threshold for their disappearance from 1,200 ppm to some higher level. Other changes could do the opposite, or the tipping point could vary by region.
To better “capture the heterogeneity” of the global system, Schneider said, researchers will need to use many simulations of cloud patches to calibrate a global climate model. “What I would love to do, and what I hope we’ll get a chance to do, is embed many, many of these [high-resolution] simulations in a global climate model, maybe tens of thousands, and then run a global climate simulation that interacts with” all of them, he said. Such a setup would enable a more precise prediction of the stratocumulus tipping point or points.
There’s a long way to go before we reach 1,200 parts per million, or thereabouts. Ultimate disaster can be averted if net carbon emissions can be reduced to zero—which doesn’t mean humans can’t release any carbon into the sky. We currently pump out 10 billion tons of it each year, and scientists estimate that Earth can absorb about 2 billion tons of it a year, in addition to what’s naturally emitted and absorbed. If fossil fuel emissions can be reduced to 2 billion tons annually through the expansion of solar, wind, nuclear and geothermal energy, changes in the agricultural sector, and the use of carbon-capture technology, anthropogenic global warming will slow to a halt.
What does Schneider think the future will bring? Sitting in his office with his laptop screen open to a mesmerizing simulation of roiling clouds, he said, “I am pretty—fairly—optimistic, simply because I think solar power has gotten so much cheaper. It’s not that far away from the cost curve for producing electricity from solar power crossing the fossil fuel cost curve. And once it crosses, there will be an exponential transformation of entire industries.”
Kerry Emanuel, the MIT climate scientist, noted that possible economic collapse caused by nearer-term effects of climate change might also curtail carbon emissions before the stratocumulus tipping point is reached.
But other unforeseen changes and climate tipping points could accelerate us toward the cliff. “I’m worried,” said Kennett, the pioneering paleoceanographer who discovered the PETM and unearthed evidence of many other tumultuous periods in Earth’s history. “Are you kidding? As far as I’m concerned, global warming is the major issue of our time.”
During the PETM, mammals, newly ascendant after the dinosaurs’ downfall, actually thrived. Their northward march led them to land bridges that allowed them to fan out across the globe, filling ecological niches and spreading south again as the planet reabsorbed the excess CO2 in the sky and cooled over 200,000 years. However, their story is hardly one we can hope to emulate. One difference, scientists say, is that Earth was much warmer then to begin with, so there were no ice caps to melt and accelerate the warming and sea-level rise.
“The other big difference,” said the climatologist Gavin Schmidt, director of the Goddard Institute, “is, we’re here, and we’re adapted to the climate we have. We built our cities all the way around the coasts; we’ve built our agricultural systems expecting the rain to be where it is and the dry areas to be where they are.” And national borders are where they are. “We’re not prepared for those things to shift,” he said.
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
YouTube CEO Susan Wojcicki defended her company’s efforts to keep violent content off the video platform at the sixth annual Lesbians Who Tech Summit Friday in San Francisco. Wojcicki was interviewed by New York Times columnist Kara Swisher, who took the YouTube leader to task for the platform’s failure to keep dangerous content away from kids. Last week reports emerged that scenes describing how to commit suicide were spliced into YouTube videos aimed at children, only the latest example in a long list of troublesome content plaguing the platform.
“We take kids’ safety incredibly seriously, and I would say that the last two years have really been focused on the responsibility of our platforms,” said Wojcicki. “I’m a mom, I have five kids from 4 to 19,” she explained to the crowd filling the Castro Theatre, one of San Francisco’s oldest movie houses. “I understand kids, and as a parent I really want to do the right thing.”
Following the latest controversy, YouTube again changed its policies regarding content that features children, eliminating comments on videos featuring young minors, or older minors engaged in risky behavior. Wojcicki said some creators may be upset that their videos will no longer allow comments. “This change takes away that ability from people who are innocent,” she said. “But this is a decision we made because we want to prioritize children’s safety.” Earlier this year, YouTube said it would change its algorithm to recommend less content that might be harmful.
About 500 hours of video is uploaded to YouTube every minute. In the third quarter of 2018, the company removed nearly 8 million videos with problematic content from the site, 75 percent of which were identified by machine-learning systems, Wojcicki said. Most of those videos didn’t have a single view. (A report covering the fourth quarter is due soon, she said.)
During Friday’s conference, Swisher lamented that her teenage son was able to land on neo-Nazi propaganda after a few clicks on YouTube: “I said, ‘I’m going to kill Susan Wojcicki.’ It feels like all of you tech companies built these beautiful cities, but you decided to not put in police, fire, garbage…” Do tech companies have a sense of the impact they’re having, Swisher asked, and do they need regulation to deal with it? “We use the analogy of a city too,” Wojcicki said. She said some of the problems of YouTube and other platforms stemmed from their rapid growth. “We were this smaller city, and everybody knew each other on the internet, and very quickly we grew to this major metropolitan city,” she said. “Google has committed to having 10,000 people dealing with controversial content. We have already made a huge difference, and we will continue.”
In keeping with the conference’s focus on inclusion—80 percent of Lesbians Who Tech speakers are queer women, 50 percent are women of color, and 15 percent are transgender or gender non-conforming—Swisher also grilled Wojcicki about her company’s diversity efforts. Wojcicki mentioned a 2017 story she’d written for Vanity Fair called “How to Break Up the Silicon Valley Boys’ Club.” “The first point I made was that it has to come from the CEO level,” Wojcicki said. “The CEO has to make it a priority; they have to say, I’m going to meet with the underrepresented groups, I’m going to focus on having a diverse management team.”
When she joined YouTube in 2014, Wojcicki said, the company’s management team was 15 percent women; today, it’s 30 percent women. “I’ve been really focused on diversity at YouTube and on bringing more leaders and women and people of color and underrepresented minorities.”
Last fall Google employees staged a walkout in response, in part, to a New York Times report that the company paid Android creator Andy Rubin $90 million to exit the company following accusations of sexual misconduct. Following the protest, Google eliminated a requirement for binding arbitration in cases of harassment and discrimination claims, pledged to end pay inequity, and promised to revamp the process employees use to report sexual misconduct. “I don’t want to say that everything is solved, but there were a lot of changes that were made quickly,” said Wojcicki.
As part of the protest, employees demanded that Google appoint an employee to its board. Pressed by Swisher, Wojcicki, who does not sit on the board of Google parent Alphabet but does serve on Salesforce’s board, wouldn’t say whether she supports this move, which Google has declined to make.
Lesbians Who Tech calls itself the largest LGBTQ professional event in the world; the organization holds conferences in San Francisco and New York, with smaller gatherings in 40 cities globally. About 6,000 people are attending this weekend’s gathering in San Francisco. In addition to Wojcicki, speakers on Friday included Senator Tammy Baldwin (D-Wisconsin), San Francisco Mayor London Breed, former Georgia gubernatorial candidate and rising Democratic Party star Stacey Abrams, and Emerson Collective founder Laurene Powell Jobs. The event also featured sessions on scaling venture-backed companies, cybersecurity best practices, building serverless applications, and imposter syndrome. The group offers a coding scholarship fund named for Edie Windsor, the late IBM computer programmer whose effort to force the US government to recognize her same-sex marriage led to a landmark 2013 Supreme Court decision that struck down the Defense of Marriage Act and paved the way for nationwide marriage equality.