It’s easy to imagine the conversation that led to this. In a boardroom overlooking Detroit, some Cadillac executives gathered around a walnut conference table, swiveling idly in plush leather chairs. “What do American car buyers want?” one says.
“So let’s build an electric SUV!” It’s the same back and forth that led Audi, Jaguar, Mercedes, BMW, and others to promise battery-powered kid haulers, some of which are already on the road. So at the North American International Auto Show in Detroit this week, Cadillac chief Steve Carlisle took a moment to show off what it’s working on. Well, it was more of a tease: General Motors’ luxury arm has released just a few renderings of the vehicle. Key details like the name, tech specs, and price will all come later.
The images we do have show an attractive, angular, upright vehicle that stretches some of Cadillac’s styling cues. There’s a giant trapezoid grille at the front, with a glowing logo. The headlights are thin horizontal slits; the daytime running lights are tall and vertical. No word on batteries or drivetrain, beyond the fact that it will be based on new architecture—a future “BEV3” platform that should allow Cadillac to build a range of front- or rear-drive vehicles on top of it. “The Cadillac portfolio will eventually benefit from a variety of body styles that can be spun off this architecture,” says a press release.
Don’t expect to plug-and-play with this ride in the next few years. Cadillac put much more energy into promoting the 2020 XT6, a very conventional, very large, very real, very gas-powered SUV. Still, it’s a sign of what’s to come, and not just for Cadillac. In October 2017, General Motors declared that it was working its way toward an all-electric future. This month, it announced that Cadillac will lead that shift. That role used to belong to Chevy, whose Volt and Bolt were meant to bring the appeal of driving on batteries to the masses. GM is killing the plug-in hybrid Volt along with the rest of its passenger cars. And although the Bolt, a pure EV with over 200 miles of range, hit the market before Tesla’s Model 3 and costs less, it hasn’t generated the level of excitement or sales that Elon Musk’s sedan has.
What’s surprising is that this concept is Cadillac’s first fully electric car. GM has been in the EV game since the 1960s, with the Electrovair, the 1977 Electrovette, and most famously in the 1990s with the EV1, the first modern, mass-produced electric vehicle from a major manufacturer. After it reclaimed every EV1 from its lessees—and famously crushed them—the company spent a good decade winning back the respect of EV lovers with the Volt and Bolt. But as much as those “halo” cars might have done for GM’s reputation, they didn’t seem to do much for the bottom line.
Taking electrics upmarket makes sense. Although the costs of building an EV are falling, selling a car with a built-in profit margin, rather than hoping it eventually generates enough sales to make money, is a logical long-term strategy for shifting away from internal combustion engines. As Tesla has shown, wealthy buyers are prepared to pay a premium to be green, as long as they can still be sporty and techy. Add in the ever-popular SUV body style, and blockbuster sales are a no-brainer—or so many car companies hope. Jaguar is already selling its I-Pace crossover. The Audi E-tron SUV and Mercedes EQC are coming soon. As a bonus, Cadillac is quite popular in China, a critical market where regulators are demanding cars come with charge cords.
Now it’s just a matter of waiting to see how Cadillac takes what looks like a quickly drawn sketch and turns it into an image of the future.
Instead of figuring out how many Pacific hake fishermen can sustainably catch, as his job demands, scientist Ian Taylor is at home with his four-month-old daughter, biding his time through the partial government shutdown.
Taylor’s task is to assess the size and age of hake and other commercially harvested fish species in the productive grounds from Baja California to the Gulf of Alaska. These stock assessments are then used by federal managers to approve permits to West Coast fishing boats. Without Taylor’s science report, the season could be delayed—and the impact of the shutdown could spread beyond the 800,000 government employees now on furlough to include boat captains, deck hands, and others working in the seafood industry who won’t be able to head to sea on schedule. That’s what happened to Alaska crabbers during the last big federal shutdown in 2013.
Taylor, an operations research analyst at the National Oceanic and Atmospheric Administration’s Northwest Fisheries Science Center, says he’s frustrated that he can’t do his job. He can’t even make phone calls or use email. “It feels like a terrible situation,” he says. “Important work is not getting done.”
President Trump says he will not sign legislation to operate large chunks of the federal government unless Democrats agree to approve more than $5.7 billion for a wall along the Mexican border. Trump said Monday he plans to visit the border Thursday, hinting that any compromise will likely not happen before then.
Some federal science agencies are open, such as the National Institutes of Health and the Department of Energy, since their appropriations bills were already signed by Trump. Others, such as NASA, are continuing to operate key programs such as the International Space Station, although 95 percent of its 15,000 workers were sent home on Dec. 22.
The shutdown has led to a hodgepodge of federal science-based activity across the country. A SpaceX Falcon 9 rocket is sitting on a launch pad at Cape Canaveral ready for a planned launch on Jan. 17, but without NASA personnel to oversee testing, that liftoff will be delayed. Crews that fly over the Atlantic to check on endangered Atlantic right whales and send those positions to commercial ships are still working, but they aren’t being paid.
Weather forecasters are working during the shutdown, but hundreds of scientists from NOAA and the National Weather Service have been banned from attending the annual American Meteorological Society meeting this week in Phoenix. Antonio Busalacchi was supposed to be on a panel with colleagues from federal weather agencies, but they didn’t show up. “Science is a community and this is where people come together to discuss common problems,” says Busalacchi, president of the University Corporation for Atmospheric Research, a consortium of academic institutions that conduct and promote the study of earth sciences. “Last month, we were talking about the workforce in the future, but now we can’t discuss how best to go forward.”
Busalacchi is worried that he may have to shut down a meteorological research program UCAR runs called COSMIC that uses a fleet of existing GPS satellites to measure the atmosphere’s temperature and humidity. The data is then sent to federal NWS forecasters who use it to make both short-term weather and long-term climate predictions. UCAR runs the program with funds from the National Science Foundation, which isn’t giving out grant money right now, as well as help from NOAA and NASA.
“We may be running the risk to shut this program down because we are not getting the funds from the government,” he says. If COSMIC gets shut down, data analysis would be paused, potentially weakening some forecasts. But equally frustrating is the fact that Busalacchi is left in the dark on how to handle the program. With no information coming from his federal partners, he doesn’t know whether to keep spending money to sustain the program, or pull the plug.
Reams of scientific data are still being collected remotely by federally operated satellites, automated river gauges or non-federal scientists, but the policies and permits that rely on this science are now in limbo. As a result, one legal expert worries that the shutdown could result in more air and water pollution being discharged by companies with permits that expire during the shutdown.
“None of the federal environmental laws are written in such a way that if the government is shut down, you can’t do anything,” says Kyla Bennett, senior attorney for the nonprofit group Public Employees for Environmental Responsibility, which advocates on behalf of federal workers, and a former EPA employee. Instead, the law implies that companies can proceed on their own. “It says, if you don’t hear anything, go ahead.”
The Environmental Protection Agency furloughed about 14,000 of its employees, leaving just 753 “essential” workers on the job. That might make it more difficult for the agency to meet legal deadlines later this year for safety assessments of about 40 chemicals, according to a news report in the journal Nature. The agency has already postponed at least one upcoming advisory committee meeting related to the work.
Federal science workers are making do. Leslie Rissler, an evolutionary biologist and program director at the NSF, tweeted last week that she had applied for unemployment benefits. “This is a ridiculous shutdown unnecessarily affecting thousands of federal employees and families. Wishing all of them, and this country, better days ahead.”
For his part, fisheries scientist Taylor is budgeting his savings and using his time wisely. “I’ve been watching Marie Kondo on Netflix,” he says from his home near Seattle. “We’ve been cleaning out our closets.”
Though sporadic hacker intrusions and phishing campaigns targeted political entities in the lead-up to November’s midterm elections, things seemed pretty quiet overall on the election-meddling front in the US. Certainly no leaks or theatrics rose to the level of Russia’s actions during the 2016 presidential election. But a belatedly revealed breach of the National Republican Congressional Committee shows just how bad the attack on the 2018 election really was.
As Politico first reported Tuesday, attackers compromised the email accounts of four top NRCC aides, surveilling their correspondence—totaling thousands of messages—for months. The NRCC discovered the intrusion in April, and has been investigating it since. The Committee kept the incident quiet, though, and didn’t even inform Republican House leaders. NRCC officials told Politico that the stolen data hasn’t surfaced, and that no breach-related extortion attempts have targeted the NRCC so far.
“The NRCC can confirm that it was the victim of a cyber intrusion by an unknown entity,” spokesperson Ian Prior wrote in a statement. Prior, a former Department of Justice public affairs officer who now works for the bipartisan strategy firm Mercury, has consulted NRCC on the incident. “The cybersecurity of the Committee’s data is paramount, and upon learning of the intrusion, the NRCC immediately launched an internal investigation and notified the FBI, which is now investigating the matter.” Prior said the NRCC is declining to answer additional questions because of the ongoing investigation.
A few election-related hacking incidents were publicly known leading up to the midterms, including some attempted spearphishing attacks against campaigns. But from the outside, those attempts appeared largely unsuccessful, seemingly because political organizations shored up their digital security after the wakeup call of 2016. But the major NRCC breach is a reminder that what’s publicly known doesn’t represent the full picture.
“Of course these types of activity were continuing,” says Dave Aitel, a former NSA analyst who is now chief security technology officer at the secure infrastructure firm Cyxtera. “I was always confused when people said they were not.”
More revelations could be in the offing. Government and intelligence officials are still analyzing the events of the midterm season, and new tidbits continue to emerge. For example, defense secretary James Mattis told the Reagan National Defense Forum in California on Saturday that Russian President Vladimir Putin “tried again to muck around in our elections this last month, and we are seeing a continued effort along those lines.”
It is still unclear who was behind the NRCC breach, or what they were after. The details available so far suggest at least a moderately sophisticated hacking effort, since attackers simultaneously compromised four top accounts and were able to lurk on the network for months, says Julian Sanchez, a national-security-focused research fellow at the Cato Institute. But Sanchez also cautions that it’s too early to draw conclusions about what the attackers were after, or how difficult it was to persist.
“If we’re assuming a government actor, this is the kind of thing you can do on general principle just to see if it might be useful,” Sanchez says. “So I don’t know that there needs to be a plan this was specifically in service of. For a government adversary there might be a rare case where there’s enough value in embarrassing or exposing a target in some way that it’s worth it to publish stolen information, but much more often the value is greater when it’s kept secret.”
Though the NRCC breach may turn out to be anything from a random criminal compromise to a calculated nation-state espionage attack, the stolen data could be a ticking time bomb in such a charged political climate. The release of stolen emails from Hillary Clinton’s 2016 campaign chair John Podesta had a devastating impact on her presidential run. And while the focus during the 2016 election season was leaked emails from prominent Democrats, government officials later confirmed that Russian election meddling compromised GOP accounts and old Republican National Committee emails as well.
Depending on what the hackers found in their NRCC trove, and what their intentions are, this new wave of compromised data could be used to similar effect at some point. Or the trove may never see the light of day, instead quietly bolstering some currently unknown nation state’s intelligence gathering apparatus.
In either case, the long tail of the 2018 election season just got a little bit longer.
Alexander Smith’s work on the Goldfeld conjecture reveals fundamental characteristics of elliptic curves.
Elliptic curves seem to admit infinite variety, but they really only come in two flavors. That is the upshot of a new proof from a graduate student at Harvard University.
Elliptic curves may sound exotic, but they’re unspectacular geometric objects, as ordinary as lines, parabolas or ellipses. In a paper first posted online last year, Alexander Smith proved a four-decade-old conjecture that concerns a fundamental trait of elliptic curves called “rank.” Smith proved that, within a specific family of curves, and with one qualification, half of all curves have rank 0 and half have rank 1.
The result establishes baseline characteristics of objects that have intrigued mathematicians for centuries, and that have increased in importance in recent decades.
“We’ve been thinking about this for over 1,000 years, and now we have some probabilistic sense about [elliptic curves]. That’s super important,” said Shou-Wu Zhang, a mathematician at Princeton University who advised Smith at the outset of the work, when Smith was an undergraduate at Princeton.
Elliptic curves are equations with a variable raised to the third power, like y² = x³ + 1. They’ve figured in many important mathematical proofs in recent decades, including Andrew Wiles’ landmark 1994 proof of Fermat’s Last Theorem. They’re powerful in part because they’re the most complicated type of polynomial equation about which mathematicians have some systematic understanding.
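To make the equation concrete, here is a short brute-force search (purely illustrative, and nothing like the machinery used in the actual research) that finds the small integer points on the example curve y² = x³ + 1:

```python
# Brute-force search for small integer points on the example curve
# y^2 = x^3 + 1. Purely illustrative -- serious work on elliptic
# curves uses much more sophisticated number theory.
from math import isqrt

def integer_points(limit):
    """Return all integer points (x, y) with |x| <= limit on y^2 = x^3 + 1."""
    points = []
    for x in range(-limit, limit + 1):
        rhs = x ** 3 + 1
        if rhs < 0:
            continue  # no real solution for y, let alone an integer one
        y = isqrt(rhs)
        if y * y == rhs:
            points.append((x, y))
            if y:                      # avoid listing (x, 0) twice
                points.append((x, -y))
    return sorted(points)

print(integer_points(100))  # [(-1, 0), (0, -1), (0, 1), (2, -3), (2, 3)]
```

Those five points turn out to be the curve's only integer solutions, a classical fact; the interesting questions concern rational points, of which a curve may have infinitely many.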
“Elliptic curves are sort of the most interesting case,” said Dorian Goldfeld, a mathematician at Columbia University who proposed the eponymous Goldfeld conjecture in 1979.
The Goldfeld conjecture makes a prediction about the ranks of elliptic curves. As Quanta explained in a recent article, “Without a Proof, Mathematicians Wonder How Much Evidence Is Enough,” rank is a measure of the complexity of the curve’s set of rational solutions (solutions that can be expressed as fractions). While there is no proven limit to how high the rank of a curve can be — the highest-ranked curve mathematicians have found has a rank of 28 — the Goldfeld conjecture predicts that, overall, half of all elliptic curves have rank 0 and half have rank 1.
You may be wondering how it’s possible for there to be elliptic curves with rank higher than 1 if half of all elliptic curves have a rank of 0 and half have a rank of 1. After all, if you have a box full of pingpong balls, and you know that exactly half are black and half are white, you know that there can’t be any red ones hiding in there.
Even more perplexing, there aren’t just a few elliptic curves with a rank of 2 or higher, but an infinite number of them. The apparent absurdity is a result of the slippery statistics involved in infinity. Even though there are lots of curves of rank 2 or higher, there are so many more curves of ranks 0 and 1 that curves of rank 2 or higher are statistically insignificant. If you were to put all the curves in a box and pick one out at random, your odds of picking a curve with a rank higher than 1 are officially zero.
What does it mean for a curve to have a rank of 0? These curves have a finite number of rational points — in fact, no more than 16, as proved by Barry Mazur in the 1970s.
There are reasons to think lots of elliptic curves would have rank 0. If you picture a curve carving its way through the plane, most of the points on the curve are not rational. Those points can’t be expressed as fractions, no matter how elaborate. The chances that a randomly drawn curve would intersect lots of rational points — infinitely many — are low.
“My philosophy about this is that if you look at a random elliptic curve, it has a good reason to be rank 0. It doesn’t want to have rational points,” Smith said.
The ubiquity of rank 1 curves has a similar explanation. Rank 1 curves have infinitely many rational points, but all those points line up in a neat way, so that you can connect the dots between them using a relatively straightforward procedure.
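That “connect the dots” procedure is the curve’s chord-and-tangent group law. As a hedged sketch (the curve y² = x³ − 2 and its point (3, 5) are standard textbook choices, not examples from Smith’s paper), doubling a rational point with exact fraction arithmetic produces new rational points on the same curve:

```python
from fractions import Fraction

def double(P, a):
    """Tangent-line doubling on y^2 = x^3 + a*x + b.

    On a rank-1 curve, repeatedly doubling a generator walks through
    infinitely many distinct rational points.
    """
    x, y = P
    lam = (3 * x * x + a) / (2 * y)   # slope of the tangent line at P
    x3 = lam * lam - 2 * x            # intersection with the curve, reflected
    y3 = lam * (x - x3) - y
    return (x3, y3)

a, b = 0, -2                          # the curve y^2 = x^3 - 2
P = (Fraction(3), Fraction(5))        # a rational point: 5^2 = 3^3 - 2
Q = double(P, a)
print(Q)                              # (Fraction(129, 100), Fraction(-383, 1000))
assert Q[1] ** 2 == Q[0] ** 3 + a * Q[0] + b   # Q is still on the curve
```

Notice how quickly the fractions grow: each doubling produces a “new dot” with much larger numerators and denominators, which is why exact rational arithmetic is essential here.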
Curves of rank 2 or higher have more complicated sets of rational points. These need to contain multiple infinite subsets of rational points that don’t connect with one another.
“What’s the chance of having two independent points?” Goldfeld said. “It seems to be very hard. My conjecture says it should happen rarely.”
When Goldfeld first made his conjecture, most mathematicians thought it was false. They pointed to computational experiments that suggested that curves with rank 2 or higher occur far more often than 0 percent of the time.
Goldfeld replied that they just weren’t casting their nets wide enough. He pointed out that if you only studied the first 10 whole numbers, you’d come to the wildly inaccurate estimate that 40 percent of all whole numbers are prime. In a similar fashion, these computational experiments were extrapolating from small subsets of elliptic curves to infinitely large families of curves.
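Goldfeld’s warning is easy to reproduce numerically. This quick sketch illustrates the prime-counting analogy only, not any elliptic-curve computation: the density of primes among the first n integers starts at 40 percent and keeps falling as n grows:

```python
# Prime density among the integers 1..n, illustrating Goldfeld's point:
# small samples wildly overestimate how common primes are.
def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def prime_density(n):
    return sum(is_prime(k) for k in range(1, n + 1)) / n

for n in (10, 100, 10_000):
    print(n, prime_density(n))   # 0.4, then 0.25, then 0.1229
```

The prime number theorem says this density tends to zero, even though there are infinitely many primes; the same kind of statistics lets curves of rank 2 or higher be infinite in number yet make up 0 percent of all elliptic curves.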
“I said, ‘Look at the primes!’ That was my response. You have to go much, much higher because at the beginning there could be a lot of funny things appearing,” Goldfeld said.
Alexander Smith has shown that Goldfeld was right. In his new paper, he proves that 100 percent of elliptic curves have rank 0 or 1. He also proves that those curves are split equally between the two ranks, though this step comes with a caveat. His proof of the 50-50 split is contingent on the Birch and Swinnerton-Dyer (BSD) conjecture being true. The BSD conjecture is one of the most famous open problems in math. Mathematicians are far from proving it, but they generally believe it’s true.
Even with that caveat, Smith’s result is being received as momentous. Mathematicians say it indicates a way to fully prove the Goldfeld conjecture without having to tackle the daunting BSD conjecture. It does so by providing a new understanding of the underlying nature of elliptic curves.
“Alex Smith’s work is extremely exciting and I think still yet to be fully studied and appreciated,” said Melanie Wood, a mathematician at the University of Wisconsin, Madison. “It is a very important and groundbreaking thing to have been able to do.”
Robert Durst. Marjorie Diehl-Armstrong. Adnan Syed. Michael Peterson. Brendan Dassey. Steven Avery. Any self-professed true crime fan worth their weight in luminol is undoubtedly familiar with not just these names but with the minute details of the crimes of which those individuals have been accused (wrongly or otherwise). While the genre is not new—its roots in pop culture can be traced to writer Edmund Pearson’s 1924 Lizzie Borden book Studies in Murder—there’s no denying that true crime is having a major moment, one fueled by streaming services and all-crime-all-the-time networks that feed an ever-growing audience hungry for whodunit docs.
Natural selection has been a cornerstone of evolutionary theory ever since Darwin. Yet mathematical models of natural selection have often been dogged by an awkward problem that seemed to make evolution harder than biologists understood it to be. In a new paper appearing in Communications Biology, a multidisciplinary team of scientists in Austria and the United States identify a possible way out of the conundrum. Their answer still needs to be checked against what happens in nature, but in any case, it could be useful for biotechnology researchers and others who need to promote natural selection under artificial circumstances.
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
A central premise of the theory of evolution through natural selection is that when beneficial mutations appear, they should spread throughout a population. But this outcome isn’t guaranteed. Random accidents, illnesses and other misfortunes can easily erase mutations when they are new and rare—and it’s statistically likely that they often will.
Mutations should theoretically face better odds of survival in some situations than others, however. Picture a huge population of organisms all living together on one island, for example. A mutation might get permanently lost in the crowd unless its advantage is great. Yet if a few individuals regularly migrate to their own islands to breed, then a modestly helpful mutation might have a better chance of establishing a foothold and spreading back to the main population. (Then again, it might not—the outcome would depend entirely on the precise details of the scenario.) Biologists study these population structures to understand how genes flow.
Martin Nowak, who is today the director of Harvard University’s Program for Evolutionary Dynamics, began thinking about how population structures could potentially affect evolutionary outcomes in 2003 while studying the behavior of cancer. “It was clear to me then that cancer is an evolutionary process that the organism does not want,” he said: After malignant cells arise through mutation, competition among those cells selects for the ones best able to run rampant through the body. “I asked myself, how would you get rid of evolution?” Attacking mutations was one solution, Nowak realized, but attacking selection was another.
The problem was that biologists had only loose ideas about how specific population structures might affect natural selection. To find more generalizable strategies, Nowak turned to graph theory.
Mathematical graphs are structures that represent the dynamic relations among sets of items: Individual items sit at the vertices of the structure; the lines, or edges, between every pair of items describe their connection. In evolutionary graph theory, individual organisms occupy every vertex. Over time, an individual has some probability of spawning an identical offspring, which can replace an individual on a neighboring vertex, but it also faces its own risks of being replaced by some individual from the next generation. Those probabilities are wired into the structure as “weights” and directions in the lines between the vertices. The right patterns of weighted connections can stand in for behaviors in living populations: For example, connections that make it more likely that lineages will become isolated from the rest of a population can represent migrations.
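A minimal simulation sketch of this setup follows. It is a generic birth-death Moran process; the population size, fitness value, and trial count are illustrative choices, not parameters from the papers. On a complete graph, which models a well-mixed population, the estimated chance that a single mutant of relative fitness r takes over should land near the classical value (1 − 1/r)/(1 − 1/rᴺ):

```python
import random

def fixation_fraction(neighbors, r, trials, seed=0):
    """Estimate the probability that one mutant (fitness r) takes over.

    neighbors: adjacency list of the population graph. At each step an
    individual is chosen to reproduce with probability proportional to
    fitness, and its offspring replaces a uniformly random neighbor
    (a birth-death Moran process).
    """
    rng = random.Random(seed)
    n = len(neighbors)
    fixed = 0
    for _ in range(trials):
        mutant = [False] * n
        mutant[rng.randrange(n)] = True   # one mutant at a random vertex
        count = 1
        while 0 < count < n:              # run until fixation or extinction
            weights = [r if m else 1.0 for m in mutant]
            parent = rng.choices(range(n), weights=weights)[0]
            child = rng.choice(neighbors[parent])
            if mutant[child] != mutant[parent]:
                count += 1 if mutant[parent] else -1
                mutant[child] = mutant[parent]
        fixed += count == n
    return fixed / trials

N, r = 10, 2.0
complete = [[j for j in range(N) if j != i] for i in range(N)]
est = fixation_fraction(complete, r, trials=2000)
theory = (1 - 1 / r) / (1 - r ** -N)
print(round(est, 3), round(theory, 3))   # estimate should be close to ~0.5
```

Swapping in a different adjacency list (and, in the weighted generalization, different replacement probabilities) is exactly how structures like the Star change the fixation probability relative to this well-mixed baseline.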
With graphs, Nowak could depict diverse population structures as mathematical abstractions. He could then rigorously explore how mutants with extra fitness would fare in each scenario.
Those efforts led to a 2005 Nature paper in which Nowak and two colleagues showed how strongly certain population structures can suppress or enhance the effects of natural selection. In populations that have “burst” and “path” structures, for example, individuals can never occupy positions in the graph that their ancestors held. Those structures stymie evolution by denying advantageous mutations any chance to take over a population.
The opposite is true, however, for a structure dubbed the Star, in which fitter mutations spread more effectively. Because the Star magnifies the effects of natural selection, the scientists labeled it an amplifier. Even better is the Superstar, which they called a strong amplifier because it ensures that mutants who are even slightly more fit will eventually replace all other individuals.
“A strong amplifier is an amazing structure because it guarantees the success of the advantageous mutation, no matter how small the advantage is,” Nowak said. “Everything about evolution is probabilistic, and here we somehow turn probability into near certainty.”
Yet that certainty came with a catch. Most potential population structures didn’t seem theoretically capable of being strong amplifiers. A few others looked like possibilities, but they seemed contrived rather than realistic, and they were so complex that their status as amplifiers couldn’t be proved. (A formal proof that the Superstar works came out just two years ago from a group at the University of Oxford, and Nowak described it as an intricate paper “with about a hundred pages of dense mathematics.”) It was hard to see how population structure could boost natural selection among real living creatures except under highly unusual circumstances.
Not quite a decade ago, however, one of Nowak’s collaborators, Krishnendu Chatterjee, a computer science researcher at the Institute of Science and Technology Austria, also became interested in this problem. He and his group had already spent years developing an understanding of similar problems involving graph theory and probabilities, and they thought the intuitions and insights they had developed might prove useful on this evolution problem.
The key to constructing amplifiers, Chatterjee and his students Andreas Pavlogiannis (now at the École Polytechnique Fédérale de Lausanne, EPFL) and Josef Tkadlec learned, was in the weights of the connections within the graphs. They realized that all potential strong amplifiers would have certain features in common, such as hubs and self-loops. They then showed that by assigning the right weights to the connections, they could create strong amplifiers inside even simple population structures. “It came as a very big surprise to show that almost any population structure can become a strong amplifier by adjusting weights,” Nowak said.
All told, the recent and previous papers make a case for population structure as a meaningful force in evolution. Any populations that act like the “burst” will be evolutionary dead ends—advantageous mutations that appear within them will never take off, no matter what the details of the interrelationships might be. Other population structures may not automatically enhance natural selection, but most of them at least have the potential to amplify advantageous mutations and give evolution a helping hand.
The scientists’ findings come with some important caveats. One is that the population models in these studies apply only to asexual organisms like bacteria and other microbes. Taking into account the wholesale reshuffling of genes that occurs in sexual reproduction would massively complicate the models, Nowak and Chatterjee said, and to their knowledge, no one has yet seriously taken on that challenge. The consequences of allowing the modeled populations to grow or shrink also need to be determined.
Another issue is that although strong amplifiers guarantee that useful mutations will spread inexorably through a population, they don’t ensure it will happen quickly, Nowak said. It’s entirely possible that some populations might benefit from structures in which natural selection is less certain but more swift.
That’s an important consideration, agreed Marcus Frean, an associate professor at Victoria University of Wellington in New Zealand. Work that he and his colleagues presented in 2013 shows that the rate of evolution can slow down substantially even in population structures that amplify natural selection. The certainty that a mutation will take over a population and the speed with which it does might often oppose each other. “The thing we really care about—the rate of evolution—involves both,” Frean explained by email.
Nevertheless, Nowak, Chatterjee and their colleagues suggest in their paper that their algorithm for constructing strong amplifiers might still be useful to researchers working with cell cultures who want to foster the emergence of desirable mutants or to screen for faster-growing strains of cells. Microfluidic growth systems could be adjusted to produce any desired population structure by controlling how cells mix and migrate.
Perhaps a more intriguing application of their work, however, might be to identify where these strong amplifiers can already be found in nature. Nowak and his colleagues suggest that, for instance, immunologists could check whether populations of immune cells in the spleen and lymph nodes show these structural features, which might help speed up how quickly the body fights back against infections. If they do, it could prove that natural selection sometimes favors itself as a good solution to life’s challenges.
The Seattle City Council voted 9-0 last month to approve an annual $275-per-employee tax on big employers like Amazon. The tax was expected to raise about $47 million a year for services for the homeless and construction of affordable housing. But Tuesday, less than a month after passing the tax, the council voted 7-2 to repeal it.
The victory for Amazon comes amid growing concern about the power and influence of big tech companies, and their responsibility for the spread of fake news, sloppy handling of user data, rising income inequality, and other ills. But a look at this week’s headlines shows getting tough on tech is harder than many expected. The most tech-friendly candidate in San Francisco’s mayoral race appears to have won. The Federal Communications Commission’s net neutrality rules are no more, freeing broadband providers to favor certain content over others, and to charge companies extra fees for “fast lane” access. And AT&T got the go-ahead to buy Time Warner, overcoming a federal antitrust challenge.
Seattle’s corporate tax looked at first like an example of elected officials standing up to a tech giant. But the version that passed last month was a compromise. Councilmembers originally proposed a $500-per-employee tax that would have eventually transitioned into a payroll tax on companies making at least $20 million in revenue. In response, Amazon announced it was halting construction on a new office building and considering subleasing space in another instead of hiring more staff in Seattle.
Seattle Mayor Jenny Durkan negotiated a compromise, cutting the tax nearly in half. Amazon announced it would resume construction after the measure passed, but company spokesman Drew Herdener warned the company still wasn’t happy and suggested it might hire fewer people in Seattle. Amazon donated $25,000 to a campaign for a ballot measure to overturn the tax in November.
Councilmember Lisa Herbold told the Seattle Times she voted to reverse the tax to avoid a months-long battle over a ballot measure, and because of recent polling on the issue. A KIRO-TV poll found that 54 percent of respondents opposed the corporate tax and only 38 percent favored it.
Meanwhile in San Francisco, mayoral candidates Mark Leno and Jane Kim teamed up to oppose London Breed, perceived as the tech industry’s favored candidate. Although Leno and Kim didn’t explicitly target Breed over tech-industry support, the candidates urged voters to “stand against special-interest Super PAC spending.”
On the other hand, the end of the FCC’s net neutrality protections might seem like a setback to big tech. Giants such as Google and Facebook generally favored the rules, largely through the Internet Association trade group. They’re also helping fund the legal fight against the FCC’s decision. But the biggest names in tech were relatively quiet on the subject of net neutrality this year. That could be in part because net neutrality doesn’t affect their business models as much as it did in the past.
In fact, the end of net neutrality could help the incumbents more than it hurts them, if upstarts are forced to pay extra or negotiate special deals in order to offer content at the same speeds that, say, Netflix and YouTube do. That makes the end of net neutrality protections more of a victory for big telecom companies than a problem for tech giants.
Telecom won again with AT&T’s court victory in the Time Warner case. The judge’s refusal to apply antitrust law to stop the deal will likely spur another round of telecom and media consolidation. Wednesday, Comcast increased its offer for key 21st Century Fox assets, to fend off a rival bid from Disney. The ruling could also forestall antitrust scrutiny of Amazon, Google, and Facebook as they grow ever larger.
Some of this week’s setbacks for tech opponents may have to do with the public not sharing the outrage expressed by pundits and politicians. Even as Tucker Carlson and Senator Elizabeth Warren alike railed against tech last year, polls found that consumers still viewed major tech brands very favorably.
Other research shows that consumers are less happy with telecom companies. The latest American Customer Satisfaction Index found cable companies and internet service providers tied for dead last in customer satisfaction. Wireless carriers were also near the bottom. It should come as little surprise then that other polls find that voters across the political spectrum favor net neutrality protections.
The fact that the government now struggles to rein in some of the least liked companies in the world doesn’t bode well for efforts to tackle companies that consumers love.