For over a century and a half, Eta Carinae has been one of the most luminous – and most enigmatic – stars of the southern Milky Way.
Part of its nature was revealed in 1843, when, in a giant eruption, it ejected a nebula called the Homunculus (“little man”). The event made Eta Carinae the second-brightest star in the sky after Sirius, visible even in broad daylight, and the nebula later made it easy to distinguish from other, similarly unstable stars called Luminous Blue Variables (LBVs), whose own nebulae are not so clearly visible.
Aside from making Eta Carinae one of the most beautiful and frequently photographed objects in the night sky, the giant Homunculus contains information about its parent star, ranging from the energy of its expansion to its bipolar outflow and chemical composition.
In as little as a decade from now, however, we will no longer be able to see the nebula clearly.
A recent study indicates that the Homunculus will be obscured by the increasing brightness of Eta Carinae itself. So rapidly is the star brightening, in fact, that by 2036 it will be 10 times brighter than its nebula, which will in the end make Eta Carinae indistinguishable from other LBVs.
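For readers who think in astronomical magnitudes: a factor-of-10 brightness ratio corresponds to exactly 2.5 magnitudes under the standard Pogson relation, Δm = 2.5 log₁₀(flux ratio). A minimal sketch of that arithmetic (the helper function is our own, not from the study):

```python
import math

def magnitude_difference(flux_ratio):
    """Convert a brightness (flux) ratio into a difference in
    astronomical magnitudes: delta_m = 2.5 * log10(flux_ratio)."""
    return 2.5 * math.log10(flux_ratio)

# A star 10 times brighter than its nebula outshines it by 2.5 magnitudes.
print(magnitude_difference(10))  # 2.5
```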
But there’s an upside.
A team of 17 researchers led by Brazilian astronomer Augusto Damineli, with input from Université de Montréal’s Anthony Moffat, believes that the increasing brightness of Eta Carinae is not intrinsic to the star itself, as is commonly assumed. Instead, it is likely caused by the dissipation of a dust cloud positioned exactly in front of the star as seen from Earth.
This cloud, the researchers posit in a new study in the Monthly Notices of the Royal Astronomical Society, completely shrouds the star and its winds, blotting out much of its light emanating towards Earth. The surrounding Homunculus, by contrast, can be seen directly because it is 200 times larger than the obscuring cloudlet and its brightness is thus almost unaffected.
In 2032 (with an uncertainty of plus or minus four years), the dusty cloud will have dissipated, so that the brightness of the central star will no longer increase and the Homunculus will be lost in its glare, the research team believes.
And that will provide an opportunity for deeper study of Eta Carinae itself, perhaps even showing that it is not one star but two.
“There have been a number of recent revelations about this unique object in the sky, but this is among the most important,” said Moffat. “It may finally allow us to probe the true nature of the central engine and show that it is a close binary system of two very massive interacting stars.”
There are plenty of ways you can improve your photography. You should get inspired, learn new stuff, go out and shoot, make mistakes and learn to correct them… But you know what else you need to have if you want to become a better photographer? “Patience you must have, my young Padawan,” as Yoda would say. In this video, Pierre T. Lambert gives you five ways patience will help you raise your photography to a higher level.
1. Patience to wait for the right light
Sometimes, all you need is to wait an extra few minutes to get just the right light and make your photos stunning. This is especially true around sunrise and sunset, when the light and colors change minute by minute. So, don’t rush in. Don’t just snap a few photos and leave. Spend some time at the location, shoot at different times, and make the best of the light you get. I believe landscape photographers will find this scenario very familiar.
2. Patience to wait for the right subject
Patience can be crucial for a good street photo. Sometimes, you spot the scene and the person you want to capture, but your subject simply isn’t positioned well. Other times, you imagine the shot you want to take, but you need to wait for someone to walk into your frame exactly where you want them. In both scenarios, you’ll have to wait. Don’t just snap random shots. Instead, be patient, wait for the right moment, and then press the shutter.
3. Patience to wait for the right weather
Just like #1, this one will also be familiar to landscape photographers. Sometimes you’ll arrive at the location only to see that the weather isn’t what you need for the best shot. You may need to wait a few hours, maybe all day. But the key is to be patient. In most cases, it pays off.
4. Patience to polish different skills
There will be times when you won’t be able to shoot what you usually shoot. For example, if you’re a travel photographer, there will be times when you’ll stay in your hometown. If you shoot landscapes, there will be times when you’ll be stuck in the city. You get the gist.
In these situations, Pierre advises you to take advantage of the situation to polish your other skills. Make the best out of the current situation, and use what you learn the next time you shoot what you normally do.
5. Patience to improve and succeed
Finally, patience is the key to learning any new skill, so you should have patience when learning anything about photography. If you’re a newbie, and especially if you’re anything like me and really impatient, it may all seem overwhelming and you may want to give it all up. Don’t! Be patient and go step by step. Photography is a huge field and there’s always something to learn, no matter how skillful you are. So, take your time to learn and be patient. Learning is a beautiful, life-long journey; do your best to enjoy it!
Last Sunday, as much of the country tuned into the Super Bowl, SpaceX CEO Elon Musk and a crew of engineers were gathered in McGregor, Texas, the small city where the company maintains a rocket test site. For a few seconds in the early evening, the sound of a new engine roared across the flatlands. “First firing of Starship Raptor flight engine!” Musk tweeted along with video footage of the test fire.
The engine will power SpaceX’s upcoming heavy-lift launch system, consisting of two components: a large rocket dubbed the Super Heavy and a crew transporter called Starship. First introduced by Musk in 2016 at a meeting of the International Astronautical Congress in Guadalajara, Mexico, the Starship transporter is designed to carry as many as 100 people to the moon and Mars. In true SpaceX fashion, both the rocket and the transporter will be reusable: able to launch, land, and repeat many times over.
SpaceX tends to be private about its affairs, but not much gets past the company’s biggest enthusiasts. In January, eagle-eyed observers near the company’s Texas facilities spotted the appearance of a silver spaceship on the otherwise flat landscape, which Elon Musk confirmed to be SpaceX’s Starship. Unlike the iconic black-and-white paint scheme of the Falcon series of rockets, the Starship transporter sports a shiny, stainless steel skin that evokes a vintage sci-fi vibe. Musk says the vehicle is a prototype version of the craft that will one day ferry humans.
This prototype is now preparing to take part in a series of short flights called “hop” tests. According to FCC filings, it will conduct both low- and high-altitude flights that could climb as high as 16,400 feet. The transporter that will conduct those flights will be powered by three engines identical to the one test-fired on Sunday. In December, Musk hinted that hop tests would begin in early spring. But in January, gusts of up to 50 miles per hour knocked over the prototype, breaking the mooring blocks that secure the Starship to the ground, Musk reported on Twitter. The needed repairs might push out the timing of the tests.
The Starship system is the latest in SpaceX’s parade of increasingly large rockets. A year ago, SpaceX launched and landed its Falcon Heavy rocket for the first time, generating 5 million pounds of thrust from the rocket’s 27 engines. Tens of thousands of spectators watched as two of its three boosters landed in perfect unison on their designated landing zone. (The rocket’s center core failed to land on one of the company’s two drone ships.)
That was the only flight of the Falcon Heavy so far, though SpaceX says the next launch is estimated to take off no earlier than March. As the most powerful launch vehicle on the market today, Falcon Heavy can deliver to low Earth orbit more than twice the payload capacity of its counterpart, the Falcon 9. But even it can’t help Musk ferry people to the moon or Mars, an ambition that he has echoed repeatedly since SpaceX’s inception. For that, Musk needs the Starship.
Sunday’s test was not the first time a Raptor fired up, but it does represent the first test of a “flight-ready” engine. Afterwards, SpaceX posted on Instagram that the engine had reached about 60 percent of its power—a milestone for the Starship program. Unlike the engines currently powering SpaceX’s Falcon 9 and Falcon Heavy, which use a mixture of kerosene and liquid oxygen, the Raptor is fueled by methane. (SpaceX’s competitor, Blue Origin, is also developing a methane-fueled engine called the BE-4.) Methane can be synthesized on Mars from the planet’s carbon-dioxide atmosphere and subsurface water ice, which could make refueling any rockets that land there relatively straightforward.
After its hop tests, Musk has said he will divulge more of the vehicle’s design details. Some of its specifications have changed from what he’d previously released. When the concept of an interplanetary transport system was first revealed, for example, he had said that the giant rocket would be constructed out of carbon-fiber composites. The debut of a metallic prototype shows that SpaceX pursued a different tack. Musk says that the stainless-steel alloy that makes up the Starship can withstand the searing temperatures experienced during the different phases of spaceflight. The Starship resembles the Atlas rockets of the early space program, and its metallic skin won’t need as much thermal shielding as other materials would. And areas that take the brunt of the heat during atmospheric entry will be actively cooled with residual liquid methane. The vehicle will also feature landing legs and windows so that passengers onboard can see out during flight.
Musk and SpaceX have a lot riding on this engine, as it will power both the Super Heavy rocket during launch and the Starship spacecraft in space. Last September, SpaceX announced that it had signed its first passenger to fly on the Starship transporter. Yusaku Maezawa and a gaggle of artists will embark on a weeklong journey to the moon. The mission is planned for 2023, but developing rockets costs money, and the feasibility of SpaceX’s Starship has been in question since its inception. So far, Maezawa’s trip is the only mission booked for this vessel, but as SpaceX moves through the design process and Musk reveals the rocket’s capabilities, more missions could come.
Even Musk knows that the notion of Starship is outlandish, referring to its development as “absolutely insane.” According to the CEO, the work on this project and the company’s space-based internet endeavor, dubbed Starlink, are what recently prompted SpaceX to restructure its workforce, laying off 10 percent of its staff. In explaining the SpaceX layoffs during last month’s earnings call for Tesla, where Musk is also CEO, he described them as a preemptive measure, since such massive projects have bankrupted other organizations.
To keep the project from spiraling out of hand, Musk says that SpaceX plans to build its Starship as quickly as possible. But first, SpaceX has to prove it can safely transport people—starting with a different, existing rocket, and an upcoming trip to the International Space Station.
Meanwhile, Roger Stone—himself indicted, in part, because of his alleged lies to Congress and witness tampering that encouraged his associates “to do a ‘Frank Pentangeli,’” a reference to a Godfather Part II character who lied to Congress—continues his bizarre post-indictment media road show.
A close reading of the Stone indictment shows the odd hole at the center of the Mueller investigation so far. It followed a now familiar pattern: Mueller’s court filing included voluminous detail, including insight into the internal decisionmaking process of Donald Trump’s presidential campaign—and yet the indictment stopped short of alleging that Stone was part of a larger conspiracy.
All told, according to a recent tally by The New York Times, “more than 100 in-person meetings, phone calls, text messages, emails and private messages on Twitter” took place between Trump associates and Russians during the campaign and transition. But while we’ve seen a lot of channels, Mueller’s court filings have thus far been nearly silent about what was said during those contacts, and why. In filings otherwise remarkable for their level of detail and knowledge, that silence stands out.
Of course, one possible explanation is that the content of the conversations was completely innocent—totally normal directions and innocent chitchat about “adoptions,” sanctions, potential business deals, and geopolitical diplomacy. That could explain why Mueller thus far has only charged individuals, including Michael Flynn, Michael Cohen, and Roger Stone, with lying about those contacts, not the underlying behavior.
Yet the evidence against such innocence seems clear too, in the form of consistent lies, omissions, and obfuscations about the numerous meetings, conversations, and contacts with Russians throughout the Trump campaign, transition, and presidency.
To take just two examples: Donald Trump lied extensively, for more than two years, about his dealings with Russia concerning the Trump Tower Moscow project, which suggests that he knew something about it was shady. If he’d really believed the project was on the up-and-up, it’s easy to imagine Trump as a candidate making a public to-do about the deal—arguing that he felt America’s relationship with Russia was off-track, and that as the world’s smartest businessman, he alone could set it right. Trump could have made the case on the campaign trail that he alone could make deals with Putin because he alone was making deals with Putin. Yet he didn’t make that argument, and remained entirely silent about the deal for years, even lying about his interest in Russia. Given how much Trump says, in all settings, all the time, his silences are just as conspicuous as Mueller’s.
And then there’s the continued controversy over Trump’s private conversations with Vladimir Putin at geopolitical gatherings, from Hamburg to Helsinki to Buenos Aires. Under normal circumstances and operations, US leaders meet with Russian leaders to advance geopolitical conversations, and then they “read out” those meetings to staff in order to execute the work and vision hashed out one-on-one. The entire point of those head-of-state conversations is to generate follow-up work for staff later—to come to agreements, to advance national interests, and to find common ground for action on areas of shared concern. And yet in city after city, President Trump has had suspicious conversations with Putin, where he goes out of his way to ensure that no American knows what to follow up on. In Hamburg he confiscated his translators’ notes. In Buenos Aires, he cut out American translators entirely.
If he’s truly advocating for the United States in these meetings, there’s no sign those conversations have translated into any action by White House or administration staff afterwards. Instead, quite the opposite. Trump has emerged from those conversations to spout Kremlin talking points, even, apparently, calling The New York Times from Air Force One on the way back from Hamburg to argue Putin’s point that he didn’t interfere with the 2016 election.
Mueller presumably has far more knowledge about the “why” and the “what” of the interactions between Trump’s orbit and Russia than he’s shared so far. The Stone indictment is the latest court filing to show two-way conversation, flowing from Trump to WikiLeaks or Trump to Moscow and back again, without ever making clear what, precisely, was flowing back or forth.
In fact, the one thing that remains clear is just how much Mueller knows: He’s uncovered “track changes” in individual Microsoft Word documents, he’s referenced what specific words Russian military intelligence officers Googled three years ago, and even what the hired trolls inside the Internet Research Agency wrote to family members. Long before the House Intelligence Committee today kicked over a few dozen transcripts, Mueller amassed some 290,000 documents from Michael Cohen, tons more from the Trump transition team, and what the White House says is 1.4 million documents it turned over voluntarily, among countless other files, documents, reports, and classified raw intelligence.
Given that foundation of knowledge, it’s worth examining some of the “known unknowns,” places where Mueller has been silent but where he presumably knows far more than he’s chosen to say. To single out just five examples:
Who directed the campaign’s contact with Roger Stone—and what flowed back and forth?
Much has been made in the days since the Stone indictment about paragraph 12 of the court filing, which says “a senior Trump Campaign official was directed to contact Stone about any additional releases and what other damaging information [Wikileaks] had regarding the Clinton Campaign.” That simple “was directed” appears to indicate Mueller knows about the internal decisionmaking of the Trump campaign—and that he knows who directed the campaign’s contact to Stone, a pool of officials that has to be quite small. Mueller could have easily written the sentence in a thousand less indicative ways, saying simply that Stone was contacted by a senior Trump campaign official or that someone “suggested” or “told” that official to contact Stone. Instead, by saying “was directed,” Mueller implies a level of authority and even hints at a possible internal conspiracy to make contact with Stone, if it was for nefarious purposes—but Mueller stops short of saying who or why.
Did Stone and his associates actually have contact with WikiLeaks?
Similarly, Mueller stops short of confirming whether Stone and his associates, Jerome Corsi or Randy Credico, actually ever did have contact with WikiLeaks or Julian Assange, a hole in the indictment so gaping that it seems inexplicable unless the answer is being saved for some future court filing. Likewise, Mueller only outlines Stone’s requests for stolen emails, not whether anything flowed back to Stone from WikiLeaks. Again, we’re left with the puzzle: Why would Roger Stone have allegedly continued to lie for so long about being in contact with WikiLeaks if (a) he never was, or (b) the contacts were entirely routine and aboveboard? Mueller, though, says Stone “falsely denied possessing records that contained evidence of these interactions,” a phrase that seems to indicate much more.
How did Donald Trump and the Trump Organization react to the progress of the Trump Tower Moscow project?
Michael Cohen’s plea agreement only lays out that the president’s former lawyer and fixer repeatedly briefed Trump and members of the Trump Organization’s leadership on his progress on the Trump Tower Moscow project. But the filing stops short of saying anything about how the Trump team reacted—or what instructions, if any, they gave Cohen. Mueller also points out in Cohen’s plea that Cohen appears to have scuttled a trip to Russia to work on the deal on the very day that the DNC announced it had been hacked, which is odd timing, to say the least.
Who directed Michael Flynn’s conversations with Sergey Kislyak?
There remains much to understand about former national security advisor Michael Flynn’s plea agreement, which states that he lied to FBI agents about conversations with Russian ambassador Sergey Kislyak during the transition. Two things, in particular, stand out in the facts of the case: First, that his contacts with Kislyak were directed by a “very senior member” of the Trump transition, an official identified in media reports as Jared Kushner, and second, if Flynn truly believed that he’d been properly directed by the president-elect or his designee to have the communications with Kislyak, why would he lie about them? Mueller has provided no answers yet here, either. But it’s worth noting, again, the oddity of Flynn’s aborted sentencing at the end of last year—where the judge, privy to more information than the public has, exploded at Flynn and finally prompted him to postpone the sentencing and continue cooperating. What more is there in the Flynn case that’s worth knowing?
Why did Manafort turn over polling data to Konstantin Kilimnik? And what are Kilimnik’s ties to Russian intelligence?
Mueller’s court filings have laid out that the special counsel believes that Kilimnik, Paul Manafort’s business partner and codefendant, had ties to Russian intelligence in 2016. Yet we haven’t seen evidence of why Mueller believes that—and, more important, what relevance that has to the Trump campaign. And we only learned about the polling data because Manafort’s legal team botched the redactions in a court filing, so why hasn’t Mueller brought that charge into the open yet?
Why the “first time”?
In last summer’s GRU indictment, Mueller seemed to say more than he needed to—just like he did with “was directed” in the Stone indictment—in pointing out that “on or about July 27, 2016, the Conspirators attempted after hours to spearphish for the first time email accounts at a domain hosted by a third-party provider and used by Clinton’s personal office.” Mueller doesn’t note in the document that this was the same day Trump invited Russia to hack Clinton’s email, but in writing about the day Mueller adds two seemingly unnecessary details: First, that the GRU did it “after hours,” which, accounting for the time difference, would mean after Trump’s campaign trail comments. And second, that the attack on Clinton’s email directly was “for the first time,” a fact that Mueller would have to prove in a trial, meaning he has evidence that makes him confident the action was new in Russia’s strategy. Mueller is only making his own potential case and evidentiary burden higher by singling out “after hours” and “for the first time,” so that obviously must mean something to his prosecuting team.
Mueller is clearly picking and choosing his charges carefully, so far. But there’s a lot more he’s not telling us, and if you add up all those missing puzzle pieces, it certainly seems possible—perhaps even probable—that Mueller is building towards a conspiracy indictment that he’s already told us about, one that brings together many of these open threads and players into one coherent narrative.
In thinking through what that might look like, it’s worth remembering the second paragraph of his indictment last July, the case that targeted the GRU officials, which lays out three distinct stages of alleged conspiracy: hacking the Democratic computers, stealing documents, and then “stag[ing] releases” to “interfere” with the election. The latter could easily encompass some of the actions already described in the Stone indictment.
The “who” and “why” of that broader conspiracy remain open questions, but it’s notable the extent to which so many threads of the Russia story increasingly appear to overlap. For instance, Russian lawyer Natalia V. Veselnitskaya, a key player in the June 2016 meeting at Trump Tower, was charged earlier this year with obstruction in a separate, older money-laundering case stemming from her role in helping Prevezon Holdings, an entity owned by Russian oligarch Denis Katsyv. BuzzFeed reported this week that one of the other attendees at that Trump Tower meeting, a former Russian soldier and current lobbyist named Rinat Akhmetshin, “received a large payment that bank investigators deemed suspicious from Denis Katsyv.” So here we have Veselnitskaya, the lawyer from Prevezon Holdings, helping to organize a meeting at Trump Tower, while one of the other attendees received money contemporaneously from the same entity.
Each revelation from Mueller and the other investigations around Trump appears actually to point in a consistent direction: a relatively small and regularly overlapping circle of people, both American and Russian, constantly lying and covering up their contacts together. Now, we’re just waiting for Mueller to tell us precisely why—and who.
The eighth and final season of Game of Thrones will be bigger, badder, and hairier than ever. No, we’re not talking about the saga—we’re talking about the furs! Because that’s literally all HBO sees fit (ha!) to show us. Earlier today, the studio released a collection of brooding character stills, notable mainly for the fabulous fashions. (Maybe that’s where they’re concealing all the plot twists—in the majestic folds of Brienne of Tarth’s capacious overcoat.) It’s been an incremental PR rollout, like water dribbling off an icicle, but at least we now know what our incestuous heroes and pretenders to the throne are hiding in their Westerosi winter wardrobes. Let’s unpack.
Angela Watercutter, Senior Editor: Whoohoo! New Game of Thrones images! Today is a blessed day. Much like Winterfell itself, it’s cold and grey here in New York and if there’s one thing that will warm my cold, dead heart, it’s some new images of the surviving members of the GoT cast—and boy, HBO really delivered on that. I mean, lookit! There’s Sansa Stark (Sophie Turner) looking stoic AF. Oh, and Arya Stark (Maisie Williams) looking all kinds of confident. Daenerys Targaryen (Emilia Clarke) also appears as though she’s ready to pummel some lands—and also maybe constipated? Everyone else is in some sort of furrowed-brow state (except Sam Tarly/John Bradley, who, I think we can all agree now, is probably going to be the only one to survive this mess). So, I guess the theme of the final season of Game of Thrones is “Be Worried”?
Whatever, these are character shots, and as such they reveal next to nothing about what to expect in Season 8 of Game of Thrones, except for maybe some inkling of who is still living when it starts. But let’s get past that bit of disappointment and get to what really matters. Friends, can we talk about these outfits? What are they wearing?
Emily Dreyfuss, Senior Writer: Having read all the books and watched every episode of this show, I have to admit that I still can’t remember what’s happening at this point of HBO’s Game of Thrones. [Eds. Note: Same.] Things are off the rails, yes? But the fashion gives me hope. I’m particularly excited about Arya’s modestly fur-lined wool-woven half cape.
Arya is my favorite blood-thirsty tween, but what I adore about this outfit is how little fur she’s sporting in comparison to her garish relatives and enemies. Arya wants to murder humans, not innocent animals—though, of course, if she has to kill what appears to be a squirrel to line her cape for warmth, she’ll do it. There’s just the slightest hint of femininity in the diamond stitching of that cape—which she has sewn on with leather straps. Plus, she’s obviously wrapped head-to-toe in leather—a dead animal product styled to keep her warm, protect her from stab wounds, and send the message that she’s a gender-role-nonconforming warrior at the same time. Arya’s practical in every way.
Arya’s outfit contrasts with that of her mortal enemy Cersei, who’s decked out to look the part of the warrior, with her ornate epaulettes and perfectly placed lapel chains. Her outfit tells you that she’s very willing to orchestrate mass murder, but wouldn’t want to partake in anything as close-up as one-on-one combat. It might tousle her crown.
She also seems to have the sliiiiightest of grins on her face. Or am I imagining that? Is it a grimace?
Watercutter: Emily, you’re not dreaming. Jason, what’s your take here?
Jason Kehe, Senior Associate Editor: Poor Samwell—that looks like recycled polyester. Maybe he’s joined a high school biker gang? I think we’re supposed to believe he’s cool now.
Dreyfuss: LOL, Jason! No, he’s not cool, he’s enlightened! He’s done all the learning he could do at the citadel and now he doesn’t care about anything as silly as fashion or coolness.
Kehe: Generous of you, Emily. Also, I can’t stop staring at Daenerys’ ice-queen-pop-idol coat. Very Frozen. Is that polar bear? White fox? Ermine?! Perfectly fitted, with those flare-out sleeves. (I don’t know the official terms, or what an ermine actually is.) My question is, does she know the truth of Jon Snow’s identity here? What’s her face telling us? Either way, no amount of fur will warm up the frigid chemistry between these two, I’m convinced.
Dreyfuss: Jon Snow (Kit Harington) looks like he just realized Daenerys is his aunt … five minutes after they slept together. Now he’s like, “Can this wolf-fur coat hide my shame?” And Daenerys is all, “Nephew, your queasiness is very unattractive.”
Is the red thread of Daenerys’ coat a slight nod to the Red God?
Watercutter: Emily, I think you could be right there—yet that would be an actual possible plot detail, so dunno.
To answer Jon’s question, though, I’m not sure if furs can hide shame—and something about that pelt says Stride of Pride to me. If anything, I’d say their faces, and accompanying threads, are giving off an air of “We’re taking the Iron Throne and beating the Lannisters at their incest game while we’re at it.” That’s just me, though.
Speaking of (good) Lannisters, can we talk about Tyrion (Peter Dinklage) for a second?
Dreyfuss: Yes, please! What is going on with his neckerchief?
Watercutter: Right? It … kinda looks like a dickey? And, hey, I got nothing against a good dickey, but it’s friggin’ cold in Winterfell (or wherever he is, someplace frigid). You’re going to need to protect your neck, man. If not from the cold, at least from, I dunno, everyone who probably wants to slash your throat.
Dreyfuss: And the material is hard to identify. It looks like … plastic globules painted blue? Give my man a proper fur-lined neck, please, costume department.
Watercutter: And yet, Cersei has on some kind of medieval shoulder pads. Is she a linebacker now? Is she joining the cast of Alita: Battle Angel to play Motorball? I’m confused. That said, the look is cute. A little less Rhythm Nation than her previous ‘fits, but I’m OK with that.
Dreyfuss: She’s all about the lewk. That’s her whole schtick: projecting strength while not actually being able to defend her throne or her family. She always looks fierce as hell as she’s totally dropping the ball.
Whereas Jon Snow and Daenerys continue to look fierce and actually be fierce—bedecked in various furs. It’s interesting to note who is wearing fur and who isn’t—none of the Lannisters, and also not Varys or Davos. What are you trying to tell us, promotional photos?!
Oh, you just want us to remember this show exists? And is coming back to television on April 14? And every character is hot and powerful? But also very cold? Message received.
Following more than two years of constant turbulence for Facebook, the company’s vice president of communications, Caryn Marooney, is leaving the company, Facebook has confirmed. Marooney, who previously cofounded the technology communications firm The Outcast Agency, joined Facebook in 2011 as director of technology communications, after representing the company at Outcast. Most recently, she has been responsible for all global communications. Marooney’s final day is not yet set, but spokesperson Vanessa Chan said she would be staying on to bring her replacement on board.
“She’s been at Facebook for eight years on the payroll,” and worked with the company even before that at Outcast, Chan said. “It’s been a really, really long time. I think she just wants to take a step back.” In 2016, Marooney became head of global communications, a position, she told WIRED, that she accepted while battling cancer. Facebook is now looking internally and externally for her replacement.
Marooney’s departure is just the latest in a string of shakeups at Facebook’s communications department over the past year. In early 2018, the entire company underwent a major executive reorganization. As part of the changes, Marooney began splitting her duties with Rachel Whetstone, who had been hired away from Uber by Facebook the year before. In June, vice president of communications and public policy, Elliot Schrage, announced that he was stepping down from Facebook after a decade there, although he has not departed. Sir Nick Clegg, former deputy prime minister of the United Kingdom, was later hired to lead Facebook’s global policy and communications. Whetstone announced she was leaving for a top job at Netflix in August. At that point, Marooney reassumed responsibility for all global communications, and was reporting to Clegg when she announced her departure Wednesday.
“I’ve decided it’s time to get back to my roots: going deep in tech and product,” Marooney wrote in a Facebook post Wednesday. “With Nick Clegg settled in at Facebook, this felt like the right time to start the transition.”
Chan also confirmed that Debbie Frost, Facebook’s vice president of international policy and communications and the longest-tenured employee on Facebook’s communications team, has announced her exit.¹ According to Chan, Frost is retiring. Meanwhile, the company recently hired Sarah O’Brien, formerly of Tesla, to be the company’s vice president of executive communications.
The staffing shuffle underscores the sheer difficulty of defending Facebook’s reputation at a time when it is perpetually under siege. Since 2016, the company has faced a barrage of questions about the rise of fake news, the spread of foreign propaganda, a massive security breach, violations of user privacy, violent conflict fueled by social media myths overseas, and an ever-expanding list of scandals. As one former Facebook employee put it to WIRED, Facebook’s public relations department has become a “crisis communications” shop.
“I think that some folks left just because they got tired of the day-in-day-out criticism, not just media but also from people in Washington,” the former employee said of the recent turnover at Facebook.
Members of Facebook’s PR team have borne the brunt of some of Facebook’s most recent scandals. It was Schrage, for instance, who took the public blame for hiring Definers Public Relations, which conducted opposition research on Facebook’s biggest critics, including billionaire Democratic donor George Soros. It was only after Schrage published a blog post on the subject that Facebook’s chief operating officer Sheryl Sandberg acknowledged she, too, had been aware of Definers’ work.
During this tough time, Facebook also went on a hiring spree, growing from 17,048 employees by the end of 2016 to 35,587 employees at the end of 2018. Much of that increase went toward beefing up Facebook’s safety and security teams, and yet, according to the former employee, the dramatic increase led to “growing pains” across the company. “There would be internal tension over who gets to do what. That was tough to deal with,” the employee says.
“There were definitely executive camps, and this isn’t just comms, this is throughout the entire company,” another source familiar with Facebook’s communications team says. But the source noted that Marooney “did a good job keeping herself out of it.”
It’s still unclear which brave soul will take on the job next. Whoever it is will have their work cut out for them, with a Federal Trade Commission investigation into Facebook’s privacy practices hanging over the company’s head, plans for federal privacy legislation taking shape on Capitol Hill, and a battery of ongoing investigations happening overseas. That’s in addition to the near weekly news stories about how Facebook is prying into people’s private messages for market research or its history of bilking money from unsuspecting children playing games on the platform. The job Marooney is leaving behind just may be the hardest job in tech.
¹ CORRECTION on 2/6/2019, 1:24 pm ET: This story has been updated to correct Debbie Frost’s title at Facebook. It has also been updated to include a quote from Caryn Marooney’s Facebook post about her departure from the company.
“Do you like Instagram?” Bee Fisher asks her son, Tegan Fisher, a 3-year-old Instagram sensation who specializes in posing next to his family’s enormous Newfoundlands. He doesn’t seem to understand the question.
“Is this yogurt cooled down?” Tegan replies.
“Do you like Instagram? Do you like taking pictures?” Bee asks again. Once again, the temperature of yogurt prevails. “He means, ‘Has the yogurt thawed?’ Our fridge froze them,” Bee explains.
Finicky appliances are one of the challenges of RV life, and Bee and her family of five have been living in one for the past few months. They’re on a countrywide tour, taking pictures and meeting fans. “They think we’re on vacation,” she says of Tegan and his brothers. “We’ve tried to explain that this is the family business, but they don’t understand social media at all.”
The kids may not understand social media, but social media definitely gets them—some 200,000 people look at photos of Bee’s brood daily. (Bee and her husband, Josh, took the kids to a packed sports stadium to demonstrate the scale of their fanbase.) The Fisher family feed offers up a winsome, (literally) sanitized version of life with three boys under 8 and two dogs that weigh more than 100 pounds. In place of temper tantrums and cranky spouses, you’ll find a perfectly curated world of smiles and hand-holding. Even snapshots that are powerfully mundane, like the family sitting outside their RV or a kid biting into a pastry the size of his head, have scores of likes. In 2019, that’s enough to convert kiddie cuteness into a commodity.
In recent years, hundreds of kids have risen to bankable internet stardom on Instagram and YouTube. Marketers, ever the wordsmiths, have dubbed them “kidfluencers.” They’re the child stars of the social media age, tiny captains of industry with their own toy lines and cookbooks. On Instagram, families seem to go for a controlled-chaos aesthetic—a Kondo’d Jon & Kate Plus 8. On YouTube, it’s more like late-capitalist Blue’s Clues. And somehow, despite the brand deals and the creeps in the comments and the constant watchfulness of parents’ cameras and the general ickiness our society attaches to living the most innocent years of your life on a public stage, these kids seem all right.
No influencer, whether adult, child, or animal, is internetting as well as Ryan, of Ryan ToysReview. Ryan—last name undisclosed, location undisclosed—is a preternaturally cheerful and well-spoken 7-year-old who made $22 million last year testing toys on YouTube (many from his own product line, Ryan’s World), trying kiddie science experiments, and doing regular stuff like swimming “in a Super Cold Icy Swimming Pool!!!” for an audience of more than 18 million.
Chris Williams, CEO of Pocket.watch, the studio Ryan is partnered with, assures me that perma-grinning YouTube Ryan is who this kid really is—and that last year’s windfall is no fluke. Traditional kids’ television, according to Williams and to ratings, is dying its too-uncool-for-school death, and it’s only been in the last two years that the industry and advertisers have worked out where their audience went: away from the TV set and onto their smartphones. Now that brands have found a way into this highly impressionable group’s watchtime, and their parents’ wallets, it’s hard to imagine why they’d stop.
If you take their parents at their word, these kids’ fame and fortunes were accidental: Nobody expected, or even seems to have wished, this for them. Ryan has been on YouTube since he was 3, years before “kidfluencing” would become a profitable venture; his parents figured videos would be a good way to share Ryan’s toddlerhood with family who lived abroad. As for the Fishers, their account started as a family photo album that blew up after a 2016 interview with The Daily Mail—from 3,000 to 20,000 followers in one week. Teen chef Amber Kelley, who has become YouTube’s Jamie Oliver, championing fresh and healthy food done so simply even a child can do it, couldn’t imagine anyone but her grandparents watching a stone-faced 9-year-old cook in an oversized chef’s jacket.
Now Kelley has her own cookbook and has dined with Michelle Obama. The Fishers have done sponsored posts for Chick-fil-A. Ryan’s parents refer to their “brand” as a “global franchise.” It all makes one start to wonder, despite assurances from everyone involved, if there’s any stage-parent weirdery here. “I don’t want to have a child 15 years from now sitting in a therapist’s office saying my parents made me take pictures every day,” Bee Fisher says. “If there’re days they’re totally not into it, they don’t have to be.” Well, one exception: “Unless it’s paid work,” Fisher adds. “Then they have to be there. We always have lollipops on those days.”
If incentivizing kids with candy seems pretty normal, it is—these kids are safer and better cared for than you might expect. Oddly, the medium in which they work, the internet, seems to cushion them against Child Starification. Most video shoots don’t take more than a few hours, and a paparazzi-free near-anonymity is attainable; Ryan goes to public school and plays on local sports teams.
“Their fame is not walking down red carpets or selling out shows at Madison Square Garden,” Williams says. “It’s numbers on a screen.” As long as grown-ups don’t let the pressures of social media stardom pollute their offline relationship with their kids, this form of celebrity seems lower-key and lower-impact than most. Even hiring a project manager for your kid, as Amber Kelley’s mom, Yohko Kelley, did, can be a way to preserve a sense of normalcy. “I don’t want to be nagging her about uploading and nagging her to clean up her room,” Yohko says.
That’s been good for Amber, who notes she can’t say “Oh, it’s just my mom” when her manager asks her to work. “It helped us make sure there was a line between our business life and family life,” she says.
Of course, there are still bad parts to being visible on the internet. A few years ago, Amber’s parents noticed a commenter getting obsessive, even stalker-ish—commenting too soon and too much and too aggressively adoringly, which is bad enough when directed at adult women, horrifying when directed at a 10-year-old. Yohko used it as a teaching moment, going over what is OK to share with subscribers and what isn’t, how to report people, how to avoid getting lured in by trolls. “Now she can handle the haters and creeps,” Yohko says.
Internet weirdos are, in some ways, the least of these parents’ worries. “It’s so much scarier on the road,” Bee Fisher says. She has to go through an entire stranger-danger routine every time they meet up with fans, which has happened in almost every city on their itinerary. “I’ll say, ‘These people will know your name. They will know mommy and daddy’s names. But you don’t go anywhere with them,'” Bee says.
Even with those precautions, they’ve had a few harrowing experiences. Once, they arranged to get dinner with a fan who bought the family expensive gifts. The fan never showed, not even after the family waited 90 minutes in a crowded mall. “I got this awful, bizarre feeling,” Bee says. Bee and Josh became petrified that someone was waiting for the right moment to grab a child or was sneaking into the RV to steal the dogs they’d left behind. Nothing happened, but even two months later and over the phone, the anxiety was apparent.
None of this can possibly last—right? Social media stardom seems to be like childhood itself: The longer you cling to it, the grosser it gets. The Fishers admitted to a certain fatigue. The Kelley family has found a happy medium in being modest micro-influencers: “Maybe we’re not milking it as much as we should, but she’s a kid! This is just one of the many things she should try,” Yohko says. “I’m happy she’s learning.”
Ryan’s parents are pursuing a different endgame: a kind of post-child relevance. Their partnership with Pocket.watch has resulted in a lifestyle brand, Ryan’s World, which has more Ryan-approved toys and less Ryan. “We’ve worked hard to create and incorporate animated characters like Combo Panda and Alpha Lexa into our content,” they say. “We recognize he will get older.”
Ryan the idea could continue to exist, in other words, long after Ryan the kid grows up—every parent’s dream.
On a frigid morning in Washington, DC, last week, four staffers from the United States Census Bureau stood shoulder to shoulder on a stage, smiling widely as they soaked in the whoops, whistles, and eager applause from the crowd seated before them. The Esri Federal GIS Conference, an annual event where government employees gather to talk about mapping technology, isn’t exactly what you’d call a rowdy affair. But this year, the Census Bureau representatives—a quartet of geographers and IT professionals—put on a particularly impressive show, demoing a suite of new tech tools for the 2020 census. At least, it was impressive if you knew anything about how the census usually works.
Despite the country’s ballooning population and advances in automation, the crucial process of counting every person living in the United States hasn’t changed all that much in the course of the census’ 230-year history. Until now, it’s mostly come down to distributing paper questionnaires to every home and hiring an army of clipboard-carrying canvassers to knock on the door of anyone who doesn’t respond. In 2020, that will change. For the first time ever, the bureau is asking the majority of people to answer the census online. Not only that, but behind the scenes the entire process of running the census is getting a high-tech facelift.
If the bureau’s plan works, a digital census could make the count more inclusive and, eventually at least, help cut costs—the 2010 census was the most expensive in US history, costing more than $12 billion. But surveying a population of 330 million people in real time using brand new technology is a lot harder than pulling off even the most high-stakes demo. For as many opportunities as this tech-centric approach to the census holds, experts fear the bureau is opening itself up to a range of new risks, from basic functionality and connectivity failures to cybersecurity threats and disinformation campaigns.
Given the ways that census data underpins the fundamentals of democracy, those aren’t risks to be taken lightly. It’s the census, after all, that decides how congressional districts get divvied up, how many seats each state holds in the Electoral College, how the federal budget is allocated, and, ultimately, whether people are fairly and accurately represented by their government.
Standing before a blue-lit background at Esri, the Census Bureau team showed off the goods.
First, there was a tool called BARCA, which uses satellite and aerial imagery to help census workers see how every block in the country has changed over the past decade. They can use that information to more efficiently build out address lists for every home in the United States before the census begins. What used to take two hours for canvassers to do on foot, the bureau representatives said, now takes just two minutes in the office.
Then came ROAM, a mapping product that’s helping the bureau predict where people are least likely to respond to the census using historical and demographic data. With this information, the bureau can target specific community groups, like churches and other organizations, to help spread the word. Both BARCA and ROAM were developed internally at the bureau.
Finally, the team demoed what is arguably the most transformative tool of all: an app called ECaSE, which some 350,000 census workers will use as they take to the streets on foot next year to follow up with the estimated 60 million households that are expected not to respond to the census the first time around. The app, which was developed in partnership with a contractor, will run on iPhone 8 devices provided by the bureau, and will personalize canvassers’ routes based on their work availability, the languages they speak, and the best time of day to visit each household. The data they collect will be encrypted and automatically uploaded to the Census Bureau’s central repository. The goal is to replace, or at least radically reduce, the 17 million pages of paper maps that the bureau printed out for the 2010 census and the 50 million paper questionnaires that field workers had to tote around with them. And because the tools are expected to make field workers more efficient, the bureau expects to hire roughly half as many people as it did in 2010.
Little wonder the audience seemed pleased with the presentation. And yet, thanks to years of budget cuts and a series of scaled-back field tests, some fear the Census Bureau and the broader US government are ill-equipped to handle any new issues that could arise as a result of the high-tech census, particularly at a time when hackers and propagandists seem to be working overtime to undermine American institutions.
“There’s a good reason a lot of information is becoming digitized. It’s efficient and useful,” says Josh Geltzer, executive director of the Institute for Constitutional Advocacy and Protection at Georgetown Law. “But it also creates vulnerabilities, and we’re reminded of that virtually every week in the form of a hack or data being used in ways it’s not supposed to be.” Last year, Geltzer and a group of cybersecurity experts sent a letter to the Census Bureau expressing their concerns and asking for answers about how the whole operation will work.
The bureau is well aware of the risks it faces, and it’s spent years developing defenses against them. The data will be encrypted, and both the field staff and office staff who access it will only be able to log into the system using two-factor authentication. The bureau is also working with the Department of Homeland Security to implement a system called EINSTEIN 3A, which will monitor government networks for malicious activity, and to communicate with the intelligence community about specific threats. In a recent program review, the agency said it would conduct a bug bounty program to test public-facing systems.
“From the moment we collect your responses, our goal—and legal obligation—is to keep them safe,” the Census Bureau said in a statement.
But the bureau’s tech team has also acknowledged that some external threats aren’t fully within their control, like, for instance, the threat posed by hundreds of millions of respondents using unsecured, potentially corrupted computers, phones, and tablets to report their answers, leaving those answers open to manipulation. The bureau has also warned that phishing attempts, where fraudsters contact people posing as the Census Bureau, could trick people into divulging sensitive information about themselves. The same goes for bogus websites that imitate the bureau. And, in the age of social media, there’s always the risk that a dedicated disinformation campaign could attempt to mislead people about the Census or undermine their faith in the process.
“Some of these challenges are new for the 2020 census,” the bureau’s deputy director Ron Jarmin said onstage at the Esri conference. “This is a foundational thing for our democracy. Much like elections, people that want to sow discord in our country might try to mess with the census.”
The bureau says it’s been working with companies like Google, Facebook, Twitter, and Microsoft to counter this sort of behavior and flag it before it’s too late. But as recent examples of election meddling around the world have shown, there are limits to what the government, or these companies, can do to totally mitigate these threats. “While we cannot control bad actors, we are working with partners to identify phishing attempts and website spoofing,” the bureau says.
According to John Thompson, who served as director of the Census Bureau from 2013 through May 2017, the most important thing the government can do is educate the public about what the Census Bureau will and won’t ask of them. It won’t, for example, ask people for their social security numbers or attempt to contact them by phone or email.
The problem is, the bureau has been woefully underfunded for years, an issue that has far-reaching consequences for census preparation.
Usually, the Census Bureau sees its budget increase in years leading up to each decennial count. Between 2014 and 2017, however, funding was essentially flat. Thompson says that forced the agency to defer a number of programs for years, including research on its paid advertising program, which helps inform the public about why it’s important to respond to the census and offers assurances about the security of census data.
“That’s an incredibly important program,” Thompson says. “A lot of stakeholders were becoming concerned it was being deferred.”
The bureau says it’s on track to begin running ads in January 2020, two months before the first census invitations go out in March. And while early research may have been deferred, the bureau has since conducted surveys and focus groups that will help the agency understand people’s mindset toward the census.
Research on advertising wasn’t the only thing that got cut during the lean days. Due to budget restraints, the bureau was forced to cancel a series of planned field tests in 2017 and dramatically reduce the scope of its “dress rehearsal” test in 2018, which was supposed to replicate the full census in a few key geographic areas. Originally, the 2018 test was to be carried out in rural West Virginia; Providence, Rhode Island; and tribal lands in Washington State. In the end, the Census Bureau eliminated all its end-to-end tests except the one in Rhode Island. That meant the agency never got to see how the full system would function in a rural environment.
“It was a difficult decision, but it was all we could do,” Thompson says.
The lack of comprehensive testing in remote locations presents a serious possibility that the system simply won’t work properly in areas that are on the wrong side of the digital divide. “There’s been less practice for this than even the Census Bureau thought there should be,” Geltzer says. “Given it’s going to scale up dramatically for the real thing, the lack of practice is a concerning thing.”
The bureau did test what’s known as address canvassing—the process of building the address list—in West Virginia and Washington. This allowed field staff to at least try out some of the tools they’ll use during the real census. According to the bureau, those tests did turn up connectivity issues, motivating the agency to tweak the technology so that now, the bureau says, “if a census taker is in a low connectivity area, the data they collect is stored and encrypted until the device is connected to the Internet.”
As an additional backstop, the Census Bureau will also mail out paper questionnaires to the 20 percent of households in regions with limited connectivity or with older, less tech-savvy populations. It relies heavily on data from the annual American Community Survey to assess where these populations live. These people will still have the option of completing the census online if they’re able. Every household will also have the opportunity to answer by phone for the first time.
For all of the bureau’s contingency plans, fancy new tools, and slick demos, there are plenty of recent examples of what could go wrong when invitations to respond begin going out across the country in March of 2020, says Terri Ann Lowenthal, former staff director of the House of Representatives’ census oversight subcommittee.
Healthcare.gov famously crashed after only a few thousand people tried to apply for health insurance. Traffic to the census response website will be several orders of magnitude larger, Lowenthal says.
Then there was the Australia debacle. On August 9, 2016, millions of Australians tried to complete their country’s first online census, only to receive an error message on the government website. Australia’s Bureau of Statistics tweeted that the site was experiencing an outage and hoped to have an update by morning. It wound up taking days to recover from what the bureau said was a distributed denial of service attack against the website. The total cost of the setback was more than $21 million.
“The Census Bureau must pull off the census on time, according to a minute-by-minute schedule,” Lowenthal says. “If something goes wrong, the entire process is not only disrupted but potentially undermined.”
You know what happens when I get a new toy? Physics happens. I can’t stop myself, it’s just the way I am.
In this case, the toy is a DJI Spark drone (it was a birthday present). I’ve always wanted a drone that could do some cool stuff. The one I had before was basically just a toy. But with this new toy, I am going to determine the angular field of view for the Spark camera.
Sometimes referred to as FOV, the angular field of view is the portion of the world that a camera can see.
Here, maybe this picture will help.
Anything within that angle (θ) can be seen by the camera. Who cares? If you know the FOV, you can get the angular size of objects that you see. Angular size depends on both the distance from the camera and the size of the object. If you measure the angular size in radians, then the following relationship holds: θ = L/r.
In this expression, r is the distance to the object and L is the length of the object. But here’s the real deal: If you know the distance to the object and the angular size, you can find the actual size of the object. Pretty awesome, right? Now you can fly over some structure or thing and get the size of it.
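That relationship is easy to put to work. Here is a minimal sketch (the numbers are made up for illustration):

```python
def object_size(angular_size_rad, distance):
    """Physical length of an object from its angular size (in radians)
    and its distance from the camera, using L = theta * r."""
    return angular_size_rad * distance

# An object spanning 0.1 radians, seen from 20 meters away:
print(object_size(0.1, 20.0))  # 2.0 meters
```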
OK, one more thing before I move on to the measurements. Isn’t it possible to just look up the technical specifications of the Spark drone to find the FOV? Most likely, yes. But what fun is that? It’s always more fun to measure these things for yourself.
So, here is the plan. I am going to fly the drone and move UP while looking down at an object of known length. As the drone moves higher, the apparent size of the object decreases. By plotting the apparent size (in units of the width of the video) vs. one over the height, I’ll get a line. The slope of that line will give me the angular field of view.
Oh, but how do I get the height of the drone? There are three methods we could use. First, there is the height reading straight from the Spark (I assume this is measured based on the barometric pressure). Second, I can measure the height with a second video looking at the drone from the side and scaling the video with a stick of known length (OK—it’s a yard stick).
Wait. What about the third way? Well, the third way is a fix for the second method. Cameras don’t actually measure distances and positions. Instead, each pixel in a video corresponds to an angular position. If the distance from the camera to the object changes significantly, you can’t really assume a constant distance scale. An alternative would be to use the camera to measure the angular position and then calculate the height using basic trigonometry.
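Here's a sketch of that third method, assuming the side camera sits level with the launch point at a known horizontal distance and reads off the drone's angular elevation (a simplified version of my setup):

```python
import math

def drone_height(horizontal_distance, elevation_angle_rad):
    """Height from basic trig: the side camera is a known horizontal
    distance away and measures the drone's angle above the horizon."""
    return horizontal_distance * math.tan(elevation_angle_rad)

# A drone 45 degrees above the horizon, 10 meters away horizontally:
print(drone_height(10.0, math.radians(45)))  # about 10 meters up
```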
How do these three different methods compare? Yes, I did all of these just for you. Here is what I get.
There is a small difference in altitude for the angular position and video analysis, but for the most part these methods seem to agree. Honestly, that’s great—it means I can use the simplest version to calculate the height of the drone. Of course the easiest method is to just get the reading straight from the Spark.
The next step is to collect data on the height and the apparent angular size of an object. In this case, my object is a wooden board with tape marks 0.5 meters apart. Of course, I don’t actually know the angular size since I don’t know the FOV. However, if I measure the stick size relative to some unknown FOV then I can write the following: s × FOV = L/r.
Remember, r is the distance to the object and L is the actual length of the object. The variable s is the measured length and the FOV is the field of view. In this equation, the two values that will change are the r and the s. I want to get this in the form of a linear equation so that I can find the slope. How about this: L/r = FOV × s.
According to this, I should see two things. First, the plot of L/r vs. s should be a straight line. Second, the slope of this line should be the field of view (in radians). Let’s do it.
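Here's what that fit looks like in code, a minimal sketch with synthetic data standing in for my measurements (real data would have scatter around the line):

```python
import math

# Hypothetical measurements: s is the board's length as a fraction of
# the video width, y = L/r (known length over drone height). This data
# is synthetic, laid exactly on a line of slope 0.96345 for illustration.
s = [0.10, 0.20, 0.30, 0.40, 0.50]
y = [0.96345 * v for v in s]

# Ordinary least-squares fit of y vs. s.
n = len(s)
mean_s = sum(s) / n
mean_y = sum(y) / n
slope = (sum((a - mean_s) * (b - mean_y) for a, b in zip(s, y))
         / sum((a - mean_s) ** 2 for a in s))
intercept = mean_y - slope * mean_s

# The slope of this line is the field of view in radians.
print(f"FOV = {slope:.5f} rad = {math.degrees(slope):.1f} degrees")
```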
Boom. It’s linear with a y-intercept very close to zero (that’s good) and a slope of 0.96345 radians. That gives the camera a field of view of 55.2 degrees. Oh wait! That is just for the video camera on the drone. I forgot to collect data for the photo camera—I’m pretty sure it has a different FOV. OK, I can fix this later.
But now what? What can you do with the FOV? Let’s say you are flying over your house and you want to find the dimensions. Or maybe you are flying over a giant alligator that you happen to see. Either way, you can now find the size of that object. This only works if the drone camera is looking straight down so that the altitude is the same as the distance to the object. Once you have the video, measure the size as a fraction of the width of the video. Multiply this by the altitude and then multiply by 0.96345. That’s it. Now you have the size of your object. It even works in distance units of feet instead of meters.
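The whole recipe from the last paragraph, as a sketch:

```python
def size_from_video(fraction_of_width, altitude, fov_rad=0.96345):
    """Length of an object seen straight down: its apparent size as a
    fraction of the video width, times the altitude, times the FOV
    (in radians). Works in feet or meters, as long as you're consistent."""
    return fraction_of_width * altitude * fov_rad

# A mystery alligator spanning a quarter of the frame from 30 meters up:
print(size_from_video(0.25, 30.0))  # roughly 7.2 meters -- a big gator
```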
At Benchling, we’re building a platform to help scientists do research. Hundreds of thousands of scientists across academia and enterprise clients use Benchling to store and analyze scientific data, assemble DNA sequences, and design experiments.
Over the last three years, we’ve built out several new product modules as we’ve grown our user base over 10×, so we’ve been constantly iterating on our product and data model. In that time, we’ve run over 800 different migrations.¹ We host a separate database for many of our larger customers, so these migrations have also been run across 100s of databases.
We’ve spent the last two years automating and improving our migration process to address key issues we were having — manual intervention, backwards compatibility, correctness, and performance. This post dives into the problems we ran into and highlights some learnings and tools we made along the way.
Our database is Postgres (9.6). We use the ORM SQLAlchemy and its companion migration tool Alembic.
We have two ways of defining the schemas in our database: declaratively (SQLAlchemy models) and imperatively (Alembic migrations). To change a schema, we must first change the model code, then write a migration to match those changes.
# SQLAlchemy model
class User(Model):
    # User's research institution
    institution = db.Column(db.String(1024))
Alembic has a feature to auto-generate this migration based on model definitions. While it doesn’t support all types of migrations, such as some check constraints, this saved us a lot of time. So we started writing migrations by auto-generating them, completing the schema changes that Alembic didn’t support, and sending for code review.
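For the institution column above, an auto-generated migration might look roughly like this (the revision identifiers here are placeholders, not our actual history):

```python
"""Add institution column to user."""
from alembic import op
import sqlalchemy as sa

# Placeholder revision identifiers.
revision = "0123abcd"
down_revision = "fedc3210"


def upgrade():
    # Additive and backward-compatible: old model code ignores the column.
    op.add_column(
        "user",
        sa.Column("institution", sa.String(1024), nullable=True),
    )


def downgrade():
    op.drop_column("user", "institution")
```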
Both migration author and reviewer checked the migration was correct and backward-compatible with the old model code. They also checked that it was safe: limited to only inexpensive operations so we wouldn’t need to schedule downtime with our customers. We used references such as Braintree’s post on how to perform schema-changing operations this way.
After passing code review, the migration was ready to merge and run.
We used to run migrations manually. But that didn’t scale to hundreds of databases, so we made running migrations part of our deployment process.
We decided to support automatically running migrations both before and after deploying the server code that included the models. Additive changes like adding a column were made in the pre-deploy migrations (before the model code that needed it was deployed). Destructive changes like removing a column were made in the post-deploy migrations (so code would stop using the column before it was removed from the database).
When deploying a new set of commits to production, we’d:
1. Check the production database’s version counter (via Alembic) to determine which pre-deploy migrations from the new commits are missing, and run each missing migration individually inside a transaction.
2. Deploy the code to production servers.
3. Repeat the check for post-deploy migrations, and run them.
It worked well — until it didn’t, and we had to iterate.
Even when a migration was entirely correct and safe, it could still cause downtime. Investigating the running and waiting queries with pg_stat_activity revealed the reason to us: locking.
When we first wrote the migration to add the nullable institution column to the users table, we determined it was safe because adding a nullable column is backward-compatible and fast. However, when we ran the migration, user requests started failing. The investigation showed that it had interleaved with 2 transactions that were reading from the same table:
The first SELECT was in a long-running transaction in a cron job that ran for 5 minutes. The additive migration was waiting to acquire an ACCESS EXCLUSIVE lock on the same table, so it was blocked. The second SELECT in a user request was waiting to read from the table, so it was blocked by the migration.
Even though the add column operation is fast, Postgres was waiting for the exclusive lock. Normally the SELECTs between a cron job and user request wouldn’t conflict, but in this case, all subsequent user requests were blocked for minutes until the long-running transaction and migration were done. Our “safe” migration still caused downtime, so we looked for a fix we could build into the deployment process.
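On Postgres 9.6 and later, this kind of pile-up can be seen directly: pg_blocking_pids() reports which backends block each waiting backend. A diagnostic query along these lines (illustrative, not the exact one we ran) shows the chain:

```sql
-- Show waiting queries and the backends that block them.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

In the incident above, this would show the migration blocked by the cron job’s backend, and every subsequent SELECT blocked by the migration.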
Postgres has two configuration options that we saw as fail-safes for a runaway migration:
lock_timeout: the maximum amount of time the transaction will wait while trying to acquire a lock before erroring and rolling back
statement_timeout: the maximum amount of time any statement in the transaction can take before erroring and rolling back
We set a default lock_timeout of 4 seconds and statement_timeout of 5 seconds for migrations. This limited migrations from blocking user requests by waiting for too long or running expensive queries. Neither helped the migration succeed, but they helped ensure the migration failed gracefully without affecting our users.
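For example, a migration transaction can scope both timeouts to itself with SET LOCAL, leaving application connections untouched:

```sql
BEGIN;
SET LOCAL lock_timeout = '4s';       -- give up if the lock isn't acquired quickly
SET LOCAL statement_timeout = '5s';  -- give up if any statement runs too long
ALTER TABLE users ADD COLUMN institution varchar(1024);
COMMIT;
```

If either timeout fires, the transaction errors and rolls back, and user queries queued behind it proceed.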
Reducing manual intervention
Even though we had a deploy system that was running migrations automatically, we still found ourselves having to manually intervene fairly often to get them through. These manual interventions fell into two categories.
Migration failed due to timeouts
After adding migration timeouts, our migrations were safe to run and could fail gracefully without causing user issues — but failing meant an engineer had to deal with it. Migrations that were touching hot tables or running when cron jobs or lots of users were online were likely to run into lock timeout issues and fail. We usually investigated and attempted one of the following strategies to get each through:
1. Check if the migration was safe to rerun, and if so, retry it manually to see if we just got “unlucky”
2. Investigate what locks we were being blocked by, and possibly shut down a cron system for some period of time
3. Wait until a better time (e.g. less usage) to run the migration
We found that #2 and #3 happened quite often, so we put some time into significantly lowering our P99 request time. However, even with those improvements, we still found ourselves manually rerunning migrations on hot tables a few times before they succeeded.
So we built out the infrastructure to automatically retry “safe” migrations. On the surface, this was scary: having an automated system rerun migrations that touch the core data of our application was not a light decision. We added a few safeguards to keep it safe:
We only automatically retry migrations that have no intermediate commits (since the entire migration is in a transaction automatically, this means failed migrations are safe to retry).
We wait 2 minutes between retries to give any systems time to recover (and engineers a bit of time to respond in case anything goes horribly wrong).
We retry at most 10 times.
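The safeguards above amount to a small retry loop. A sketch (the names are hypothetical, and the real system would detect lock timeouts from the Postgres error code):

```python
import time

MAX_RETRIES = 10
RETRY_DELAY_SECONDS = 120  # give other systems time to recover


class LockTimeoutError(Exception):
    """Raised when a migration fails because lock_timeout expired."""


def run_with_retries(run_migration, sleep=time.sleep):
    # Only safe for migrations with no intermediate commits: each failed
    # attempt rolled back cleanly, so rerunning starts from scratch.
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return run_migration()
        except LockTimeoutError:
            if attempt == MAX_RETRIES:
                raise
            sleep(RETRY_DELAY_SECONDS)
```

The sleep function is injected so the policy can be tested without waiting two real minutes between attempts.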
In 3 months of running this in production, we have seen all of our migrations go through successfully without any manual work.
While post-deploy migrations gave full flexibility to the developer (e.g. dropping a column in a single deploy), they also resulted in these problems:
Developers had to specify the right type for each migration, which was one more opportunity to make a mistake
We often had to reorder migrations when a deploy included multiple migrations and a “post-deploy” migration appeared before a “pre-deploy” migration³
We found that with just pre-deploy migrations, we were able to remove a lot of this complexity and room for error. We removed post-deploy migrations from our system.
Despite auto-generation and automatic retries, it was still entirely possible for a developer to (accidentally) write and run a backward-incompatible migration. Since the pre-deploy migration changed the schema before the code was deployed, user requests hit the incompatibility between the post-migrated database and the old server code, and failed. We wanted a system that was more resilient and made it easier to write safe code.
The most common example was removing a column. After we implemented first-class support for teams to group users on Benchling, we wanted to remove the team_name column from the User model. Because all migrations were now pre-deploy, we needed to remove this in 2 deploy cycles:
1. Remove all usages of the column in code
2. Remove the column with a migration
(Otherwise the column would be removed too early, while existing code still depended on it.)
An engineer searched the codebase and removed all* usages of the column. We then ran the migration to drop the column in a separate deploy, and every query to the users table failed until the new code was deployed a few minutes later. The migration that we believed to be safe was actually backward-incompatible.
* We did have one remaining reference to the team_name column, on the User model itself. We have tests to ensure our database and SQLAlchemy models stay in sync, covered in a later section.²
However, because the team_name column was still on the User model, SQLAlchemy automatically used it in SELECTs and INSERTs of the model. When reading a user model, it tries to query the column. When creating a user, it tries to insert null for it. So, while the author thought they were safely removing the column with the migration, they actually weren’t because its declaration in the SQLAlchemy model constituted a usage.
SQLAlchemy has two configuration options to truly remove its usage of the column. deferred tells it to stop querying the column. evaluates_none tells it to stop inserting null for it. But we didn’t want the author or reviewer to have to remember these every time.
Building compatibility checks into the ORM
To make it easy for a developer to safely remove columns, we decided to write some abstractions on top of SQLAlchemy to help write backward-compatible migrations.
To remove columns, we made a simple column decorator, deprecated_column, that ensures a column is unused so it can be safely removed. It configures SQLAlchemy to ignore the column with deferred and evaluates_none. More importantly, it checks every outgoing query and errors if the query references the column, so tests for code that still uses the column fail. From then on, removing a column meant decorating it with deprecated_column, removing the usages it caught, deploying that, and then writing a backward-compatible migration to safely drop it.
We also made it easier to rename columns. The renamed_to option we implemented automatically generates SQL triggers to copy values between the old and new columns on value change. This means renaming columns only requires 2 deploy cycles:
1. Create the column under the new name with a migration; change all usages of the old column to the new one; decorate the old column with deprecated_column(renamed_to=new_column)
2. Remove the old column with a migration
(Note: we automatically add triggers that copy from the old column to the new column, and from the new column to the old column. Maintaining equality between both columns is very important, since the deploy cycle may include some servers writing to the new column while others are reading from the old column.)
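For a hypothetical rename of team_name to group_name on the users table, the generated triggers could take roughly this shape (the actual SQL our tool generates may differ):

```sql
CREATE OR REPLACE FUNCTION users_sync_team_name_rename() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- Copy whichever column the writer populated into the other one.
        NEW.group_name := COALESCE(NEW.group_name, NEW.team_name);
        NEW.team_name  := COALESCE(NEW.team_name, NEW.group_name);
    ELSIF NEW.team_name IS DISTINCT FROM OLD.team_name THEN
        NEW.group_name := NEW.team_name;   -- old code wrote the old column
    ELSIF NEW.group_name IS DISTINCT FROM OLD.group_name THEN
        NEW.team_name := NEW.group_name;   -- new code wrote the new column
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_sync_team_name_rename
BEFORE INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE users_sync_team_name_rename();
```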
Our strategy was to extend the ORM to simplify writing backward-compatible migrations, and this worked well for us. Another strategy we plan to employ is automatically testing for backwards-compatibility. Since the post-migrated database and old code must always be compatible, we can test exactly that setup: run the full test suite of the pre-migration code against the post-migrated database. We confirmed with Quizlet that this strategy worked for them.
Automating the rest
Auto-generation, migration timeouts, automatic retries, and compatibility checks ensured migrations were easy to write and run without affecting user requests. However, correctness still lay in the eyes of the author and reviewer. It was still possible to make schema changes that were wrong: the migration didn’t exactly match the schema, or the changes themselves didn’t follow our database best practices.
Testing migrations are correct
As in our initial setup, we wrote migrations by auto-generating them and filling in the unsupported changes. But apart from manual checks, we had no way of knowing that the ORM schema matched the migrations, even though it was critical that the two stay in sync.
We built a test to check that they matched: initialize one database from the SQLAlchemy models and another by running all the migrations from the base schema, then compare the two schemas and verify there are no differences.
Testing schema changes are correct
In addition to testing that the migrations are correct, we also want to avoid mistakes when making or changing models. We defined mistakes as violations of invariants we hold true across our database. These are a living set of rules that includes, for example, requiring an index on every foreign key.
SQLAlchemy’s inspection API was powerful enough to automate these checks. We wrote tests to check that each invariant holds for each table in the database. These checks did not cover all possible mistakes for schema changes, but enabled us to declare the rules to follow in a programmatic way.
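For instance, the invariant that every foreign key is covered by an index can be checked with a test built on the inspection API (the helper name here is ours, not a library API):

```python
from sqlalchemy import create_engine, inspect


def foreign_keys_without_indexes(engine):
    # Return (table, columns) pairs where a foreign key's constrained
    # columns are not the leading columns of any index.
    inspector = inspect(engine)
    violations = []
    for table in inspector.get_table_names():
        indexes = [
            tuple(ix["column_names"]) for ix in inspector.get_indexes(table)
        ]
        for fk in inspector.get_foreign_keys(table):
            cols = tuple(fk["constrained_columns"])
            if not any(ix[: len(cols)] == cols for ix in indexes):
                violations.append((table, cols))
    return violations
```

An invariant test then simply asserts that this returns an empty list for the application’s schema.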
We have not automated every part of writing a migration. In particular, we still require code reviews for something as critical as a migration: every migration must be reviewed by a normal reviewer and by someone on a short list of approved migration reviewers. In practice, though, as a migration reviewer myself, I usually don’t have any comments; these tests take care of most of the issues I used to flag.
Explicitly setting a lock_timeout and statement_timeout when running migrations prevents accidental downtime. (Consider doing this for all transactions.)
Automatically running migrations in the deploy process saves a lot of engineering hours. Automatically retrying these migrations on lock timeout increases our probability of success without hurting the system.
Automatically testing that migrations match the declarative data model prevents schema correctness issues when using SQLAlchemy and Alembic.
Automatically testing database invariants that we care about, like indexes on foreign keys, allows us to codify what would otherwise be on a reviewer checklist.
Automatically generating migrations saves developers a lot of time. Alembic comes with this out of the box.
Building tools to handle backward-incompatible changes, like deprecated_column and renamed_to, allows developers to make changes faster.
As a small engineering team, we’ve found that automating the majority of writing and running migrations has allowed us to iterate quickly; automated tests before deployment have prevented us from making mistakes; and automated safeguards during deployment have prevented us from causing downtime. While there are still manual parts like code reviews, we have found that the current balance of automation and manual work has allowed us to move fast without breaking things — while migrating things.
Taking a step back, iteration speed has been one of the most important success factors for a team like ours. The ability to get something out into the wild, receive feedback, and make improvements quickly is critical for us to make good product. While making changes to the user interface and internal API is often straightforward (not always!), changing fundamental data models was quite hard at one point. But with the right tools, we were able to improve our iteration speed for one part of this journey — database migrations.
If you’re interested in working on problems like this, we’re hiring!
Thanks to Damon, Daniel, Ray, Scott, and Somak for reading drafts of this.
¹ This may sound like quite a lot of migrations (almost one per day) — it is! There are a lot of factors here that affected this:
We have a system that makes running migrations easier, so we often only “add functionality as we need it” vs. doing it once all upfront.
We generally store things in fully normalized formats (and have generally avoided e.g. JSON columns without strict schemas), so most data model changes result in a migration.
We have a very broad product that matches the complex and ever-changing nature of life sciences.
² We could remove it and allow the ORM and migration schemas to drift out of sync, but this option doesn’t work as well for renaming columns.
³ Migrations must be performed in a particular order. If there are two migrations (one to drop a column, one to add a column) and the drop-column migration came first, there is no way to run that in a single deploy cycle without causing downtime. We could either split this into multiple deploys, or reorder the migrations.