Hexbyte Tech News Wired New ‘Game of Thrones’ Images Show … Umm, the Furs Are Great

The eighth and final season of Game of Thrones will be bigger, badder, and hairier than ever. No, we’re not talking about the saga—we’re talking about the furs! Because that’s literally all HBO sees fit (ha!) to show us. Earlier today, the studio released a collection of brooding character stills, notable mainly for the fabulous fashions. (Maybe that’s where they’re concealing all the plot twists—in the majestic folds of Brienne of Tarth’s capacious overcoat.) It’s been an incremental PR rollout, like water dribbling off an icicle, but at least we now know what our incestuous heroes and pretenders to the throne are hiding in their Westerosi winter wardrobes. Let’s unpack.

Angela Watercutter, Senior Editor: Whoohoo! New Game of Thrones images! Today is a blessed day. Much like Winterfell itself, it’s cold and grey here in New York and if there’s one thing that will warm my cold, dead heart, it’s some new images of the surviving members of the GoT cast—and boy, HBO really delivered on that. I mean, lookit! There’s Sansa Stark (Sophie Turner) looking stoic AF. Oh, and Arya Stark (Maisie Williams) looking all kinds of confident. Daenerys Targaryen (Emilia Clarke) also appears as though she’s ready to pummel some lands—and also maybe constipated? Everyone else is in some sort of furrowed-brow state (except Sam Tarly/John Bradley, who, I think we can all agree now, is probably going to be the only one to survive this mess). So, I guess the theme of the final season of Game of Thrones is “Be Worried”?

Whatever, these are character shots, and as such they reveal next to nothing about what to expect in Season 8 of Game of Thrones, except for maybe some inkling of who is still living when it starts. But let’s get past that bit of disappointment and get to what really matters. Friends, can we talk about these outfits? What are they wearing?

Emily Dreyfuss, Senior Writer: Having read all the books and watched every episode of this show, I have to admit that I still can’t remember what’s happening at this point of HBO’s Game of Thrones. [Eds. Note: Same.] Things are off the rails, yes? But the fashion gives me hope. I’m particularly excited about Arya’s modestly fur-lined wool-woven half cape.

Arya is my favorite bloodthirsty tween, but what I adore about this outfit is how little fur she’s sporting in comparison to her garish relatives and enemies. Arya wants to murder humans, not innocent animals—though, of course, if she has to kill what appears to be a squirrel to line her cape for warmth, she’ll do it. There’s just the slightest hint of femininity in the diamond stitching of that cape—which she has sewn on with leather straps. Plus, she’s obviously wrapped head-to-toe in leather—a dead animal product styled to keep her warm, protect her from stab wounds, and send the message that she’s a gender-role-nonconforming warrior at the same time. Arya’s practical in every way.

Arya’s outfit contrasts with that of her mortal enemy Cersei, who’s decked out to look the part of the warrior, with her ornate epaulettes and perfectly placed lapel chains. Her outfit tells you that she’s very willing to orchestrate mass murder, but wouldn’t want to partake in anything as close-up as one-on-one combat. It might tousle her crown.

She also seems to have the sliiiiightest of grins on her face. Or am I imagining that? Is it a grimace?

Watercutter: Emily, you’re not dreaming. Jason, what’s your take here?

Jason Kehe, Senior Associate Editor: Poor Samwell—that looks like recycled polyester. Maybe he’s joined a high school biker gang? I think we’re supposed to believe he’s cool now.

Dreyfuss: LOL, Jason! No, he’s not cool, he’s enlightened! He’s done all the learning he could do at the citadel and now he doesn’t care about anything as silly as fashion or coolness.

Kehe: Generous of you, Emily. Also, I can’t stop staring at Daenerys’ ice-queen-pop-idol coat. Very Frozen. Is that polar bear? White fox? Ermine?! Perfectly fitted, with those flare-out sleeves. (I don’t know the official terms, or what an ermine actually is.) My question is, does she know the truth of Jon Snow’s identity here? What’s her face telling us? Either way, no amount of fur will warm up the frigid chemistry between these two, I’m convinced.

Dreyfuss: Jon Snow (Kit Harington) looks like he just realized Daenerys is his aunt … five minutes after they slept together. Now he’s like, “Can this wolf-fur coat hide my shame?” And Daenerys is all, “Nephew, your queasiness is very unattractive.”

Is the red thread of Daenerys’ coat a slight nod to the Red God?

Watercutter: Emily, I think you could be right there—yet that would be an actual possible plot detail, so dunno.

To answer Jon’s question, though, I’m not sure if furs can hide shame—and something about that pelt says Stride of Pride to me. If anything, I’d say their faces, and accompanying threads, are giving off an air of “We’re taking the Iron Throne and beating the Lannisters at their incest game while we’re at it.” That’s just me, though.

Speaking of (good) Lannisters, can we talk about Tyrion (Peter Dinklage) for a second?

Dreyfuss: Yes, please! What is going on with his neckerchief?

Watercutter: Right? It … kinda looks like a dickey? And, hey, I got nothing against a good dickey, but it’s friggin’ cold in Winterfell (or wherever he is, someplace frigid). You’re going to need to protect your neck, man. If not from the cold, at least from, I dunno, everyone who probably wants to slash your throat.

Dreyfuss: And the material is hard to identify. It looks like … plastic globules painted blue? Give my man a proper fur-lined neck, please, costume department.

Watercutter: And yet, Cersei has on some kind of medieval shoulder pads. Is she a linebacker now? Is she joining the cast of Alita: Battle Angel to play Motorball? I’m confused. That said, the look is cute. A little less Rhythm Nation than her previous ‘fits, but I’m OK with that.

Dreyfuss: She’s all about the lewk. That’s her whole schtick: projecting strength while not actually being able to defend her throne or her family. She always looks fierce as hell as she’s totally dropping the ball.

Whereas Jon Snow and Daenerys continue to look fierce and actually be fierce—bedecked in various furs. It’s interesting to note who is wearing fur and who isn’t—none of the Lannisters, and also not Varys or Davos. What are you trying to tell us, promotional photos?!

Oh, you just want us to remember this show exists? And is coming back to television on April 14? And every character is hot and powerful? But also very cold? Message received.


Hexbyte Tech News Wired Facebook’s Top PR Exec Is Leaving the Toughest Job in Tech

Caryn Marooney is the latest in a series of high-profile departures from Facebook’s communications department at a time when the company is perpetually under siege.

Lauren Joseph; Getty Images

Following more than two years of constant turbulence for Facebook, the company’s vice president of communications, Caryn Marooney, is leaving the company, Facebook has confirmed. Marooney, who previously cofounded the technology communications firm The Outcast Agency, joined Facebook in 2011 as director of technology communications, after representing the company at Outcast. Most recently, she has been responsible for all global communications. Marooney’s final day is not yet set, but spokesperson Vanessa Chan said she would be staying on to bring her replacement on board.

“She’s been at Facebook for eight years on the payroll,” and worked with the company even before that at Outcast, Chan said. “It’s been a really, really long time. I think she just wants to take a step back.” In 2016, Marooney became head of global communications, a position, she told WIRED, that she accepted while battling cancer. Facebook is now looking internally and externally for her replacement.

Marooney’s departure is just the latest in a string of shakeups at Facebook’s communications department over the past year. In early 2018, the entire company underwent a major executive reorganization. As part of the changes, Marooney began splitting her duties with Rachel Whetstone, who had been hired away from Uber by Facebook the year before. In June, Elliot Schrage, vice president of communications and public policy, announced that he was stepping down from Facebook after a decade there, although he has not yet departed. Sir Nick Clegg, former deputy prime minister of the United Kingdom, was later hired to lead Facebook’s global policy and communications. Whetstone announced she was leaving for a top job at Netflix in August. At that point, Marooney reassumed responsibility for all global communications, and was reporting to Clegg when she announced her departure Wednesday.

“I’ve decided it’s time to get back to my roots: going deep in tech and product,” Marooney wrote in a Facebook post Wednesday. “With Nick Clegg settled in at Facebook, this felt like the right time to start the transition.”

Chan also confirmed that Debbie Frost, Facebook’s vice president of international policy and communications and the longest-tenured employee on Facebook’s communications team, has also announced her exit.1 According to Chan, Frost is retiring. Meanwhile, the company recently hired Sarah O’Brien, formerly of Tesla, as vice president of executive communications.

The staffing shuffle underscores the sheer difficulty of defending Facebook’s reputation at a time when it is perpetually under siege. Since 2016, the company has faced a barrage of questions about the rise of fake news, the spread of foreign propaganda, a massive security breach, violations of user privacy, violent conflict fueled by social media myths overseas, and an ever-expanding list of scandals. As one former Facebook employee put it to WIRED, Facebook’s public relations department has become a “crisis communications” shop.

“I think that some folks left just because they got tired of the day-in-day-out criticism, not just media but also from people in Washington,” the former employee said of the recent turnover at Facebook.

Members of Facebook’s PR team have borne the brunt of some of Facebook’s most recent scandals. It was Schrage, for instance, who took the public blame for hiring Definers Public Relations, which conducted opposition research on Facebook’s biggest critics, including billionaire Democratic donor George Soros. It was only after Schrage published a blog post on the subject that Facebook’s chief operating officer, Sheryl Sandberg, acknowledged she, too, had been aware of Definers’ work.

During this tough time, Facebook also went on a hiring spree, growing from 17,048 employees by the end of 2016 to 35,587 employees at the end of 2018. Much of that increase went toward beefing up Facebook’s safety and security teams, and yet, according to the former employee, the dramatic increase led to “growing pains” across the company. “There would be internal tension over who gets to do what. That was tough to deal with,” the employee says.

“There were definitely executive camps, and this isn’t just comms, this is throughout the entire company,” another source familiar with Facebook’s communications team says. But the source noted that Marooney “did a good job keeping herself out of it.”

It’s still unclear which brave soul will take on the job next. Whoever it is will have their work cut out for them, with a Federal Trade Commission investigation into Facebook’s privacy practices hanging over the company’s head, plans for federal privacy legislation taking shape on Capitol Hill, and a battery of ongoing investigations happening overseas. That’s in addition to the near weekly news stories about how Facebook is prying into people’s private messages for market research or its history of bilking money from unsuspecting children playing games on the platform. The job Marooney is leaving behind just may be the hardest job in tech.

1CORRECTION on 2/6/2019, 1:24 pm ET: This story has been updated to correct Debbie Frost’s title at Facebook. It has also been updated to include a quote from Caryn Marooney’s Facebook post about her departure from the company.


Hexbyte Tech News Wired YouTube and Instagram Tots Are the New Child Stars

Ryan, of Ryan ToysReview, is a preternaturally cheerful and well-spoken 7-year-old who made $22 million last year testing toys on YouTube.

Michael Drager

“Do you like Instagram?” Bee Fisher asks her son, Tegan Fisher, a 3-year-old Instagram sensation who specializes in posing next to his family’s enormous Newfoundlands. He doesn’t seem to understand the question.

“Is this yogurt cooled down?” Tegan replies.

“Do you like Instagram? Do you like taking pictures?” Bee asks again. Once again, the temperature of yogurt prevails. “He means, ‘Has the yogurt thawed?’ Our fridge froze them,” Bee explains.

Finicky appliances are one of the challenges of RV life, and Bee and her family of five have been living in one for the past few months. They’re on a countrywide tour, taking pictures and meeting fans. “They think we’re on vacation,” she says of Tegan and his brothers. “We’ve tried to explain that this is the family business, but they don’t understand social media at all.”

The kids may not understand social media, but social media definitely gets them—some 200,000 people look at photos of Bee’s brood daily. (Bee and her husband, Josh, took the kids to a packed sports stadium to demonstrate the scale of their fanbase.) The Fisher family feed offers up a winsome, (literally) sanitized version of life with three boys under 8 and two dogs that weigh more than 100 pounds. In place of temper tantrums and cranky spouses, you’ll find a perfectly curated world of smiles and hand-holding. Even snapshots that are powerfully mundane, like the family sitting outside their RV or a kid biting into a pastry the size of his head, have scores of likes. In 2019, that’s enough to convert kiddie cuteness into a commodity.

In recent years, hundreds of kids have risen to bankable internet stardom on Instagram and YouTube. Marketers, ever the wordsmiths, have dubbed them “kidfluencers.” They’re the child stars of the social media age, tiny captains of industry with their own toy lines and cookbooks. On Instagram, families seem to go for a controlled-chaos aesthetic—a Kondo’d Jon & Kate Plus 8. On YouTube, it’s more like late-capitalist Blue’s Clues. And somehow, despite the brand deals and the creeps in the comments and the constant watchfulness of parents’ cameras and the general ickiness our society attaches to living the most innocent years of your life on a public stage, these kids seem all right.

No influencer, whether adult, child, or animal, is internetting as well as Ryan, of Ryan ToysReview. Ryan—last name undisclosed, location undisclosed—is a preternaturally cheerful and well-spoken 7-year-old who made $22 million last year testing toys on YouTube (many from his own product line, Ryan’s World), trying kiddie science experiments, and doing regular stuff like swimming “in a Super Cold Icy Swimming Pool!!!” for an audience of more than 18 million.

Chris Williams, CEO of Pocket.watch, the studio Ryan is partnered with, assures me that perma-grinning YouTube Ryan is who this kid really is—and that last year’s windfall is no fluke. Traditional kids’ television, according to Williams and to ratings, is dying its too-uncool-for-school death, and it’s only been in the last two years that the industry and advertisers have worked out where their audience went: away from the TV set and onto their smartphones. Now that brands have found a way into this highly impressionable group’s watchtime, and their parents’ wallets, it’s hard to imagine why they’d stop.

If you take their parents at their word, these kids’ fame and fortunes were accidental: Nobody expected this for them, or even seems to have wished for it. Ryan has been on YouTube since he was 3, years before “kidfluencing” would become a profitable venture; his parents figured videos would be a good way to share Ryan’s toddlerhood with family who lived abroad. As for the Fishers, their account started as a family photo album that blew up after a 2016 interview with The Daily Mail—from 3,000 to 20,000 followers in one week. Teen chef Amber Kelley, who has become YouTube’s Jamie Oliver, championing fresh and healthy food done so simply even a child can do it, couldn’t imagine anyone but her grandparents watching a stone-faced 9-year-old cook in an oversized chef’s jacket.

Now Kelley has her own cookbook and has dined with Michelle Obama. The Fishers have done sponsored posts for Chick-fil-A. Ryan’s parents refer to their “brand” as a “global franchise.” It all makes one start to wonder, despite assurances from everyone involved, if there’s any stage-parent weirdery here. “I don’t want to have a child 15 years from now sitting in a therapist’s office saying my parents made me take pictures every day,” Bee Fisher says. “If there’re days they’re totally not into it, they don’t have to be.” Well, one exception: “Unless it’s paid work,” Fisher adds. “Then they have to be there. We always have lollipops on those days.”

If incentivizing kids with candy seems pretty normal, it is—these kids are safer and better cared for than you might expect. Oddly, the medium in which they work, the internet, seems to cushion them against Child Starification. Most video shoots don’t take more than a few hours, and a paparazzi-free near-anonymity is attainable; Ryan goes to public school and plays on local sports teams.

“Their fame is not walking down red carpets or selling out shows at Madison Square Garden,” Williams says. “It’s numbers on a screen.” As long as grown-ups don’t let the pressures of social media stardom pollute their offline relationship with their kids, this form of celebrity seems lower-key and lower-impact than most. Even hiring a project manager for your kid, as Amber Kelley’s mom, Yohko Kelley, did, can be a way to preserve a sense of normalcy. “I don’t want to be nagging her about uploading and nagging her to clean up her room,” Yohko says.

That’s been good for Amber, who notes she can’t say “Oh, it’s just my mom” when her manager asks her to work. “It helped us make sure there was a line between our business life and family life,” she says.

Of course, there are still bad parts to being visible on the internet. A few years ago, Amber’s parents noticed a commenter getting obsessive, even stalker-ish—commenting too soon and too much and too aggressively adoringly, which is bad enough when directed at adult women, horrifying when directed at a 10-year-old. Yohko used it as a teaching moment, going over what is OK to share with subscribers and what isn’t, how to report people, how to avoid getting lured in by trolls. “Now she can handle the haters and creeps,” Yohko says.

Internet weirdos are, in some ways, the least of these parents’ worries. “It’s so much scarier on the road,” Bee Fisher says. She has to go through an entire stranger-danger routine every time they meet up with fans, which has happened in almost every city on their itinerary. “I’ll say, ‘These people will know your name. They will know mommy and daddy’s names. But you don’t go anywhere with them,'” Bee says.

Even with those precautions, they’ve had a few harrowing experiences. Once, they arranged to get dinner with a fan who bought the family expensive gifts. The fan never showed, not even after the family waited 90 minutes in a crowded mall. “I got this awful, bizarre feeling,” Bee says. Bee and Josh became petrified that someone was waiting for the right moment to grab a child or was sneaking into the RV to steal the dogs they’d left behind. Nothing happened, but even two months later and over the phone, the anxiety was apparent.

None of this can possibly last—right? Social media stardom seems to be like childhood itself: The longer you cling to it, the grosser it gets. The Fishers admitted to a certain fatigue. The Kelley family has found a happy medium in being modest micro-influencers: “Maybe we’re not milking it as much as we should, but she’s a kid! This is just one of the many things she should try,” Yohko says. “I’m happy she’s learning.”

Ryan’s parents are pursuing a different endgame: a kind of post-child relevance. Their partnership with Pocket.watch has resulted in a lifestyle brand, Ryan’s World, which has more Ryan-approved toys and less Ryan. “We’ve worked hard to create and incorporate animated characters like Combo Panda and Alpha Lexa into our content,” they say. “We recognize he will get older.”

Ryan the idea could continue to exist, in other words, long after Ryan the kid grows up—every parent’s dream.


Hexbyte Tech News Wired What It Takes to Pull Off the Country’s First Online Census

Going digital could make the census more inclusive and efficient, but experts fear the Census Bureau is also opening itself up to new risks.

Pal Szilagyi Palko/Getty Images

On a frigid morning in Washington, DC, last week, four staffers from the United States Census Bureau stood shoulder to shoulder on a stage, smiling widely as they soaked in the whoops, whistles, and eager applause from the crowd seated before them. The Esri Federal GIS Conference, an annual event where government employees gather to talk about mapping technology, isn’t exactly what you’d call a rowdy affair. But this year, the Census Bureau representatives—a quartet of geographers and IT professionals—put on a particularly impressive show, demoing a suite of new tech tools for the 2020 census. At least, it was impressive if you knew anything about how the census usually works.

Despite the country’s ballooning population and advances in automation, the crucial process of counting every person living in the United States hasn’t changed all that much in the course of the census’ 230-year history. Until now, it’s mostly come down to distributing paper questionnaires to every home and hiring an army of clipboard-carrying canvassers to knock on the door of anyone who doesn’t respond. In 2020, that will change. For the first time ever, the bureau is asking the majority of people to answer the census online. Not only that, but behind the scenes the entire process of running the census is getting a high-tech facelift.

If the bureau’s plan works, a digital census could make the count more inclusive and, eventually at least, help cut costs—the 2010 census was the most expensive in US history, costing more than $12 billion. But surveying a population of 330 million people in real time using brand new technology is a lot harder than pulling off even the most high-stakes demo. For as many opportunities as this tech-centric approach to the census holds, experts fear the bureau is opening itself up to a range of new risks, from basic functionality and connectivity failures to cybersecurity threats and disinformation campaigns.

Given the ways that census data underpins the fundamentals of democracy, those aren’t risks to be taken lightly. It’s the census, after all, that decides how congressional districts get divvied up, how many seats each state holds in the Electoral College, how the federal budget is allocated, and, ultimately, whether people are fairly and accurately represented by their government.

Standing before a blue-lit background at Esri, the Census Bureau team showed off the goods.

First, there was a tool called BARCA, which uses satellite and aerial imagery to help census workers see how every block in the country has changed over the past decade. They can use that information to more efficiently build out address lists for every home in the United States before the census begins. What used to take two hours for canvassers to do on foot, the bureau representatives said, now takes just two minutes in the office.

Then came ROAM, a mapping product that’s helping the bureau predict where people are least likely to respond to the census using historical and demographic data. With this information, the bureau can target specific community groups, like churches and other organizations, to help spread the word. Both BARCA and ROAM were developed internally at the bureau.

Finally, the team demoed what is arguably the most transformative tool of all: an app called ECaSE, which some 350,000 census workers will use as they take to the streets on foot next year to follow up with the estimated 60 million households that are expected not to respond to the census the first time around. The app, which was developed in partnership with a contractor, will run on iPhone 8 devices provided by the bureau, and will personalize canvassers’ routes based on their work availability, the languages they speak, and the best time of day to visit each household. The data they collect will be encrypted and automatically uploaded to the Census Bureau’s central repository. The goal is to replace, or at least radically reduce, the 17 million pages of paper maps that the bureau printed out for the 2010 census and the 50 million paper questionnaires that field workers had to tote around with them. And because the tools are expected to make field workers more efficient, the bureau expects to hire roughly half as many people as it did in 2010.

Little wonder the audience seemed pleased with the presentation. And yet, thanks to years of budget cuts and a series of scaled-back field tests, some fear the Census Bureau and the broader US government are ill-equipped to handle any new issues that could arise as a result of the high-tech census, particularly at a time when hackers and propagandists seem to be working overtime to undermine American institutions.

“There’s a good reason a lot of information is becoming digitized. It’s efficient and useful,” says Josh Geltzer, executive director of the Institute for Constitutional Advocacy and Protection at Georgetown Law. “But it also creates vulnerabilities, and we’re reminded of that virtually every week in the form of a hack or data being used in ways it’s not supposed to be.” Last year, Geltzer and a group of cybersecurity experts sent a letter to the Census Bureau expressing their concerns and asking for answers about how the whole operation will work.

The bureau is well aware of the risks it faces, and it’s spent years developing defenses against them. The data will be encrypted, and both the field staff and office staff who access it will only be able to log into the system using two-factor authentication. The bureau is also working with the Department of Homeland Security to implement a system called EINSTEIN 3A, which will monitor government networks for malicious activity, and to communicate with the intelligence community about specific threats. In a recent program review, the agency said it would conduct a bug bounty program to test public-facing systems.

“From the moment we collect your responses, our goal—and legal obligation—is to keep them safe,” the Census Bureau said in a statement.

But the bureau’s tech team has also acknowledged that some external threats aren’t fully within their control, like, for instance, the threat posed by hundreds of millions of respondents using unsecured, potentially corrupted computers, phones, and tablets to report their answers, leaving those answers open to manipulation. The bureau has also warned that phishing attempts, where fraudsters contact people posing as the Census Bureau, could trick people into divulging sensitive information about themselves. The same goes for bogus websites that imitate the bureau. And, in the age of social media, there’s always the risk that a dedicated disinformation campaign could attempt to mislead people about the Census or undermine their faith in the process.

“Some of these challenges are new for the 2020 census,” the bureau’s deputy director Ron Jarmin said onstage at the Esri conference. “This is a foundational thing for our democracy. Much like elections, people that want to sow discord in our country might try to mess with the census.”

The bureau says it’s been working with companies like Google, Facebook, Twitter, and Microsoft to counter this sort of behavior and flag it before it’s too late. But as recent examples of election meddling around the world have shown, there are limits to what the government, or these companies, can do to totally mitigate these threats. “While we cannot control bad actors, we are working with partners to identify phishing attempts and website spoofing,” the bureau says.

According to John Thompson, who served as director of the Census Bureau from 2013 through May 2017, the most important thing the government can do is educate the public about what the Census Bureau will and won’t ask of them. It won’t, for example, ask people for their Social Security numbers or attempt to contact them by phone or email.

The problem is, the bureau has been woefully underfunded for years, an issue that has far-reaching consequences for census preparation.

Usually, the Census Bureau sees its budget increase in years leading up to each decennial count. Between 2014 and 2017, however, funding was essentially flat. Thompson says that forced the agency to defer a number of programs for years, including research on its paid advertising program, which helps inform the public about why it’s important to respond to the census and offers assurances about the security of census data.

“That’s an incredibly important program,” Thompson says. “A lot of stakeholders were becoming concerned it was being deferred.”

The bureau says it’s on track to begin running ads in January 2020, two months before the first census invitations go out in March. And while early research may have been deferred, the bureau has since conducted surveys and focus groups that will help the agency understand people’s mindset toward the census.

Research on advertising wasn’t the only thing that got cut during the lean days. Due to budget constraints, the bureau was forced to cancel a series of planned field tests in 2017 and dramatically reduce the scope of its “dress rehearsal” test in 2018, which was supposed to replicate the full census in a few key geographic areas. Originally, the 2018 test was to be carried out in rural West Virginia; Providence, Rhode Island; and tribal lands in Washington State. In the end, the Census Bureau eliminated all its end-to-end tests except the one in Rhode Island. That meant the agency never got to see how the full system would function in a rural environment.

“It was a difficult decision, but it was all we could do,” Thompson says.

The lack of comprehensive testing in remote locations presents a serious possibility that the system simply won’t work properly in areas that are on the wrong side of the digital divide. “There’s been less practice for this than even the Census Bureau thought there should be,” Geltzer says. “Given it’s going to scale up dramatically for the real thing, the lack of practice is a concerning thing.”

The bureau did test what’s known as address canvassing—the process of building the address list—in West Virginia and Washington. This allowed field staff to at least try out some of the tools they’ll use during the real census. According to the bureau, those tests did turn up connectivity issues, motivating the agency to tweak the technology so that now, the bureau says, “if a census taker is in a low connectivity area, the data they collect is stored and encrypted until the device is connected to the Internet.”

As an additional backstop, the Census Bureau will also mail out paper questionnaires to the 20 percent of households in regions with limited connectivity or with older, less tech-savvy populations. The bureau relies heavily on data from the annual American Community Survey to assess where these populations live. These people will still have the option of completing the census online if they’re able. Every household will also have the opportunity to answer by phone for the first time.

For all of the bureau’s contingency plans, fancy new tools, and slick demos, there are plenty of recent examples of what could go wrong when invitations to respond begin going out across the country in March of 2020, says Terri Ann Lowenthal, former staff director of the House of Representatives’ census oversight subcommittee.

Healthcare.gov famously crashed after only a few thousand people tried to apply for health insurance. Traffic to the census response website will be several orders of magnitude larger, Lowenthal says.

Then there was the Australia debacle. On August 9, 2016, millions of Australians tried to complete their country’s first online census, only to receive an error message on the government website. Australia’s Bureau of Statistics tweeted that the site was experiencing an outage and hoped to have an update by morning. It wound up taking days to recover from what the bureau said was a distributed denial of service attack against the website. The total cost of the setback was more than $21 million.

“The Census Bureau must pull off the census on time, according to a minute-by-minute schedule,” Lowenthal says. “If something goes wrong, the entire process is not only disrupted but potentially undermined.”



Hexbyte Tech News Wired How to Figure Out a Drone’s Angular Field of View


You know what happens when I get a new toy? Physics happens. I can’t stop myself, it’s just the way I am.

In this case, the toy is a DJI Spark drone (it was a birthday present). I’ve always wanted a drone that could do some cool stuff; the one I had before was basically a toy. With the Spark, I am going to determine the angular field of view of its camera.

Sometimes referred to as FOV, the angular field of view is the portion of the world that a camera can see.

Here, maybe this picture will help.

[Diagram: the camera’s angular field of view, θ. Illustration: Rhett Allain]

Anything within that angle (θ) can be seen by the camera. Who cares? If you know the FOV, you can get the angular size of objects that you see. Angular size depends on both the distance from the camera and the size of the object. If you measure the angular size in radians, then the following relationship holds.

[Equation: θ = L / r. Illustration: Rhett Allain]

In this expression, r is the distance to the object and L is the length of the object. But here’s the real deal: If you know the distance to the object and the angular size, you can find the actual size of the object. Pretty awesome, right? Now you can fly over some structure or thing and get the size of it.

OK, one more thing before I move on to the measurements. Isn’t it possible to just look up the technical specifications of the Spark drone to find the FOV? Most likely, yes. But what fun is that? It’s always more fun to measure these things for yourself.

So, here is the plan. I am going to fly the drone and move UP while looking down at an object of known length. As the drone moves higher, the apparent size of the object decreases. By plotting the apparent size (in units of the width of the video) vs. one over the height, I’ll get a line. The slope of that line will give me the angular field of view.

LEARN MORE

The WIRED Guide to Drones

Oh, but how do I get the height of the drone? There are three methods we could use. First, there is the height reading straight from the Spark (I assume this is measured based on the barometric pressure). Second, I can measure the height with a second video looking at the drone from the side and scaling the video with a stick of known length (OK—it’s a yard stick).

Wait. What about the third way? Well, the third way is a fix for the second method. Cameras don’t actually measure distances and positions. Instead, each pixel in a video corresponds to an angular position. If the distance from the camera to the object changes significantly, you can’t really assume a constant distance scale. An alternative would be to use the camera to measure the angular position and then calculate the height using basic trigonometry.
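In code, the trig fix is tiny. Here’s a sketch (the 10-meter baseline and 45-degree elevation angle are invented numbers for illustration):

```python
import math

# Sketch of the trigonometric method: the side camera reports only an
# angular position, so convert it to a height with h = d * tan(theta),
# where d is the known horizontal distance from the camera to the point
# on the ground below the drone.
def height_from_angle(horizontal_distance_m, elevation_angle_rad):
    return horizontal_distance_m * math.tan(elevation_angle_rad)

# At 45 degrees elevation with a 10-meter baseline, the drone is ~10 m up.
print(height_from_angle(10.0, math.radians(45.0)))
```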

How do these three different methods compare? Yes, I did all of these just for you. Here is what I get.

[Plot comparing the drone heights from the three methods. Illustration: Rhett Allain]

There is a small difference in altitude for the angular position and video analysis, but for the most part these methods seem to agree. Honestly, that’s great—it means I can use the simplest version to calculate the height of the drone. Of course the easiest method is to just get the reading straight from the Spark.

The next step is to collect data on the height and the apparent angular size of an object. In this case, my object is a wooden board with tape marks 0.5 meters apart. Of course, I don’t actually know the angular size since I don’t know the FOV. However, if I measure the stick size relative to some unknown FOV then I can write the following:

[Equation: s = L / (r · FOV). Illustration: Rhett Allain]

Remember, r is the distance to the object and L is the actual length of the object. The variable s is the measured length and the FOV is the field of view. In this equation, the two values that will change are the r and the s. I want to get this in the form of a linear equation so that I can find the slope. How about this?

[Equation: L / r = FOV · s. Illustration: Rhett Allain]

According to this, I should see two things. First, the plot of L/r vs. s should be a straight line. Second, the slope of this line should be the field of view (in radians). Let’s do it.

[Plot: L/r vs. s with a linear fit. Illustration: Rhett Allain]

Boom. It’s linear with a y-intercept very close to zero (that’s good) and a slope of 0.96345 radians. That gives the camera a field of view of 55.2 degrees. Oh wait! That is just for the video camera on the drone. I forgot to collect data for the photo camera—I’m pretty sure it has a different FOV. OK, I can fix this later.
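The fit itself is just a one-parameter least squares. Here’s a sketch with synthetic data standing in for my measurements (the true FOV is baked into the fake data, so the fit should recover it):

```python
import math

FOV_TRUE = 0.96345  # radians; stand-in for the drone's video-camera FOV

# Synthetic measurements: s is the object's apparent size as a fraction of
# the video width, and L/r follows the linear model L/r = FOV * s.
s_values = [0.10, 0.20, 0.30, 0.40, 0.50]
ratios = [FOV_TRUE * s for s in s_values]

# Least-squares slope for a line forced through the origin:
# slope = sum(x*y) / sum(x*x)
slope = sum(x * y for x, y in zip(s_values, ratios)) / sum(x * x for x in s_values)

print(math.degrees(slope))  # the slope in radians, converted to degrees
```

With real (noisy) data the y-intercept won’t be exactly zero, which is why checking that it lands near zero is a useful sanity check on the model.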

But now what? What can you do with the FOV? Let’s say you are flying over your house and you want to find the dimensions. Or maybe you are flying over a giant alligator that you happen to see. Either way, you can now find the size of that object. This only works if the drone camera is looking straight down so that the altitude is the same as the distance to the object. Once you have the video, measure the size as a fraction of the width of the video. Multiply this by the altitude and then multiply by 0.96345. That’s it. Now you have the size of your object. It even works in distance units of feet instead of meters.
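Put together, the whole recipe is a one-liner (a sketch; 0.96345 is the slope measured above, and the example numbers are invented):

```python
VIDEO_FOV_RAD = 0.96345  # slope from the linear fit above, in radians

def object_size(fraction_of_width, altitude):
    # Only valid when the camera looks straight down, so the altitude
    # equals the distance to the object. Since the angle is unitless,
    # this works in meters or feet.
    return fraction_of_width * altitude * VIDEO_FOV_RAD

# An object spanning a quarter of the frame, seen from 10 meters up:
print(object_size(0.25, 10.0))  # about 2.41 meters
```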

This is going to be useful. Trust me.



Hexbyte Hacker News Computers Move fast and migrate things: how we automated migrations in Postgres


By Vineet Gopal

At Benchling, we’re building a platform to help scientists do research. Hundreds of thousands of scientists across academia and enterprise clients use Benchling to store and analyze scientific data, assemble DNA sequences, and design experiments.

Over the last three years, we’ve built out several new product modules as we’ve grown our user base over 10×, so we’ve been constantly iterating on our product and data model. In that time, we’ve run over 800 different migrations.¹ We host a separate database for many of our larger customers, so these migrations have also been run across hundreds of databases.

We’ve spent the last two years automating and improving our migration process to address key issues we were having — manual intervention, backwards compatibility, correctness, and performance. This post dives into the problems we ran into and highlights some learnings and tools we made along the way.

(See Lessons learned for the summary)

How we initially set up migrations

Writing migrations

Our database is Postgres (9.6). We use the ORM SQLAlchemy and its companion migration tool Alembic.

We have two ways of defining the schemas in our database: declaratively (SQLAlchemy models) and imperatively (Alembic migrations). To change a schema, we must first change the model code, then write a migration to match those changes.

Example:

# SQLAlchemy model
class User(Model):
    # User's research institution
    institution = db.Column(db.String(1024))

# Alembic migration
op.add_column("users", Column("institution", String(1024)))

Alembic has a feature to auto-generate this migration based on model definitions. While it doesn’t support all types of migrations, such as some check constraints, this saved us a lot of time. So we started writing migrations by auto-generating them, completing the schema changes that Alembic didn’t support, and sending for code review.

Both migration author and reviewer checked that the migration was correct and backward-compatible with the old model code. They also checked that it was safe: limited to only inexpensive operations, so we wouldn’t need to schedule downtime with our customers. We leaned on references such as Braintree’s post on performing schema changes safely.

After passing code review, the migration was ready to merge and run.

Running migrations

We used to run migrations manually. That didn’t scale to hundreds of databases, so we built running migrations into our deployment process.

We decided to support automatically running migrations both before and after deploying the server code that included the models. Additive changes like adding a column were made in the pre-deploy migrations (before the model code that needed it was deployed). Destructive changes like removing a column were made in the post-deploy migrations (so code would stop using the column before it was removed from the database).

When deploying a new set of commits to production, we’d:

  1. Check production database version counter (via Alembic) to determine which pre-deploy migrations from the new commits are missing. Run each migration individually inside a transaction.
  2. Deploy code to production servers.
  3. Repeat the check for post-deploy migrations, and run them.
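The three steps above can be sketched in Python; the helper names here are hypothetical stand-ins, not our real tooling (the real run step asks Alembic which revisions are unapplied and runs each in its own transaction):

```python
# Hypothetical sketch of the deploy pipeline described above.
events = []

def run_pending_migrations(phase):
    # Real code would check the Alembic version counter and run each
    # missing migration for this phase in its own transaction.
    events.append(f"migrate:{phase}")

def deploy_server_code():
    events.append("deploy")

def deploy():
    run_pending_migrations("pre")    # 1. additive changes first
    deploy_server_code()             # 2. then the code that uses them
    run_pending_migrations("post")   # 3. destructive changes last

deploy()
print(events)
```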

It worked well — until it didn’t, and we had to iterate.

Downtime

Even when a migration was entirely correct and safe, it could still cause downtime. Investigating the running and waiting queries with pg_stat_activity revealed the reason to us: locking.

When we first wrote the migration to add the nullable institution column to the users table, we determined it was safe because adding a nullable column is backward-compatible and fast. However, when we ran the migration, user requests started failing. The investigation showed that it had interleaved with 2 transactions that were reading from the same table:

The first SELECT was in a long-running transaction in a cron job that ran for 5 minutes. The additive migration was waiting to acquire an ACCESS EXCLUSIVE lock on the same table, so it was blocked. The second SELECT in a user request was waiting to read from the table, so it was blocked by the migration.

Even though the add column operation is fast, Postgres was waiting for the exclusive lock. Normally the SELECTs between a cron job and user request wouldn’t conflict, but in this case, all subsequent user requests were blocked for minutes until the long-running transaction and migration were done. Our “safe” migration still caused downtime, so we looked for a fix we could build into the deployment process.

Migration timeouts

Postgres has two configuration options that we saw as fail-safes for a runaway migration:

  1. lock_timeout: the maximum amount of time the transaction will wait while trying to acquire a lock before erroring and rolling back
  2. statement_timeout: the maximum amount of time any statement in the transaction can take before erroring and rolling back

We set a default lock_timeout of 4 seconds and statement_timeout of 5 seconds for migrations. This limited migrations from blocking user requests by waiting for too long or running expensive queries. Neither helped the migration succeed, but they helped ensure the migration failed gracefully without affecting our users.
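Concretely, the fail-safes boil down to two SET statements run on the migration’s connection before anything else. A sketch (the values are from above; exactly where you hook this in depends on your Alembic setup):

```python
# The timeout values we settled on for migrations, assumed to be applied
# per-connection before each migration runs.
MIGRATION_TIMEOUTS = {
    "lock_timeout": "4s",       # don't wait long for a blocked lock
    "statement_timeout": "5s",  # don't let any single statement run long
}

def timeout_statements(timeouts):
    # Emit the SET statements to execute at the start of the migration.
    return [f"SET {name} = '{value}'" for name, value in timeouts.items()]

for stmt in timeout_statements(MIGRATION_TIMEOUTS):
    print(stmt)
```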

Reducing manual intervention

Even though we had a deploy system that was running migrations automatically, we still found ourselves having to manually intervene fairly often to get them through. These manual interventions fell into two categories.

Migration failed due to timeouts

After adding migration timeouts, our migrations were safe to run and could fail gracefully without causing user issues — but failing meant an engineer had to deal with it. Migrations that were touching hot tables or running when cron jobs or lots of users were online were likely to run into lock timeout issues and fail. We usually investigated and attempted one of the following strategies to get each through:

  1. Check if the migration was safe to rerun, and if so, retry the migration manually to see if we just got “unlucky”
  2. Investigate what locks we were being blocked by, and possibly shut down a cron system for some period of time
  3. Wait till a better time (e.g. less usage) to run the migration

We found that #2 and #3 happened quite often, and put some time into making our P99 request time significantly lower. However, even with those improvements, we still found ourselves manually rerunning migrations on hot tables a few times before they succeeded.

So we built out the infrastructure to automatically retry “safe migrations”. On the surface, this was scary — having an automated system rerunning migrations that touch the core data of our application was not a light decision. We added a few safeguards:

  • We only automatically retry migrations that have no intermediate commits (since the entire migration is in a transaction automatically, this means failed migrations are safe to retry).
  • We wait 2 minutes between retries to give any systems time to recover (and engineers a bit of time to respond in case anything goes horribly wrong).
  • We retry at most 10 times.
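The retry loop itself is small. A sketch (run_migration is a hypothetical callable that raises on lock timeout and is safe to rerun because the whole migration is one transaction):

```python
import time

def run_with_retries(run_migration, max_attempts=10, wait_seconds=120):
    # Retry a migration that failed on a lock timeout, pausing between
    # attempts so blocking systems (and engineers) have time to react.
    for attempt in range(1, max_attempts + 1):
        try:
            return run_migration()
        except Exception:
            if attempt == max_attempts:
                raise  # out of retries; hand off to a human
            time.sleep(wait_seconds)

# Simulate a migration that hits lock timeouts twice, then succeeds.
attempts = []
def flaky_migration():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("lock timeout")
    return "done"

print(run_with_retries(flaky_migration, wait_seconds=0))
```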

In 3 months of running this in production, we have seen all of our migrations go through successfully without any manual work.

Post-deploy migrations

While post-deploy migrations gave full flexibility to the developer (e.g. dropping a column in a single deploy), they also resulted in these problems:

  • Developers had to specify the right type for each migration, which was one more opportunity to make a mistake
  • We often had to reorder migrations when a deploy included multiple migrations and a “post-deploy” migration appeared before a “pre-deploy” migration³

We found that with just pre-deploy migrations, we were able to remove a lot of this complexity and room for error. We removed post-deploy migrations from our system.

Backward incompatibility

Despite auto-generation and automatic retries, it was still entirely possible for a developer to (accidentally) write and run a backward-incompatible migration. Since the pre-deploy migration changed the schema before the code was deployed, user requests hit the incompatibility between the post-migrated database and the old server code, and failed. We wanted to build a system that was more resilient and made it easier to write safe code.

The most common example was removing a column. After we implemented first-class support for teams to group users on Benchling, we wanted to remove the team_name column from the User model. Because all migrations were now pre-deploy, we needed to remove this in 2 deploy cycles:

  1. Remove all usages of the column in code
  2. Remove the column with a migration

(Otherwise the column would be removed too early, while existing code still depended on it.)

An engineer searched the codebase and removed all* usages of the column. We then ran the migration to drop the column in a separate deploy, and every query to the users table failed until the new code was deployed a few minutes later. The migration that we believed to be safe was actually backward-incompatible.

* We did have one remaining reference to the team_name column, on the User model itself. We have tests to ensure our database and SQLAlchemy models stay in sync, covered in a later section.²

However, because the team_name column was still on the User model, SQLAlchemy automatically used it in SELECTs and INSERTs of the model. When reading a user model, it tries to query the column. When creating a user, it tries to insert null for it. So, while the author thought they were safely removing the column with the migration, they actually weren’t because its declaration in the SQLAlchemy model constituted a usage.

SQLAlchemy has two configuration options to truly remove its usage of the column. deferred tells it to stop querying the column. evaluates_none tells it to stop inserting null for it. But we didn’t want the author or reviewer to have to remember these every time.

Building compatibility checks into the ORM

To make it easy for a developer to safely remove columns, we decided to write some abstractions on top of SQLAlchemy to help write backward-compatible migrations.

To remove columns, we made a simple column decorator, deprecated_column, that ensures a column is unused so it can be safely removed. It configures SQLAlchemy to ignore the column with deferred and evaluates_none. More importantly, it checks every outgoing query and errors if the query references the column, so tests for code that tries to use the column fail. The next time we removed a column, we simply decorated it with deprecated_column, removed the usages it caught, deployed that, and then wrote a backward-compatible migration to safely drop it.
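A drastically simplified sketch of the idea (the real version hooks SQLAlchemy’s deferred/evaluates_none options and inspects compiled queries, which is omitted here):

```python
DEPRECATED = set()  # (table, column) pairs slated for removal

def deprecate_column(table, column):
    DEPRECATED.add((table, column))

def check_query(table, columns):
    # In the real system, a check like this runs on every outgoing query.
    for column in columns:
        if (table, column) in DEPRECATED:
            raise RuntimeError(
                f"{table}.{column} is deprecated; remove this usage "
                "before dropping the column")

deprecate_column("users", "team_name")
check_query("users", ["id", "institution"])  # fine: no deprecated columns

try:
    check_query("users", ["team_name"])  # any lingering usage fails tests
except RuntimeError as error:
    print(error)
```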

We also made it easier to rename columns. The renamed_to option we implemented automatically generates SQL triggers to copy values between the old and new columns on value change. This means renaming columns only requires 2 deploy cycles:

  1. Create the column with the new name with a migration, change all usages of the old column to new, decorate the old column with deprecated_column(renamed_to=new_column)
  2. Remove the old column with a migration

(Note: we automatically add triggers that copy from the old column to the new column, and from the new column to the old column. Maintaining equality between both columns is very important, since the deploy cycle may include some servers writing to the new column while others are reading from the old column.)
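For a sense of what such a helper generates, here is a sketch of one direction of the trigger DDL as Python strings (function, trigger, and column names are invented; the mirror-image new-to-old trigger is analogous, and EXECUTE PROCEDURE matches Postgres 9.6 syntax):

```python
def copy_trigger_ddl(table, old_col, new_col):
    # Postgres DDL for a trigger that mirrors writes of the old column
    # into the new one during the rename's deploy window.
    fn = f"copy_{old_col}_to_{new_col}"
    return f"""\
CREATE FUNCTION {fn}() RETURNS trigger AS $$
BEGIN
  NEW.{new_col} := NEW.{old_col};
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER {fn}_trigger
BEFORE INSERT OR UPDATE OF {old_col} ON {table}
FOR EACH ROW EXECUTE PROCEDURE {fn}();"""

print(copy_trigger_ddl("users", "team_name", "group_name"))
```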

Our strategy was to extend the ORM to simplify writing backward-compatible migrations, and this worked well for us. Another strategy we plan to employ is automatically testing for backward compatibility. Since the post-migrated database and old code must always be compatible, we can test exactly that setup: running the full test suite of the pre-migration code against the post-migrated database. We confirmed with Quizlet that this strategy worked for them.

Automating the rest

Auto-generation, migration timeouts, automatic retries, and compatibility checks ensured migrations were easy to write and run without affecting user requests. However, correctness still rested entirely on the author and reviewer. It was still possible to make schema changes that were wrong: the migration didn’t exactly match the schema, or the changes themselves didn’t follow our database best practices.

Testing migrations are correct

As in our initial setup, we wrote migrations by auto-generating them and completing the unsupported changes. Short of checking manually, we had no way to know that the ORM schema matched the migrations, even though it was critical that the two stay in sync.

So we wrote a test to check that they matched. We initialized one database from the SQLAlchemy models and another by running all migrations from a base schema, then compared their schemas to verify there were no differences.
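A miniature version of that test, using stdlib sqlite3 in place of Postgres and raw DDL strings in place of SQLAlchemy models and Alembic revisions:

```python
import sqlite3

# "Declarative" schema vs. the same schema built up by migrations.
MODEL_DDL = ["CREATE TABLE users (id INTEGER PRIMARY KEY, institution TEXT)"]
MIGRATION_DDL = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY)",
    "ALTER TABLE users ADD COLUMN institution TEXT",
]

def build_schema(statements):
    db = sqlite3.connect(":memory:")
    for statement in statements:
        db.execute(statement)
    tables = [row[0] for row in db.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {
        (table, col[1], col[2])  # (table, column name, column type)
        for table in tables
        for col in db.execute(f"PRAGMA table_info({table})")
    }

# The two databases must end up with identical schemas.
assert build_schema(MODEL_DDL) == build_schema(MIGRATION_DDL)
print("schemas match")
```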

Testing schema changes are correct

In addition to testing that the migrations are correct, we also want to avoid mistakes when making or changing models. We defined mistakes as violations of invariants we hold true across our database. These are a set of living rules that include:

  • every foreign key column must be covered by an index
  • there are no redundant indexes
  • all joined-table-inheritance tables have a trigger to delete the parent row when the child row is deleted

SQLAlchemy’s inspection API was powerful enough to automate these checks. We wrote tests to check that each invariant holds for each table in the database. These checks did not cover all possible mistakes for schema changes, but enabled us to declare the rules to follow in a programmatic way.
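One of those invariant tests, miniaturized with stdlib sqlite3 (the real version walks every table through SQLAlchemy’s inspection API against Postgres):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE teams (id INTEGER PRIMARY KEY);
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    team_id INTEGER REFERENCES teams (id)
);
CREATE INDEX ix_users_team_id ON users (team_id);
""")

def unindexed_fk_columns(db, table):
    # Invariant: every foreign key column is covered by an index.
    fk_columns = {row[3] for row in db.execute(f"PRAGMA foreign_key_list({table})")}
    indexed = {
        col[2]
        for index in db.execute(f"PRAGMA index_list({table})")
        for col in db.execute(f"PRAGMA index_info({index[1]})")
    }
    return fk_columns - indexed

assert unindexed_fk_columns(db, "users") == set()
print("invariant holds")
```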

We have not automated every part of writing a migration. In particular, we still need code reviews for something as critical as a migration. Every migration must be reviewed by a normal reviewer and someone on a short list of approved migration reviewers. In practice, however, as a migration reviewer myself, I usually don’t have any comments — these tests took care of most of the comments I usually had.

Lessons learned

  • Explicitly setting a lock_timeout and statement_timeout when running migrations prevents accidental downtime. (Consider doing this for all transactions.)
  • Automatically running migrations in the deploy process saves a lot of engineering hours. Automatically retrying these migrations on lock timeout increases our probability of success without hurting the system.
  • Automatically testing that migrations match the declarative data model prevents schema correctness issues when using SQLAlchemy and Alembic.
  • Automatically testing database invariants that we care about, like indexes on foreign keys, allows us to codify what would otherwise be on a reviewer checklist.
  • Automatically generating migrations saves developers a lot of time. Alembic comes with this out of the box.
  • Building tools to handle backward-incompatible changes, like deprecated_column and renamed_to, allows developers to make changes faster.

As a small engineering team, we’ve found that automating the majority of writing and running migrations has allowed us to iterate quickly; automated tests before deployment have prevented us from making mistakes; and automated safeguards during deployment have prevented us from causing downtime. While there are still manual parts like code reviews, we have found that the current balance of automation and manual work has allowed us to move fast without breaking things — while migrating things.

Taking a step back, iteration speed has been one of the most important success factors for a team like ours. The ability to get something out into the wild, receive feedback, and make improvements quickly is critical for us to make good product. While making changes to the user interface and internal API is often straightforward (not always!), changing fundamental data models was quite hard at one point. But with the right tools, we were able to improve our iteration speed for one part of this journey — database migrations.

If you’re interested in working on problems like this, we’re hiring!

Discuss on Hacker News

Thanks to Damon, Daniel, Ray, Scott, and Somak for reading drafts of this.

¹ This may sound like quite a lot of migrations (almost one per day) — it is! There are a lot of factors here that affected this:

  • We have a system that makes running migrations easier, so we often only “add functionality as we need it” vs. doing it once all upfront.
  • We generally store things in fully normalized formats (and have generally avoided e.g. JSON columns without strict schemas), so most data model changes result in a migration.
  • We have a very broad product that matches the complex and ever-changing nature of life sciences.

² We could remove it and allow the ORM and migration schemas to drift out of sync, but this option doesn’t work as well for renaming columns.

³ Migrations must be performed in a particular order. If there are two migrations (one to drop a column, one to add a column) and the drop-column migration came first, there is no way to run that in a single deploy cycle without causing downtime. We could either split this into multiple deploys, or reorder the migrations.