Hexbyte Tech News Wired Space Photos of the Week: Shooting Stars and Dwarf Galaxies

Iron your space suit and polish your helmet, because this week we are going intergalactic. Let’s begin by visiting a galaxy in a far-off constellation called Phoenix. This cosmic patch might look like a random arrangement of stars, but the Phoenix Dwarf galaxy is indeed a real galaxy, albeit a bit … odd.

Next we sift through the debris of a comet called 21P/Giacobini-Zinner, which is responsible for the Draconid meteor shower in the October skies. Did you know that meteor showers are actually the Earth intersecting a comet’s tail? When tiny particles of ice and dust burn up in our atmosphere, they create what we know as shooting stars.


Hexbyte Tech News Wired 20 Best Weekend Tech Deals: Apple, Amazon, Samsung, Sony

This week, Sony revealed a new miniature PlayStation, Nintendo launched its online service, a bundle of Apple products started shipping, and Amazon put its Kindles and other top devices on sale as it showed off a heap of new products. With the help of TechBargains, we’ve put together the best deals of the week so you can sift through the savings without hassle.

New Products We Like


Related Video


iPhone XS & XS Max Review: Do You Need to Upgrade?

WIRED’s Lauren Goode reviews the latest iPhone models — the iPhone XS and iPhone XS Max — and tests the battery life, camera and video capabilities.
CORRECTION, Sept. 19, 5:05 PM EST: The video above misstated the water rating for the iPhone XS and XS Max. While the IP68 standard states that devices must be waterproof to more than 1 meter, Apple’s new phones are waterproof up to 2 meters for up to 30 minutes.


Hexbyte Tech News Wired First North Carolina Got a Hurricane. Then a Pig Poop Flood. Now It’s a Coal Ash Crisis

After the storm comes the flood. Hurricane Florence poured 8 trillion gallons of rain onto North Carolina, and now the landscape between the Cape Fear River and the barrier islands of the Carolinas is a waterworld. Because ecological disasters happen in irony loops, that means long-recognized hazards have now become add-on catastrophes. First the floodwaters found thousands of literal cesspools containing the waste of 6 million hogs, and on Friday the waters reached a pool of toxic coal ash.

The water has breached the cooling lake at the LV Sutton natural gas plant on the Cape Fear River, forcing it to shut down. Also onsite are two coal ash basins, at least one of which—containing 400,000 cubic yards of the stuff, according to the owner of the facility, Duke Energy—may already be leaking coal ash into the river.


Hexbyte Tech News Wired The Stubborn Bike Commuter Gap Between American Cities

Just 0.57 percent of Americans regularly use their bicycles to commute, but some cities are doing much, much better than others at getting residents to pedal to work.

Samuel Corum/Anadolu Agency/Getty Images

Cycle commuting is hot.

Warm, at least.



Hexbyte Tech News Wired The Vaonis Stellina Smart Telescope Finds the Heavenly Bodies for You

If you want your kids to actually get excited about astronomy, don’t fumble around with your telescope for 20 minutes while you try to locate Pegasus. BOR-ing. Leave the star-searching to the smartphone-connected Stellina scope from the French company Vaonis. The companion app streamlines heavenly gazing by precisely aiming the robotic telescope—at, say, Pegasus—in as little as a few seconds. The system uses Wi-Fi to display the live view on your phone, where it’s easy to capture photographs and video, and it tracks the target across the sky long enough for everyone to get a look. The 19-inch-tall instrument is more portable than a traditional scope, so you can carry it into the wilderness where light pollution fades or just gaze at the constellations from your own backyard.

$2,999 at the MoMA Design Store


Hexbyte Tech News Wired #WhyIDidntReport and the Tragic Banality of Rape in America

Cláudio Policarpo/Getty Images

Professor Christine Blasey Ford was a teenager when she says Supreme Court nominee Brett Kavanaugh tried to rape her. You know the story by now. She didn’t report it at the time, but has come forward now that Kavanaugh is close to being confirmed as a justice to the highest court in the land. On Friday morning, President Trump tweeted that he had “no doubt” that if it had happened, Blasey Ford would have reported it right away.

That’s not how this works. That’s not how any of this works. I know this because this is my story, too, and the story of millions of people. Don’t believe me? Look at Twitter today. Look at the hashtag #WhyIDidntReport. Read the cacophony of stories—each different but the same. Stories of assault by strangers, friends, family members, teachers. The hashtag exposes the sheer banality of rape in America. Sexual assault is not rare. It’s common. According to the National Crime Victimization Survey, there were 320,000 sexual assaults in the US in 2016. And 77 percent of people who experienced rape or sexual assault say they did not tell police.


Hexbyte Tech News Wired Model 3 Crash Tests Hammer Home Tesla’s Safety Excellence

Tesla’s cars have always shown themselves to be super safe, and the Model 3 is no exception.

Tesla

Smash! Bang! Ratings success!

The National Highway Traffic Safety Administration, the arm of the federal Department of Transportation charged with reducing the number of people killed on US roads, yesterday released the results of the crash test for the Tesla Model 3: The car earned five stars in every category.



Hexbyte Hacker News Computers For Hackers, Anonymity Was Once Critical. That’s Changing.

Surfacing

At Defcon, one of the world’s largest hacking conferences, new pressures are reshaping the community’s attitudes toward privacy and anonymity.

LAS VEGAS — Ask any hacker who’s been around long enough, and there’s a good chance you’ll hear an archetypal story, tinged with regret, about the first time his or her real identity was publicly disclosed.

After enjoying years of online anonymity, the hacker known as Grifter was unmasked by a less-than-scrupulous spouse. “Hey, Neil!” his wife called out to him, absent-mindedly, from across a crowded room, while accompanying him (for the very first time) at a hacking conference. “My beautiful wife, she outed me in front of the entire hacker community,” he said with a laugh.

Dead Addict’s version of the story involves an employer who pushed him to apply for a patent — for which he was required to provide his full legal name. “The people who later doxxed me,” he said, using a term for publishing private information about someone, usually with malicious intent, “pointed to that patent.”

Nico Sell managed to stay “ungoogleable,” she said, until around 2012, when, acting as chief executive of a secure-messaging company, Wickr, she felt she needed to become more of a public figure — if reluctantly. “My co-founders and I, we all drew straws,” she said, “and that was that.”

Nico Sell. “I’m lucky enough never to have had my eyes on Google,” she said, referring to the fact that she’s never been photographed without sunglasses. “It’s one of the only things I could keep.” Credit: Stephen Hiltner/The New York Times

I met Grifter, whose real name is Neil Wyler; Dead Addict, who, citing privacy concerns, spoke with me on the condition that I not share his real name; Nico Sell, which, while undeniably the name she uses publicly, may or may not be her legal name; and dozens of other self-described hackers in August at Defcon, an annual hacking convention — one of the world’s largest — held in Las Vegas.

A lion’s share of the media attention devoted to hacking is often directed at deeply anonymous (and nefarious) hackers like Guccifer 2.0, a shadowy online avatar — alleged to have been controlled by Russian military intelligence officers — that revealed documents stolen from the Democratic National Committee in 2016. And, to be sure, a number of Defcon attendees, citing various concerns about privacy, still protect their identities. Many conceal their real names, instead using only pseudonyms or hacker aliases. Some wear fake beards, masks or other colorful disguises.

But new pressures, especially for those who attend Defcon, seem to be reshaping the community’s attitudes toward privacy and anonymity. Many longtime hackers, like Ms. Sell and Mr. Wyler, have been drawn into the open by corporate demands, or have traded their anonymity for public roles as high-level cybersecurity experts. Others alluded to the ways in which a widespread professionalization and gamification of the hacking world — as evidenced by so-called bug bounty programs offered by companies like Facebook and Google, which pay (often handsomely) for hackers to hunt for and disclose cybersecurity gaps on their many platforms — have legitimized certain elements of the culture.

Dead Addict. As a rule, he said, hackers have always been especially attuned to privacy issues. Credit: Stephen Hiltner/The New York Times

“It’s probably fair to say that fewer and fewer people are hiding behind their handles,” said Melanie Ensign, a longtime Defcon attendee who works on security and privacy at Uber. “A lot of hackers who have been around for a while — they have families and mortgages now. At some point, you have to join the real world, and the real world does not run on anonymity.”

“This is a profession for a lot of people now,” she added. “And you can’t fill out a W-9 with your hacker handle.”


Defcon has grown exponentially since its founding in 1993, when Jeff Moss — or, as many of his hacker friends know him, The Dark Tangent, or simply D.T. — gathered about 100 of his hacker friends for a hastily assembled party. By contrast, this year’s convention, the 26th, drew some 27,000 attendees, including students, security researchers, government officials and children as young as 8.

It’s difficult to characterize the conference without being reductive. One could describe all of its 28 constituent “villages” (including the Voting Machine Hacking Village, where attendees deconstructed and scrutinized the vulnerabilities of electronic voting machines, and the Lockpick Village, where visitors could tinker with locks and learn about hardware and physical security), offer a complete list of this year’s presentations (including one by Rob Joyce, a senior cybersecurity official at the National Security Agency), catalog its many contests and events (like the Tin Foil Hat Contest and Hacker Karaoke) and still not get at its essence.

The ethos of Defcon is perhaps best embodied by a gentleman I encountered in a hallway toward the end of the conference. He was wearing an odd contraption on his back, with wires and antennas protruding from its frame and with a blinking black box at its center. An agribusiness giant, he said, had recently heralded the impenetrability of the security systems built into one of its new computing components. He had obtained a version of it — how, he wouldn’t say — and, having now subjected it to the ever-probing Defcon crowds, had disproved the company’s claims. “Turns out it’s not very secure after all,” he said with a grin, before vanishing around a corner.

Jeff Moss, a.k.a. The Dark Tangent. “It’s gotten harder and harder and harder to legitimately have an alternative identity,” he said. Credit: Stephen Hiltner/The New York Times

As with many of his early online friends, Mr. Moss’s foray into aliases was directly tied to his interest in hacking and phone phreaking (the manipulation of telecommunications systems) — “stuff that wasn’t really legal,” he said. Aliases provided cover for such activity. And every once in a while, he explained — if a friend let slip your name, or if you outgrew a juvenile, silly alias — you’d have to burn your identity and come up with a new name.

“In my case, I had a couple previous identities,” he said, “but when I changed to The Dark Tangent, I was making a clear break from my past. I’d learned how to manage identities; I’d learned how the scene worked.”

He also remembers when everything changed. During the dot-com boom, many hackers transitioned to “real jobs,” he said, “and so they had to have real names, too.”

“My address book doubled in size,” he said with a laugh.

“The thing I worry about today,” he added, taking a more serious tone, “is that people don’t get do-overs.” Young people now have to contend with the real-name policy on Facebook, he said, along with the ever-hovering threats of facial-recognition software and aggregated data. “How are you going to learn to navigate in this world if you never get to make a mistake — and if every mistake you do make follows you forever?”

Philippe Harewood. “I’m still not all that comfortable communicating with people on my Facebook profile, under my real name.” Credit: Stephen Hiltner/The New York Times

Philippe Harewood, who is 30, represents a relatively new class of hackers. He is currently ranked second on Facebook’s public list of individuals who have responsibly disclosed security vulnerabilities for the site in 2018. And while he maintains an alias on Twitter (phwd), a vast majority of his hacking work is done under his real name — which is publicized on and by Facebook. He also maintains a blog (again, under his real name) where he analyzes and discusses his exploits.

For Mr. Harewood, maintaining his alias is partly about creating a personal brand — a retro nod, in a sense, to the era when using a hacker handle was a more essential element of the trade. But it also has practical advantages. “People want to reach out all the time,” he said. “And I’m still not all that comfortable communicating with people on my Facebook profile, under my real name.”

“In a way,” he said, “it just helps me filter my communications.”

In the wake of the Cambridge Analytica scandal, Facebook expanded its existing bug bounty with a program that specifically targets data abuse. And just this week the company again widened the scope to help address vulnerabilities in third-party apps. Such efforts — coupled with the rise in recent years of companies like Bugcrowd and HackerOne, which mediate between hackers and companies interested in testing their online vulnerabilities — have created a broader marketplace for hackers interested in pursuing legitimate forms of compensation.

Emmett Brewer. “I think an alias helps you get more recognition,” he said, “sort of like how The Dark Tangent has his.” Credit: Stephen Hiltner/The New York Times

Like Mr. Harewood, 11-year-old Emmett Brewer, who garnered national media attention at this year’s Defcon by hacking a mock-up of the Florida state election results website in 10 minutes, also alluded to the marketing appeal of his alias, p0wnyb0y.

“I came up with it a couple years ago, when I first got included in a news article,” he said. “I think an alias helps you get more recognition — sort of like how The Dark Tangent has his.”

“P0wnyb0y is shorter and catchier than my name,” he added. “And it just seems a lot cooler.”

Emmett said his involvement with Defcon — he has attended for several years, accompanied by his father — has left him skeptical about the degree to which his peers share things online. “My friends put everything up on the internet,” he said, “but I’m more mindful.” Still, he said he wasn’t invested in keeping his real name separate from his alias. “I don’t see it as the end of the world” if people can easily link the two, he said. “But some other people take that stuff more seriously.”

(About his hacking the simulated election results: “The goal was to modify with the candidates’ votes — to delete them or add new ones,” he said. “I changed everyone else’s votes to zero, added my name, then gave myself billions of votes.”)

CyFi. “The less data there is about you out in the world, the less people can try to mess with you,” she said. Credit: Stephen Hiltner/The New York Times

That’s not to say, though, that the younger generations of hackers are all comfortable operating so openly. Ms. Sell’s daughter, who spoke with me on the condition that I refer to her only by her hacking handle, CyFi, was especially guarded about her identity.

“When I was 9, I discovered a class of zero-day vulnerabilities,” said CyFi, who is now 17, referring to software bugs that developers are unaware of. She ultimately disclosed the bugs, she added, “but I didn’t want to risk being sued by all those companies — so hiding my identity was the best way to go.”

As with Emmett, CyFi is wary of her generation’s penchant for oversharing online. “My friends have definitely been frustrated with my lack of social media,” she said. “But the less data there is about you out in the world, the less people can try to mess with you.”

Linton Wells II. After the Edward Snowden leak, he said, “the feds were — well, if not uninvited, then at least tacitly not particularly welcome.” Credit: Stephen Hiltner/The New York Times

One of the most intriguing aspects of Defcon is the relationship between the hacker community and the attendees from the federal government, the complexities of which have ebbed and flowed over time. For many years, the tension resulted in a cat-and-mouse game called “Spot the Fed.”

“In the early days, if a fed got spotted, it was pretty consequential,” Mr. Moss said. “Later on, they were outing each other,” he said with a laugh — because they wanted the T-shirt granted to both the fed and the person who outed them.

Linton Wells II, a former principal deputy to the assistant secretary of defense for networks and information integration, began attending Defcon around 2003. He now volunteers as a “goon” — the term for the volunteers (roughly 450 this year) who help organize and run the conference.

Mr. Wells said that governmental officials who attend Defcon fall into one of three categories. “One was the people who openly announced they were feds — either speakers who announced their affiliations, or there was a Meet the Fed panel,” he said. “There were others who wouldn’t deny it if you asked them, but who didn’t go out of their way to advertise it. And then there were those who were either officially or unofficially undercover.”

The relationship hasn’t always been contentious, he added, noting that, in 2012, Keith Alexander, who was then director of the N.S.A., “came out here and spoke in a T-shirt and bluejeans.” Less than a year later, though, after the Edward Snowden leak, things soured. “For the next couple years,” Mr. Wells said, “the feds were — well, if not uninvited, then at least tacitly not particularly welcome.”

Joe Grand, a.k.a. Kingpin. “Hiding behind a fake name doesn’t mean you’re doing something malicious, and it doesn’t mean you’re a bad person,” he said. “It means you’re trying to protect your privacy.” Credit: Stephen Hiltner/The New York Times

Joe Grand, who for many years operated under his alias, Kingpin, understands the complexities of the relationship as well as anyone. Twenty years ago, in May 1998, Mr. Grand was one of seven computer hackers who testified before a congressional panel that included Senators John Glenn, Joseph Lieberman and Fred Thompson. The hackers, members of a collective called L0pht (pronounced “loft”), had recently boasted that they could shut down the internet in 30 minutes, and lawmakers had taken notice.

“Due to the sensitivity of the work done at the L0pht,” Senator Thompson explained in his opening remarks — haltingly, as if for effect — “they’ll be using their hacker names of Mudge, Weld, Brian Oblivion, Kingpin, Space Rogue, Tan and Stefan.” Chuckles echoed through the room. Until then, staff members had told the L0pht hackers, the only witnesses to testify while using aliases had been members of the witness protection program. “I hope my grandkids don’t ask me who my witnesses were today,” Senator Thompson added, to another chorus of laughter.

“It probably helped their agenda — by having these kids show up with fake names,” said Mr. Grand, who sat for an interview at Defcon. “It probably made it that much more intriguing.”

“But using our handles,” he added, “was our natural way of communicating. And having that protection, it felt good. We were putting ourselves out there as hackers communicating with the government — which, at the time, was not something you did.”

As with many longtime hackers, Mr. Grand — who became widely known after appearing on a Discovery Channel show called “Prototype This!” — has grown more comfortable operating in the open. But he still appreciates the value of anonymity. “Hiding behind a fake name doesn’t mean you’re doing something malicious, and it doesn’t mean you’re a bad person,” he said. “It means you’re trying to protect your privacy.”

“And, in this day and age, you need to,” he added, “because everywhere you look, your privacy is being stripped away.”

Keren Elazari. Credit: Stephen Hiltner/The New York Times

Keren Elazari, a cybersecurity expert whose 2014 TED talk has been viewed millions of times, expressed a similar sentiment — that hackers, by fighting to maintain their anonymity, can help push back against the trends of eroding online privacy. But she also described what she calls a “maturing of the industry and the community.”

“More and more people who started hacking in the nineties are now becoming icons and thought leaders — and, most importantly, role models for the younger generations of hackers,” she said.

To help guide younger generations, elder hackers can often still use nicknames, she added. “But sometimes it makes it more powerful when they can speak up in their own voices.”

Stephen Hiltner is a reporter and visual journalist for the Surfacing column. A graduate of the University of Oxford and the University of Virginia, he joined The Times as a staff editor in 2016 after editing for six years at The Paris Review.


Hexbyte Hacker News Computers NVIDIA/vid2vid

Project | YouTube | Paper | arXiv

PyTorch implementation for high-resolution (e.g., 2048×1024) photorealistic video-to-video translation. It can be used for turning semantic label maps into photorealistic videos, synthesizing people talking from edge maps, or generating human motions from poses.

Video-to-Video Synthesis
Ting-Chun Wang¹, Ming-Yu Liu¹, Jun-Yan Zhu², Guilin Liu¹, Andrew Tao¹, Jan Kautz¹, Bryan Catanzaro¹
¹NVIDIA Corporation, ²MIT CSAIL
In arXiv, 2018.

Hexbyte Hacker News Computers Video-to-Video Translation

  • Label-to-Streetview Results


  • Edge-to-Face Results


  • Pose-to-Body Results

Hexbyte Hacker News Computers Prerequisites

  • Linux or macOS
  • Python 3
  • NVIDIA GPU + CUDA cuDNN
  • PyTorch 0.4

Hexbyte Hacker News Computers Getting Started

Installation

  • Install python libraries dominate and requests.
pip install dominate requests
  • If you plan to train with face datasets, please install dlib.
  • If you plan to train with pose datasets, please install DensePose and/or OpenPose.
  • Clone this repo:
git clone https://github.com/NVIDIA/vid2vid
cd vid2vid

Testing

  • Please first download the example dataset by running python scripts/download_datasets.py.

  • Next, download and compile a snapshot of FlowNet2 by running python scripts/download_flownet2.py.

  • Cityscapes

    • Please download the pre-trained Cityscapes model by:

      python scripts/street/download_models.py
    • To test the model (bash ./scripts/street/test_2048.sh):

      #!./scripts/street/test_2048.sh
      python test.py --name label2city_2048 --label_nc 35 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G

      The test results will be saved in: ./results/label2city_2048/test_latest/.

    • We also provide a smaller model trained with a single GPU, which produces slightly worse performance at 1024 x 512 resolution.

      • Please download the model by
      python scripts/street/download_models_g1.py
      • To test the model (bash ./scripts/street/test_g1_1024.sh):
      #!./scripts/street/test_g1_1024.sh
      python test.py --name label2city_1024_g1 --label_nc 35 --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G
    • You can find more example scripts in the scripts/street/ directory.

  • Faces

    • Please download the pre-trained model by:
      python scripts/face/download_models.py
    • To test the model (bash ./scripts/face/test_512.sh):
      #!./scripts/face/test_512.sh
      python test.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --use_single_G

      The test results will be saved in: ./results/edge2face_512/test_latest/.

Dataset

  • Cityscapes
    • We use the Cityscapes dataset as an example. To train a model on the full dataset, please download it from the official website (registration required).
    • We apply a pre-trained segmentation algorithm to get the corresponding semantic maps (train_A) and instance maps (train_inst).
    • Please add the obtained images to the datasets folder in the same way the example images are provided (a rough layout check is sketched after this list).
  • Face
    • We use the FaceForensics dataset. We then use landmark detection to estimate the face keypoints, and interpolate them to get face edges.
  • Pose
    • We use random dancing videos found on YouTube. We then apply DensePose / OpenPose to estimate the poses for each frame.
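The example dataset downloaded by scripts/download_datasets.py shows the expected layout; only train_A (semantic maps) and train_inst (instance maps) are named explicitly above, so the rest of the structure should simply mirror that example. As a rough, hypothetical sanity check (not part of the repo, and assuming your data sits under datasets/Cityscapes), you could verify the folders like this:

    import os

    # Hypothetical layout check: train_A / train_inst are the folder names
    # mentioned above; the root path is an assumption, adjust to your setup.
    root = "datasets/Cityscapes"
    for folder in ["train_A", "train_inst"]:
        path = os.path.join(root, folder)
        print(path, "ok" if os.path.isdir(path) else "MISSING")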

Training with Cityscapes dataset

  • First, download the FlowNet2 checkpoint file by running python scripts/download_models_flownet2.py.
  • Training with 8 GPUs:
    • We adopt a coarse-to-fine approach, sequentially increasing the resolution from 512 x 256, 1024 x 512, to 2048 x 1024.
    • Train a model at 512 x 256 resolution (bash ./scripts/street/train_512.sh)
    #!./scripts/street/train_512.sh
    python train.py --name label2city_512 --label_nc 35 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 6 --use_instance --fg
    • Train a model at 1024 x 512 resolution (must train 512 x 256 first) (bash ./scripts/street/train_1024.sh):
    #!./scripts/street/train_1024.sh
    python train.py --name label2city_1024 --label_nc 35 --loadSize 1024 --n_scales_spatial 2 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 4 --use_instance --fg --niter_step 2 --niter_fix_global 10 --load_pretrain checkpoints/label2city_512

If you have TensorFlow installed, you can see TensorBoard logs in ./checkpoints/label2city_1024/logs by adding --tf_log to the training scripts.

  • Training with a single GPU:

    • We trained our models using multiple GPUs. For convenience, we provide some sample training scripts (train_g1_XXX.sh) for single-GPU users, up to 1024 x 512 resolution. Again, a coarse-to-fine approach is adopted (256 x 128, 512 x 256, 1024 x 512). Performance is not guaranteed with these scripts.
    • For example, to train a 256 x 128 video with a single GPU (bash ./scripts/street/train_g1_256.sh)
    #!./scripts/street/train_g1_256.sh
    python train.py --name label2city_256_g1 --label_nc 35 --loadSize 256 --use_instance --fg --n_downsample_G 2 --num_D 1 --max_frames_per_gpu 6 --n_frames_total 6
  • Training at full (2k x 1k) resolution

    • Training at full resolution (2048 x 1024) requires 8 GPUs with at least 24G of memory (bash ./scripts/street/train_2048.sh). If only GPUs with 12G/16G memory are available, please use the script ./scripts/street/train_2048_crop.sh, which will crop the images during training. Performance is not guaranteed with this script.

Training with face datasets

  • If you haven’t, please first download the example dataset by running python scripts/download_datasets.py.
  • Run the following command to compute face landmarks for training dataset:
    python data/face_landmark_detection.py train
  • Run the example script (bash ./scripts/face/train_512.sh)
    python train.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 12  
  • For single GPU users, example scripts are in train_g1_XXX.sh. These scripts are not fully tested; please use them at your own discretion. If you hit out-of-memory errors, try reducing max_frames_per_gpu.
  • More example scripts can be found in scripts/face/.
  • Please refer to More Training/Test Details for more explanations about training flags.

Training with pose datasets

  • If you haven’t, please first download the example dataset by running python scripts/download_datasets.py.
  • Example DensePose and OpenPose results are included. If you plan to use your own dataset, please generate these results and put them in the same way the example dataset is provided.
  • Run the example script (bash ./scripts/pose/train_256p.sh)
    python train.py --name pose2body_256p --dataroot datasets/pose --dataset_mode pose --input_nc 6 --num_D 2 --resize_or_crop ScaleHeight_and_scaledCrop --loadSize 384 --fineSize 256 --gpu_ids 0,1,2,3,4,5,6,7 --batchSize 8 --max_frames_per_gpu 3 --no_first_img --n_frames_total 12 --max_t_step 4
  • Again, for single GPU users, example scripts are in train_g1_XXX.sh. These scripts are not fully tested; please use them at your own discretion. If you hit out-of-memory errors, try reducing max_frames_per_gpu.
  • More example scripts can be found in scripts/pose/.
  • Please refer to More Training/Test Details for more explanations about training flags.

Training with your own dataset

  • If your input is a label map, please generate one-channel label maps whose pixel values correspond to the object labels (i.e. 0,1,…,N-1, where N is the number of labels). This is because we need to generate one-hot vectors from the label maps. Please use --label_nc N during both training and testing (a minimal one-hot sketch follows this list).
  • If your input is not a label map, please specify --input_nc N where N is the number of input channels (The default is 3 for RGB images).
  • The default setting for preprocessing is scaleWidth, which will scale the width of all training images to opt.loadSize (1024) while keeping the aspect ratio. If you want a different setting, please change it by using the --resize_or_crop option. For example, scaleWidth_and_crop first resizes the image to have width opt.loadSize and then does random cropping of size (opt.fineSize, opt.fineSize). crop skips the resizing step and only performs random cropping. scaledCrop crops the image while retaining the original aspect ratio. randomScaleHeight will randomly scale the image height to be between opt.loadSize and opt.fineSize. If you don’t want any preprocessing, please specify none, which will do nothing other than making sure the image dimensions are divisible by 32.
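For intuition, here is a minimal sketch (not repo code; the helper name is made up) of the one-hot expansion that --label_nc N implies: each pixel’s integer label becomes an N-channel indicator vector.

    import torch

    def label_map_to_one_hot(label_map, num_labels):
        # label_map: LongTensor of shape (H, W) with values in 0 .. num_labels - 1
        h, w = label_map.size()
        one_hot = torch.zeros(num_labels, h, w)
        one_hot.scatter_(0, label_map.unsqueeze(0), 1.0)  # set each pixel's label channel to 1
        return one_hot

    labels = torch.randint(0, 35, (256, 512))          # e.g. a Cityscapes-style map, label_nc = 35
    print(label_map_to_one_hot(labels, 35).shape)      # torch.Size([35, 256, 512])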

Hexbyte Hacker News Computers More Training/Test Details

  • We generate frames in the video sequentially, where the generation of the current frame depends on previous frames. To generate the first frame for the model, there are 3 different ways:

      1. Using another generator which was trained on generating single images (e.g., pix2pixHD) by specifying --use_single_G. This is the option we use in the test scripts.
      2. Using the first frame in the real sequence by specifying --use_real_img.
      3. Forcing the model to also synthesize the first frame by specifying --no_first_img. This must be trained separately before inference.
  • The way we train the model is as follows: suppose we have 8 GPUs, 4 for generators and 4 for discriminators, and we want to train 28 frames. Also, assume each GPU can generate only one frame. The first GPU generates the first frame and passes it to the next GPU, and so on. After the 4 frames are generated, they are passed to the 4 discriminator GPUs to compute the losses. Then the last generated frame becomes input to the next batch, and the next 4 frames in the training sequence are loaded into GPUs. This is repeated 7 times (4 x 7 = 28) to train all 28 frames.

  • Some important flags:

    • n_gpus_gen: the number of GPUs to use for generators (while the others are used for discriminators). We separate generators and discriminators into different GPUs since when dealing with high resolutions, even one frame cannot fit in a GPU. If the number is set to -1, there is no separation and all GPUs are used for both generators and discriminators (only works for low-res images).
    • n_frames_G: the number of input frames to feed into the generator network; i.e., n_frames_G - 1 is the number of frames we look into the past. The default is 3 (conditioned on the previous two frames).
    • n_frames_D: the number of frames to feed into the temporal discriminator. The default is 3.
    • n_scales_spatial: the number of scales in the spatial domain. We train from the coarsest scale all the way to the finest scale. The default is 3.
    • n_scales_temporal: the number of scales for the temporal discriminator. The finest scale takes in the sequence at the original frame rate. The coarser scales subsample the frames by a factor of n_frames_D before feeding the frames into the discriminator. For example, if n_frames_D = 3 and n_scales_temporal = 3, the discriminator effectively sees 27 frames. The default is 3. (A small index sketch follows this list.)
    • max_frames_per_gpu: the number of frames in one GPU during training. If you run into an out-of-memory error, please first try to reduce this number. If your GPU memory can fit more frames, try to make this number bigger to make training faster. The default is 1.
    • max_frames_backpropagate: the number of previous frames to which the loss is backpropagated. For example, if this number is 4, the loss on frame n will backpropagate to frame n-3. Increasing this number will slightly improve performance, but also make training less stable. The default is 1.
    • n_frames_total: the total number of frames in a sequence we want to train with. We gradually increase this number during training.
    • niter_step: the number of epochs after which we double n_frames_total. The default is 5.
    • niter_fix_global: if this number is not 0, only train the finest spatial scale for this number of epochs before starting to fine-tune all scales.
    • batchSize: the number of sequences to train at a time. We normally set batchSize to 1, since one sequence is often enough to occupy all GPUs. If you want to use batchSize > 1, currently only batchSize == n_gpus_gen is supported.
    • no_first_img: if not specified, the model will assume the first frame is given and synthesize the successive frames. If specified, the model will also try to synthesize the first frame instead.
    • fg: if specified, use the foreground-background separation model as stated in the paper. The foreground labels must be specified by --fg_labels.
    • no_flow: if specified, do not use flow warping and directly synthesize frames. We found this usually still works reasonably well when the background is static, while saving memory and training time.
  • For other flags, please see options/train_options.py and options/base_options.py for all the training flags; see options/test_options.py and options/base_options.py for all the test flags.

  • Additional flags for edge2face examples:

    • no_canny_edge: do not use canny edges for background as input.
    • no_dist_map: by default, we use a distance transform of the face edge map as input. This flag will make it use edge maps directly.
  • Additional flags for pose2body examples:

    • densepose_only: use only densepose results as input. Please also remember to change input_nc to be 3.
    • openpose_only: use only openpose results as input. Please also remember to change input_nc to be 3.
    • add_face_disc: add an additional discriminator that only works on the face region.
    • remove_face_labels: remove densepose results for the face, and add noise to openpose face results, so the network becomes more robust to different face shapes. This is important if you plan to do inference on half-body videos (if not, this flag is usually unnecessary).
    • random_drop_prob: the probability of randomly dropping each pose segment during training, so the network becomes more robust to missing poses at inference time. The default is 0.2.
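As a quick illustration of the n_scales_temporal behavior described above, here is a small sketch (not repo code, and assuming each coarser scale simply subsamples by a further factor of n_frames_D) of which frame indices each scale of the temporal discriminator would look at:

    # With n_frames_D = 3 and n_scales_temporal = 3, the coarsest scale samples
    # every 9th frame; as noted above, across scales the discriminator
    # effectively sees 3 ** 3 = 27 frames.
    def temporal_indices(n_frames_D, scale, start=0):
        step = n_frames_D ** scale                 # subsampling factor at this scale
        return [start + i * step for i in range(n_frames_D)]

    for scale in range(3):                         # n_scales_temporal = 3
        print(scale, temporal_indices(3, scale))   # [0, 1, 2], [0, 3, 6], [0, 9, 18]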

Hexbyte Hacker News Computers Citation

If you find this useful for your research, please cite the following paper.

@inproceedings{wang2018vid2vid,
   author    = {Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Guilin Liu
                and Andrew Tao and Jan Kautz and Bryan Catanzaro},
   title     = {Video-to-Video Synthesis},
   booktitle = {Advances in Neural Information Processing Systems (NIPS)},   
   year      = {2018},
}

Hexbyte Hacker News Computers Acknowledgments

We thank Karan Sapra, Fitsum Reda, and Matthieu Le for generating the segmentation maps for us. We also thank Lisa Rhee for allowing us to use her dance videos for training. We thank William S. Peebles for proofreading the paper.


This code borrows heavily from pytorch-CycleGAN-and-pix2pix and pix2pixHD.