Huge Cavity in Antarctic Glacier Signals Rapid Decay - Hexbyte Inc. – Glen Cove, NY

NEWS | JANUARY 30, 2019

Huge Cavity in Antarctic Glacier Signals Rapid Decay

Changes in surface height at Thwaites Glacier’s grounding line

Thwaites Glacier. Credit: NASA/OIB/Jeremy Harbeck

A gigantic cavity – two-thirds the area of Manhattan and almost 1,000 feet (300 meters) tall – growing at the bottom of Thwaites Glacier in West Antarctica is one of several disturbing discoveries reported in a new NASA-led study of the disintegrating glacier. The findings highlight the need for detailed observations of Antarctic glaciers’ undersides in calculating how fast global sea levels will rise in response to climate change.

Researchers expected to find some gaps between ice and bedrock at Thwaites’ bottom where ocean water could flow in and melt the glacier from below. The size and explosive growth rate of the newfound hole, however, surprised them. It’s big enough to have contained 14 billion tons of ice, and most of that ice melted over the last three years.

“We have suspected for years that Thwaites was not tightly attached to the bedrock beneath it,” said Eric Rignot of the University of California, Irvine, and NASA’s Jet Propulsion Laboratory in Pasadena, California. Rignot is a co-author of the new study, which was published today in Science Advances. “Thanks to a new generation of satellites, we can finally see the detail,” he said.

The cavity was revealed by ice-penetrating radar in NASA’s Operation IceBridge, an airborne campaign beginning in 2010 that studies connections between the polar regions and the global climate. The researchers also used data from a constellation of Italian and German spaceborne synthetic aperture radars. These very high-resolution data can be processed by a technique called radar interferometry to reveal how the ground surface below has moved between images.
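The core of the interferometric measurement described above can be sketched in a few lines. This is an illustrative toy, not the study's actual processing chain; the wavelength value is an assumption for X-band sensors, the band used by the Italian and German satellites mentioned.

```python
import math

# Toy sketch of repeat-pass radar interferometry: the phase difference
# between two SAR images of the same spot converts to line-of-sight
# ground motion via the radar wavelength. Because the radar pulse
# travels out and back, one full 2*pi phase cycle corresponds to half
# a wavelength of motion.

def los_displacement(phase_diff_rad, wavelength_m):
    """Convert an interferometric phase difference (radians) to
    line-of-sight displacement (meters)."""
    return phase_diff_rad * wavelength_m / (4 * math.pi)

# X-band radars (~3.1 cm wavelength, assumed here for illustration)
# resolve one full phase cycle as ~1.55 cm of ground motion.
motion_per_cycle_m = los_displacement(2 * math.pi, 0.031)
```

That centimeter-scale sensitivity is what lets small vertical shifts at a grounding line show up between satellite passes.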

“[The size of] a cavity under a glacier plays an important role in melting,” said the study’s lead author, Pietro Milillo of JPL. “As more heat and water get under the glacier, it melts faster.”

Numerical models of ice sheets use a fixed shape to represent a cavity under the ice, rather than allowing the cavity to change and grow. The new discovery implies that this limitation most likely causes those models to underestimate how fast Thwaites is losing ice.

About the size of Florida, Thwaites Glacier is currently responsible for approximately 4 percent of global sea level rise. It holds enough ice to raise the world ocean a little over 2 feet (65 centimeters) and backstops neighboring glaciers that would raise sea levels an additional 8 feet (2.4 meters) if all the ice were lost.

Thwaites is one of the hardest places to reach on Earth, but it is about to become better known than ever before. The U.S. National Science Foundation and the British Natural Environment Research Council are mounting a five-year field project to answer the most critical questions about its processes and features. The International Thwaites Glacier Collaboration will begin its field experiments in the Southern Hemisphere summer of 2019-20.

How Scientists Measure Ice Loss

There’s no way to monitor Antarctic glaciers from ground level over the long term. Instead, scientists use satellite or airborne instrument data to observe features that change as a glacier melts, such as its flow speed and surface height.

Another changing feature is a glacier’s grounding line – the place near the edge of the continent where it lifts off its bed and starts to float on seawater. Many Antarctic glaciers extend for miles beyond their grounding lines, floating out over the open ocean.

Just as a grounded boat can float again when the weight of its cargo is removed, a glacier that loses ice weight can float over land where it used to stick. When this happens, the grounding line retreats inland. That exposes more of a glacier’s underside to sea water, increasing the likelihood its melt rate will accelerate.

An Irregular Retreat

For Thwaites, “We are discovering different mechanisms of retreat,” Milillo said. Different processes at various parts of the 100-mile-long (160-kilometer-long) front of the glacier are putting the rates of grounding-line retreat and of ice loss out of sync.

The huge cavity is under the main trunk of the glacier on its western side – the side farther from the West Antarctic Peninsula. In this region, as the tide rises and falls, the grounding line retreats and advances across a zone of about 2 to 3 miles (3 to 5 kilometers). The glacier has been coming unstuck from a ridge in the bedrock at a steady rate of about 0.4 to 0.5 miles (0.6 to 0.8 kilometers) a year since 1992. Despite this stable rate of grounding-line retreat, the melt rate on this side of the glacier is extremely high.

“On the eastern side of the glacier, the grounding-line retreat proceeds through small channels, maybe a kilometer wide, like fingers reaching beneath the glacier to melt it from below,” Milillo said. In that region, the rate of grounding-line retreat doubled from about 0.4 miles (0.6 kilometers) a year from 1992 to 2011 to 0.8 miles (1.2 kilometers) a year from 2011 to 2017. Even with this accelerating retreat, however, melt rates on this side of the glacier are lower than on the western side.

These results highlight that ice-ocean interactions are more complex than previously understood.

Milillo hopes the new results will be useful for the International Thwaites Glacier Collaboration researchers as they prepare for their fieldwork. “Such data is essential for field parties to focus on areas where the action is, because the grounding line is retreating rapidly with complex spatial patterns,” he said.

“Understanding the details of how the ocean melts away this glacier is essential to project its impact on sea level rise in the coming decades,” Rignot said.

The paper by Milillo and his co-authors in the journal Science Advances is titled “Heterogeneous retreat and ice melt of Thwaites Glacier, West Antarctica.” Co-authors were from the University of California, Irvine; the German Aerospace Center in Munich, Germany; and the University Grenoble Alpes in Grenoble, France.

News Media Contact

Esprit Smith
Jet Propulsion Laboratory, Pasadena, California
818-354-4269
Esprit.smith@jpl.nasa.gov

Brian Bell
University of California, Irvine
949-824-8249
bpbell@uci.edu

Written by Carol Rasmussen
NASA’s Earth Science News Team

https://www.jpl.nasa.gov/news/news.php?feature=7322

Goodbye to a beauty in the night sky

January 30, 2019 by Jeff Heinrich, University of Montreal
(L) A view of Eta Carinae by NASA’s Hubble Space Telescope in 2000. (R) What the star could look like in 2032, when it overshadows its nebula.

For over a century and a half, Eta Carinae has been one of the most luminous – and most enigmatic – stars of the southern Milky Way.

Part of its nature was revealed in 1847, when, in a giant eruption, it ejected a nebula called the Homunculus (“little man”). The event made Eta Carinae the second-brightest star in the sky after Sirius, visible even in broad daylight and (later) easily distinguishable from other, similarly unstable stars called Luminous Blue Variables, whose nebulae are not so clearly visible.

Aside from making Eta Carinae one of the most beautiful and frequently photographed objects in the night sky, the giant Homunculus contains information about its parent star, ranging from the energy of its expansion to its bipolar outflow and chemical composition.

In as little as a decade from now, however, we will no longer be able to see the nebula clearly.

A recent study indicates that the Homunculus will be obscured by the increasing brightness of Eta Carinae itself. The star is brightening so rapidly, in fact, that by 2036 it will be 10 times brighter than its nebula, which in the end will make it indistinguishable from other LBVs.

Good news

But there’s an upside.

A team of 17 researchers led by Brazilian astronomer Augusto Damineli, with input from Université de Montréal’s Anthony Moffat, believes that the increasing brightness of Eta Carinae is not intrinsic to the star itself, as is commonly assumed. In fact, it is likely caused by the dissipation of a dust cloud positioned exactly in front of the star as seen from Earth.

This cloud, the researchers posit in a new study in the Monthly Notices of the Royal Astronomical Society, completely shrouds the star and its winds, blotting out much of its light emanating towards Earth. The surrounding Homunculus, by contrast, can be seen directly because it is 200 times larger than the obscuring cloudlet and its brightness is thus almost unaffected.

In 2032 (with an uncertainty of plus or minus four years), the dusty cloud will have dissipated, so that the brightness of the central star will no longer increase and the Homunculus will be lost in its glare, the research team believes.

And that will provide an opportunity for deeper study of Eta Carinae itself, perhaps even showing that it is not one but in fact two stars.

“There have been a number of recent revelations about this unique object in the sky, but this is among the most important,” said Moffat. “It may finally allow us to probe the true nature of the central engine and show that it is a close binary system of two very massive interacting stars.”

More information: A. Damineli et al., “Distinguishing circumstellar from stellar photometric variability in Eta Carinae,” Monthly Notices of the Royal Astronomical Society (2019). DOI: 10.1093/mnras/stz067. arXiv:1901.00531 [astro-ph.SR], arxiv.org/abs/1901.00531

Journal reference: Monthly Notices of the Royal Astronomical Society

Provided by: University of Montreal

Read more at: https://phys.org/news/2019-01-goodbye-beauty-night-sky.html#jCp

Research report: Psycho-emotional status but not cognition is changed under the combined effect of ionizing radiations at doses related to deep space missions

V.S. Kokhan, E.V. Shakhbazian, N.A. Markova
https://doi.org/10.1016/j.bbr.2019.01.024

Highlights

• Exposure to ionizing radiation at Mars exploration mission doses significantly modulates psycho-emotional status, but not cognition.

• The alteration of serotonin metabolism indicates a wide range of neuroadaptive rearrangements rather than a pathophysiologic process.

• We believe that the change in psycho-emotional status is indirectly responsible for the shift in cognitive abilities.

Abstract

Human spaceflight is one of the great challenges humanity is working on. The vulnerability of astronauts’ task performance to ionizing radiation is one of the major factors limiting deep space missions. In this work, we study the effect of ionizing radiation (γ-quanta and 12C6+ in combination) on the cognitive abilities and psycho-emotional status of Wistar rats. Irradiation led to hyperlocomotion, increased anxiety-like behavior, suppressed depressive-like behavior and enhanced spatial learning. These data are consistent with the neurochemical/molecular analysis: enhanced monoaminergic innervation within the hypothalamus (HYP), inhibition of serotonin turnover in the prefrontal cortex and neurokinin 1 receptor overexpression in the amygdala (AMY). In addition, we observed decreased expression of certain biomolecules in the AMY (5-HT2c and 5-HT3) and in the HYP (5-HT2a, 5-HT4 and VMAT2), which can be explained as neuroadaptive changes. Thus, ionizing radiation exposure significantly modulates psycho-emotional status. With that, for the first time we obtained data indicating that radiation effects at the doses and composition of interplanetary space (in terrestrial modeling) could be relatively safe for cognitive functions.

Keywords

CNS risks; Ionizing radiation; Psycho-emotional status; Cognition; Serotonin metabolism; Mars exploration mission

© 2019 Elsevier B.V. All rights reserved.

https://www.sciencedirect.com/science/article/pii/S0166432818313573

WannaCry Hero Marcus Hutchins’ New Legal Woes Spell Trouble for White Hat Hackers

British security researcher Marcus Hutchins, who was indicted and arrested last summer for allegedly creating and conspiring to sell the Kronos banking trojan, now faces four additional charges. Hutchins, also called MalwareTech and MalwareTechBlog, is well-known in the security community for slowing the spread of WannaCry ransomware as it tore through the world’s PCs in May 2017. And as the months have dragged on since his indictment—he has been living in Los Angeles on bail—the latest developments in the case have stoked further fears among white hat hackers that the Department of Justice wants to criminalize their public interest research.

Wednesday’s superseding indictment, which ups the total number of charges Hutchins faces to 10, alleges that in addition to Kronos, Hutchins also created a hacking tool called UPAS Kit, and sold it in 2012 to a coconspirator known as “VinnyK” (also called “Aurora123” and other monikers). Prosecutors also assert that Hutchins lied to the FBI during questioning when he was apprehended in Las Vegas last year. The original Hutchins indictment listed a redacted defendant along with Hutchins; the superseding indictment only lists Hutchins, which indicates to some observers that a shift has occurred.

“Back when Hutchins was originally indicted I thought there was a possibility that he might be cooperating and that he might get favorable treatment because of WannaCry. Now that seems way more unlikely,” says Marcus Christian, a cybersecurity-focused litigation partner at the firm Mayer Brown, who was previously a prosecutor in the Florida US Attorney’s Office. “It’s usually a bad sign when they’re charging additional crimes, particularly when one has to do with lack of honesty, so there could be someone else who’s cooperating.”

One of Hutchins’ lawyers, Brian Klein, said in a tweet on Wednesday that the new indictment is “meritless” and “only serves to highlight the prosecution’s serious flaws.” Klein added, “We expect @MalwareTechBlog to be vindicated and then he can return to keeping us all safe from malicious software.”

The superseding indictment in the Hutchins case raises further alarms for security researchers who already saw the case as problematic. The indictment expands the list of alleged “overt acts” that the prosecution claims fueled the conspiracy, which broadens the implications for white hat hackers as well.

‘It’s usually a bad sign when they’re charging additional crimes, particularly when one has to do with lack of honesty.’

Marcus Christian, Mayer Brown

The case has always charged Hutchins under the Computer Fraud and Abuse Act, which traditionally applies to illicit hacking cases. CFAA prosecutions, though, have generated tension between law enforcement and the security community for decades, with researchers and digital rights advocates arguing that the deeply flawed law is open to manipulation and overuse. But Hutchins’ case actually goes a step further. Both indictments have also included counts of wiretapping, in keeping with a broader trend toward classifying malware that can steal data as an “intercepting device.”

“The word ‘device’ is very fuzzy,” says Ahmed Ghappour, an associate law professor at Boston University who specializes in cybersecurity and criminal law. “If you were to stretch it to include development of malware, wiretapping provisions potentially have a broader scope than the Computer Fraud and Abuse Act and could really do an end-run on security research.”

In one episode noted by the superseding indictment, Hutchins evaluated the hacking tool Phase Bot in late 2014 and blogged about its shortcomings and weaknesses. Phase Bot is a type of “fileless malware” that is noteworthy as part of a larger trend in concealing hacking tools from detection. The indictment interprets Hutchins’ analysis of Phase Bot as an attempt to discredit a competitor of the Kronos banking trojan. But the two types of malware are very different, and Hutchins’ blog posts, if anything, would have helped Phase Bot’s developers improve their tool—a strange approach if Hutchins wanted to undermine the malware—at the same time that the research gave defenders a better understanding of how to defeat it.

Monkeying with malware platforms, reverse engineering samples, and analyzing how hacking tools work are routine activities for white hat hackers, and crucial components of the defense intelligence pipeline in private industry. By identifying these types of actions as criminal in Hutchins’ case, the superseding indictment could have a chilling effect on digital defense research.

The security community has rallied to Hutchins’ cause, which appears to have empowered and emboldened him. “It’s been overwhelming the amount of people reaching out to show support lately,” he tweeted on Thursday. But in spite of his resources, which aren’t a given in these types of cases, Hutchins still faces the very real risk of a conviction and prison sentence.

“It’s an ongoing investigation and both sides can continue gathering evidence to present at trial,” BU’s Ghappour says. “But researchers have a legitimate cause for concern that they might be subject to a technicality in the law. Frankly, it’s something that we should all be concerned about, because we rely on these people for our security.”



‘Ocean’s 8’ Is Good, but It’s Time for New Women’s Stories, Not Just Gender-Swaps

For the entire history of moviemaking, men have been the focal point. There have been exceptions, of course, but the epic hero journeys, the buddy comedies, the spy thrillers—most of them revolved around dudes. Until recently. From calls for equal pay for actresses to calls for more women-led films, Hollywood has slowly been making strides to rectify its gender imbalances. There have been growing pains, but progress has been made. Things are working! There’s just one issue: The success of female-fronted movies is always measured against the boys who came before.

The most recent example, naturally, is Ocean’s 8, which opens today. A continuation of the Steven Soderbergh-helmed franchise starring George Clooney, it’s built around the conceit What if women had been involved in those heists? (Yes, I know Danny Ocean employed Julia Roberts in later installments, but Tess Ocean’s skill was playing a Julia Roberts lookalike, not, you know, hacking a security system.) As the story goes, Danny had a sister, Debbie (Sandra Bullock), who is also good at pulling off a job and has a whole cadre of other female friends who are too. Directed by Gary Ross (Soderbergh served as a producer), it’s 100 minutes of fast-talking, fast-acting fun. It’s just like the Ocean’s movies that came before it.

And that’s the problem. The movie’s critical and economic reception will forever be measured against those of the previous installments. As Hollywood has broadened its horizons to include movies led by women, written by women, directed by women, one question has always loomed: Will these films do as well as those from their male counterparts? Will critics like them? Will audiences go see them? Because the (very wrong) collective wisdom of Tinseltown had stipulated that audiences only wanted male-led films, movies that bucked that wisdom always got heaped with the burden of Having Something To Prove. It happened with The Hunger Games; it happened with the all-female Ghostbusters. It’s happening again with Ocean’s 8.

In a way, this can be a good thing. So far, the positive reviews have largely pointed out that it has the magic that the original three films did. When the movie’s early box-office tracking numbers came in, reports noted that it was in line to claim more cash than Ocean’s Eleven. (Though some were quick to wonder whether it would meet the same poor-performing fate as Ghostbusters.) By both of those counts, Ocean’s 8 is doing part of what it set out to do—prove that a previously bro-tastic franchise could be executed successfully with a cast of women. Behold: It is proven.

How it’s proven—and what it’s proving—is another story. In the lead-up to release, trade publications ran stories on how the movie’s studio, Warner Bros., and theater chains were going to market the film. While the studio seems to be leaning on the movie’s stars—Bullock, Anne Hathaway, Cate Blanchett, Rihanna, Mindy Kaling, Sarah Paulson, Awkwafina, and Helena Bonham Carter—theaters are drumming up interest with themed screenings. I went to one of these, a black-tie event at the Alamo Drafthouse in Brooklyn; it was a hoot, and the most dressed-up crowd I’d ever seen at an Alamo Drafthouse.

Thanks in part to the misogynist, racist reactions to movies like Ghostbusters, Wonder Woman, and even Rogue One: A Star Wars Story, fans now know that if they want to keep seeing movies like this, they have to show up in droves on opening night to prove they’re being serviced. No surprise that studios and theaters are more than happy to cater to that.

But that all side-steps the real issue. No matter how good Ocean’s 8 is—and it is—it will never be judged on its own merits. It’ll only ever be seen as an example of women being able to do something as well as men. And that’s the truly unfortunate thing. Ocean’s 8 is full of moments that speak to women’s experiences, moments that are more than just women doing things typically thought of as “guy stuff,” but because of the very nature of the “____, but with women” concept, they get drowned out.

Earlier this week, writing for Vulture about the [fan trolling](https://www.wired.com/story/star-wars-toxic-fandom/) of Star Wars: The Last Jedi actress Kelly Marie Tran, Abraham Riesman wrote:

It might be incumbent upon the rest of us to stop caring so much about Star Wars and Marvel movies and other empires originally built in less-progressive eras. … Maybe it’s time for us to put our heads together and walk a new path, one that doesn’t put faith in corporate mega-properties that are predicated on appealing to as many people as possible. … Could it be that the best route forward is to start putting more dollars into truly new stories, ones that center traditionally marginalized creators and characters, that are not just tilting toward our values, but are instead built on them?

Movies in the Ocean’s franchise don’t approach the scope of a Star Wars or Marvel film, but what he’s saying still applies. Ocean’s 8 holds its own, but it could’ve been better if that same group of eight fantastic women had been hired to pull off a job of their own design. When that happens, it will be clear that women can truly steal the show.



Behind the Scenes With the Stanford Laptop Orchestra

Behind the Scenes With the Stanford Laptop Orchestra

Ten days before the big concert, the members of the Stanford Laptop Orchestra are performing technology triage. Rehearsal has only just started, but already things seem to be falling apart. First there was trouble with the network that connects the laptops to one another. Then one of the laptops crashed; its human component, a graduate student named Juan Sierra, groaned loudly. One of the hemispherical speakers emits a low, crunchy noise, like a fart.

The orchestra members have gathered at Stanford’s Center for Computer Research in Music and Acoustics to rehearse a new kind of musical composition. Together, sitting on meditation pillows in front of MacBooks, they create songs that stretch the definition of music. The orchestra plays laptops like accordions, turns video games into musical scores, and harnesses face-tracking software to turn webcams into instruments. But at this rehearsal, the Stanford Laptop Orchestra (SLOrk) looks less like the symphony of the future and more like an overworked IT department.

“Slorkians! Lend me your ears,” shouts Ge Wang, the SLOrk’s founder and director. He wears a grey T-shirt and black pants, as he does every day, his black hair down to his shoulders. Wang gives the group five more minutes to troubleshoot and then, he says, it’s time for rehearsal to begin.

SLOrk’s Mark Hertensteiner conducts a piece.

Ge Wang

Fixing a broken network isn’t as simple as a replacing a snapped string on a violin. But in a laptop orchestra, the potential for disaster is part of the delight. Since it was founded in 2008, the SLOrk has been making music that surprises audiences while it subverts the concept of orchestral performance. The compositions, part-machine and part-human, don’t always go according to plan. Technical difficulties are all but guaranteed. Now, as the orchestra prepares for its tenth anniversary show on Saturday, June 9 at Stanford’s Bing concert hall, it’s playing with those same principles—and shaping the next decade of musical experimentation.

Stanford’s Center for Computer Research in Music and Acoustics lives in a Spanish Gothic mansion on campus called The Knoll. Originally the house of Ray Lyman Wilbur, Stanford’s president in 1915, the estate sits on a high hill where two of Stanford’s main roads intersect; from a back window, rolling green hills give way to the horizon.

The program was founded in 1964 by John Chowning, a composer by training who’d come to Stanford for his doctorate in music composition. Chowning had never seen a computer before, so when a colleague showed him a paper about programming instruments with machines, Chowning was intrigued. A few years later, he would create the Center for Computer Research in Music and Acoustics—abbreviated as CCRMA, and pronounced like “karma”—as an offshoot of Stanford’s new AI laboratory. It would be a space for musicians, like him, as well as Stanford’s litany of engineers, scientists, and programmers.

Computer music proved fruitful for Chowning. A few years after founding CCRMA, he would make a major breakthrough by discovering frequency modulation synthesis, a technique for coaxing rich, instrument-like tones out of machines. It could make the stroke of a key sound like the reed-tone of a clarinet, or make a cell phone ring tone sound like a recognizable song. Chowning patented the technology and licensed it to Yamaha, leading to the Yamaha DX7, the first commercially viable digital synthesizer, and the rise of electronic keyboards. It became Stanford’s most lucrative patent at the time. A few years later, in 1986, the university gave CCRMA the mansion on the hill.
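The idea behind FM synthesis fits in a few lines: a carrier sine wave's phase is modulated by a second sine wave, and the modulation index controls how rich the resulting spectrum is. This is a minimal, hypothetical sketch of the principle, not Chowning's or Yamaha's actual implementation.

```python
import math

def fm_tone(f_carrier, f_mod, index, duration=1.0, sample_rate=44100):
    """FM synthesis: sin(2*pi*fc*t + index * sin(2*pi*fm*t)).

    index = 0 gives a pure sine; raising it adds sidebands around the
    carrier, producing clarinet-, brass-, or bell-like timbres from
    just two oscillators."""
    n = int(duration * sample_rate)
    return [
        math.sin(
            2 * math.pi * f_carrier * (i / sample_rate)
            + index * math.sin(2 * math.pi * f_mod * (i / sample_rate))
        )
        for i in range(n)
    ]

# A 440 Hz carrier modulated at 110 Hz: index 0 is a plain sine wave,
# while index 5 is audibly brighter and more complex.
pure = fm_tone(440.0, 110.0, index=0.0, duration=0.01)
bright = fm_tone(440.0, 110.0, index=5.0, duration=0.01)
```

The payoff that made the DX7 possible is in that single `index` knob: one cheap parameter sweeps a sound from a flute-like sine to a clangorous bell.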

Tucker Leavitt performs with SLOrk.

Ge Wang

Since then, Stanford’s program has created mathematical models to simulate the crisp sound of a Steinway, or the sliding sweetness of a violin. Other programs sprang up across the nation, at universities like Princeton, Columbia, and Johns Hopkins. The field has devoted considerable energy to reproducing acoustic instruments in digital formats—but it’s also invented entirely new ones.

“Nothing’s better at being a cello than a cello,” says Wang. “So we’re not trying to make a cello. We’re trying to make something you don’t have a name for yet.”

“The question of the future of instruments is an interesting one,” Chowning said in a Stanford press release from 1994, introducing digital waveguide synthesis, which would pave the way for a new class of electronic synthesizers. “Some people think that totally new instruments will be developed and take over. But I don’t think so, because so much of music is tied to repertoire and tradition, which is tied to specific instruments.”

But what if, as an experiment, you took something like an orchestra—a type of musical ensemble steeped in repertoire and tradition—and subverted it with entirely new instruments? What would you learn about the nature of music, the limitations of certain instruments, or the qualities of art that transcend mediums? What would you gain from the unlikely pairing of an orchestra, “an almost archaic institution whose continued existence is something of a miracle,” as computer music researcher Dan Trueman once put it, with the technological newcomer: a laptop?

In 2005, Trueman and a fellow Princeton computer music researcher decided to see if it would work. The two founded the Princeton Laptop Orchestra, an ensemble of 15 “laptop-based meta-instruments.” (Wang, a Princeton graduate student at the time, was also a founding member.) They dreamed of challenging the very idea of an instrument, of an ensemble, of the relationship between human and machine. An orchestra captured the broader narratives of nations, cultures, modern institutions over time. Could a laptop orchestra provide the next chapter in that story?

The Stanford Laptop Orchestra meets to rehearse every Wednesday night in the spring from 7:30 to 10:30 pm. (The late hours are a remnant of Wang’s night-owl habits as a graduate student.) It’s a for-credit course at Stanford—Music 128, cross-listed in the computer science department as CS 170—but getting in isn’t easy. The group of 15 students includes those with computer science credentials, and those with more traditional music backgrounds, but neither is enough to become a great laptop orchestra player. The most important thing is curiosity. “We’re unified by this interest to make music together with computers,” says Wang, “and to figure out what that means.”

Wang likes to call SLOrk a kitchen of sound. “We can go to a restaurant, order delicious food, and enjoy that,” he says. “But there’s a special joy in going back into the kitchen with raw ingredients and being able to concoct your own dish. The process of making—and eating—your own creation carries with it its own satisfaction.”

In the ten years that SLOrk has existed, it’s composed over 200 original works and created almost as many new instruments. Most of these works have little in common, but they all start with the same set-up: Every orchestra member gets a MacBook, propped up on an Ikea breakfast tray, with a meditation pillow beside it. The laptop connects to a homespun hemispherical speaker, made by adding car speaker drivers and high-efficiency amplifiers to Ikea salad bowls. (From far away, they look a bit like Minions.) Wang created the speakers during the first year of SLOrk, with an aim to add an acoustic element to an otherwise machine-heavy ensemble. “We want the computer instruments to seem more like acoustic instruments where the sound isn’t coming from a PA system around you but from the artifact itself,” he says. While the MacBooks and cables have been replaced a few times, the hemispherical speakers are the same ones SLOrk used ten years ago.

Every station also includes a GameTrak, a game controller with a retractable cable. GameTraks were originally used in golf simulation video games, where they could turn someone’s virtual golf swing into data points. It was a commercial flop, but computer music researchers immediately saw the appeal. “We bought no less than 100 of them at massively discounted prices,” says Wang.

The Stanford Laptop Orchestra uses a range of noise-makers, from game controllers and physical instruments to, yes, laptops.

Ge Wang

Kimberly Juarez-Rico gets tuneful.


The device maps movement in three-dimensional space. For a laptop orchestra, that means turning fluid movement into sound value. “It opens up the infinite space of human music, and the dancelike qualities of musical performance,” says Matt Wright, a longtime SLOrkian and one of the orchestra’s instructors. “You can put one in someone’s hands and say, ‘Here. Make an instrument out of this.'”

In past performances the ensemble has used GameTraks to play video games that translate gameplay into melodic compositions, or has finger-plucked the cable like a string on a traditional instrument. One composition in SLOrk’s upcoming show introduces a new instrument, created by hanging GameTraks upside down on a beam and weighting them with various wooden blocks. Performers push them like swings on a playground to create the song. The performance is wildly playful, like watching kids on a playground discover the delightful sounds of their own laughter for the first time.
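The kind of movement-to-sound mapping described above can be sketched in a few lines. To be clear, this is a hypothetical illustration: SLOrk instruments are real programs written per composition (often in ChucK, the language Wang created), and the frequencies, scale, and axis assignments below are all invented.

```python
# A minimal, hypothetical sketch of mapping a GameTrak-style 3D tether
# position to sound parameters.

def gametrak_to_sound(x, y, z):
    """Map a 3D tether position to (frequency_hz, amplitude).

    x, y: normalized horizontal angles in [-1.0, 1.0]
    z: cable extension in meters
    """
    # Pulling the cable out raises the pitch: one octave per meter of
    # extension, starting from A3 (220 Hz).
    frequency = 220.0 * 2.0 ** z
    # Snap the left/right angle to a pentatonic scale degree so sweeping
    # the hand sideways stays "musical" rather than sliding continuously.
    pentatonic = [0, 2, 4, 7, 9]  # semitone offsets within one octave
    degree = pentatonic[int((x + 1.0) / 2.0 * (len(pentatonic) - 1))]
    frequency *= 2.0 ** (degree / 12.0)
    # Raising the hand (y axis) fades the note in; clamp to [0, 1].
    amplitude = max(0.0, min(1.0, (y + 1.0) / 2.0))
    return frequency, amplitude
```

In a real instrument this function would run inside an audio loop, re-read many times per second as the performer moves.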


During the SLOrk term, each student creates their own instruments, composes their own scores, and performs them with the class. There are virtually no rules, other than the limits of imagination and programmability. One student, Kunwoo Kim, used a face-tracking program called FaceOSC to turn facial movements into sound. He and fellow SLOrk member Avery Bick stared into their laptop webcams while opening their eyes wide, raising their eyebrows, or stretching their mouths to scream, to control the pitch and tempo of the face-tracking instrument.

“Using a face as a controller was a very interesting concept for us,” he says. “We wanted to deliver a human message that uses human parameters.”
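FaceOSC streams face metrics as OSC messages (paths like `/gesture/mouth/height`); an instrument on the receiving end just maps those numbers to sound parameters. The sketch below stands in the OSC stream with a plain dict, and the specific mapping is invented for illustration, not Kim's actual instrument.

```python
# Hypothetical sketch: map FaceOSC-style face metrics to pitch and tempo.

def face_to_params(face, base_hz=220.0, base_bpm=90.0):
    """Map normalized face metrics to (pitch_hz, tempo_bpm).

    face: dict with values in [0, 1]:
      'mouth_height'  -- how wide the mouth is open
      'eyebrow_raise' -- how far the eyebrows are raised
    """
    # Opening the mouth wider sweeps the pitch up to two octaves.
    pitch = base_hz * 2.0 ** (2.0 * face.get("mouth_height", 0.0))
    # Raising the eyebrows speeds the pulse up to double tempo.
    tempo = base_bpm * (1.0 + face.get("eyebrow_raise", 0.0))
    return pitch, tempo
```

A real version would register these as handlers on an OSC server listening to FaceOSC's output port rather than reading a dict.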

Kim came to Stanford after earning a bachelor’s in mechanical engineering and a master’s in electrical engineering. He joined CCRMA because he wanted an interdisciplinary program that would let him continue engineering while also studying music; when he heard about SLOrk, he figured he’d give it a shot.

“I had no idea what was going on,” he says about his first day in the orchestra.

Soon, though, things started to click—and Kim found something in SLOrk that he’d never found before in his engineering coursework. The point of SLOrk isn’t to have a direction. It’s to find a direction.

“The engineering that I have been doing was about solving problems,” says Kim. “But in SLOrk, there’s no problem to solve. We try to cover more of the sentimental side of human beings. And I think that’s very interesting. You’re actually trying to say something about humanity through the computers.”

The nature of computer music means that SLOrk performances can sometimes be hard to grasp. The orchestra’s music often sounds like a chorus of beep-boops, or worse: Some compositions create screechy, metallic sounds, the noise a computer overlord would make when demolishing the human race. Other passages just sound weird, the result of too much randomization from the computer program used to create the song.

“We don’t always like the music we make,” says Wang. “The litmus test is: Is it interesting?”

By SLOrkian standards, “interesting” covers a lot of ground. A performance that makes use of technology in a novel way (like Kim’s face-tracking webcam instrument) usually qualifies, as do performances that subvert common conceptions of music (like a composition from last year’s concert, in which Trijeet Mudhopadhyay invited the audience to open a web page on their phones that made musical sounds). Once, SLOrk students created a piece performed with two Oculus Rifts attached to Leap Motion controllers, which tracked arm movement to produce some of the sounds. The experiment brought computer music into the virtual world, even if the “music” it created wasn’t exactly nice. Wang pushes his students to think about aesthetics, even when the results are sometimes bizarre.


But SLOrk can also be beautiful. With computers, the orchestra can simulate the sound of a violin 100 feet tall. It can create sliding sounds on static instruments. And it can prototype sound through human gestures—even human expressions—to make music the likes of which we’ve never heard, or seen, before.

“Given that you have infinite options, and it’s very hard to control everything, people sometimes [create compositions] that are crazy,” says Juan Sierra, a master’s student at CCRMA and a member of SLOrk. “They don’t need to be crazy all the time. It’s not impossible to create very tonal and very traditional music with computers.”

Sierra, who comes from a background in sound engineering, gravitates toward more melodic sounds. Together with Doga Cavdir, he created a piece for the last SLOrk concert reminiscent of a traditional string orchestra. Cavdir played a solo, stretching the cable of the GameTrak in melodic, emotional gestures. From the back of the audience, or by the look of concentration on her face, you would be forgiven for thinking she was playing a cello instead of an outdated video game controller.

In the penultimate rehearsal before the tenth anniversary show, Wang teaches the ensemble one of the oldest laptop orchestra compositions. The piece, “Non-specific Gamelan Taiko Fusion,” was composed for the first-ever laptop orchestra performance, at Princeton in 2005. Wang calls it a “classic,” in a genre that’s anything but.

The piece involves the entire orchestra, with each musician running a program on their laptop that looks like a primitive computer game. There are squares in different colors; clicking on them cues various percussive sounds. A human conductor (Wang, in this case) controls the timbre of the bells and directs different members of the orchestra to play at different times. The result is a percussive melody that starts as wind chimes, then gives way to bossier sounds, like bongos and taiko drums. It doesn’t require much skill on the part of the players, who are effectively learning a new instrument. Just some skillful programming, and a willingness to play along.
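The player-side logic described above is simple enough to sketch. The colors, sample names, and timbre scheme here are invented for illustration; the real piece is its own program with its own networking between conductor and players.

```python
# Toy sketch of a "Non-specific Gamelan Taiko Fusion"-style player station:
# colored squares cue percussive sounds, and a conductor remotely sets timbre.

SQUARES = {
    "red": "taiko_low",
    "blue": "bell_high",
    "green": "bongo_mid",
    "yellow": "wind_chime",
}

class GamelanStation:
    def __init__(self):
        self.timbre = 0.5    # set remotely by the conductor, 0.0-1.0
        self.cue_queue = []  # (sample, timbre) pairs waiting to be played

    def on_click(self, color):
        """Clicking a colored square cues its percussive sound."""
        sample = SQUARES.get(color)
        if sample is not None:
            self.cue_queue.append((sample, self.timbre))

    def set_timbre(self, value):
        """The conductor broadcasts a new timbre to every station."""
        self.timbre = max(0.0, min(1.0, value))
```

In performance, something like `on_click` fires from the mouse handler and an audio engine drains `cue_queue` on the beat.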

Watching them, it’s easy to see beyond the laptop. No one here knows exactly what they’re doing, or even if they’re doing it right. But unlike nearly every other exercise in computer programming, it doesn’t matter. They’re just a group of people learning to play, as if for the very first time.


Wanna Pull Water Out of Air? Grab Some Ions or a Weird Sponge

Find yourself adrift at sea, surrounded by undrinkable water, and you will parch to death. Find yourself lost in a desert and you will meet the same fate, also surrounded by water, also undrinkable. That’s because, even in the driest of lands, the air is loaded with water molecules—they just won’t do you any good.

Devices exist that can pull that water out of the air and convert it into liquid, but they are bulky and use a lot of energy. A pair of studies out today in Science Advances, however, describe clever technologies that could suck water right out of the air, one using zero energy and the other using very little. The techniques won’t quench the collective thirst of humanity, but they’ve got serious potential to help us augment water supplies in particularly dry places, especially as climate change wreaks its havoc.

The first technology isn’t a new concept, but a supercharged version of an old one: fog collection. Fog is just a cloud of tiny, innumerable droplets of water. Collect enough of those droplets and you can get yourself a glass of water. In Chile, for instance, fine nets capture fog and funnel it into pipes for drinking and even beer-making.

Great, but not as great as it could be. “The efficiency of these sort of passive fog collectors is on the order of anywhere between 1 and 2 percent, it’s extremely poor,” says MIT mechanical engineer Kripa Varanasi, coauthor of one of the new papers. When foggy wind passes through your typical netting, most of it flows through the holes between the strands. That means it takes a long time for enough water droplets to smack into the strands and accumulate there. So just make a finer net, right? Nope—the wind just tries to go around it.

What you really want is for the water droplets to be attracted to the mesh. To do that, Varanasi turned to electric fields. In the lab, he propelled a stream of fog through an ion emitter, which in this case produces charged air atoms. “As these ions are moving forward, they get intercepted by the droplets, and the droplets get charged,” Varanasi says.

These ionized droplets are positively ga-ga for the mesh collector. Take a look at the GIF below. It starts off with fog flowing like normal, but once the ion emitter switches on, the fog can’t escape the collector. The effect is so powerful, water droplets that do make it through the mesh then make a U-turn and come right back for it, resulting in an efficiency of 99 percent. The trapped fog then drips as liquid water into a glass below.

Varanasi Research Group at MIT

Are you listening, San Francisco? Theoretically, any region with a healthy supply of fog could deploy nets and ion emitters, which may run at high voltage but actually draw a small current. In the lab, the system operates at 60 watts per square meter of mesh. Compare that to another technology used in thirsty places like India: “air water generators,” which act like refrigerators to cool the air and allow it to condense, but at considerable energetic cost.

So the ionization works, but you can’t just deploy it willy-nilly wherever there might be a little bit of fog. You’d want a lot of the stuff, and you’d want the system to know when’s best to switch on. “What you’d really need to turn this into a viable water supply is to have a good sense of when the fog is present,” says chemical engineer Greg Peters, who studies air water generation techniques. “If it’s just going to sit there being struck by lightning on a hilltop for half the year, then that’s a lot of sunk costs.”


The technology could even make its way into power plants, specifically cooling towers, which spew water vapor. It takes a lot of water to cool these things. Like, 39 percent of total freshwater withdrawals in the United States are earmarked for power plants. Over the course of a year, one facility can use as much water as 100,000 people. “We can capture the plumes and collect that water,” Varanasi says, something no other technology can do.

To use this technology to collect natural fog, though, you need natural fog, which deserts don’t really have much of. That’s where our second new technology comes in. Researchers at UC Berkeley have developed what is essentially a water battery: It charges at night and drains during the day.

The water battery is based on a material known as a metal-organic framework, in which the metal is zirconium and the organic part is carbon-based. Combined, the two substances form a powder—a framework with lots of space inside. A very fancy sponge, more or less.

“If you expose this material to humid air, the framework will get saturated with water molecules,” says chemist Eugene Kapustin, coauthor on the paper. “And then, because the water molecules don’t stick too tightly to the interior of the framework, we can release this water by heating the powder.”

The researchers took this metal-organic framework and spread it on top of a box. They then put this box inside another clear box with a lid. At night, they keep the lid open, letting air in. This air is relatively humid compared to the day. “During the day we simply close the lid of the outer box and expose it to sunlight,” Kapustin says. This heats the material and releases the water as vapor. “After 5 hours, at the bottom of the outer box we can see liquid water as it condenses on the walls and flows down.”

Sure, it doesn’t produce a tremendous amount of water at the moment: 7 ounces for every 2 pounds of metal-organic framework. But the researchers are testing an aluminum-based version of the material that is cheaper and twice as efficient. Scale up your box and add more of the metal-organic framework, and you collect even more water.

Also, the water battery can withstand at least 150 cycles without any degradation. “We analyzed the purity of the collected water, and we didn’t see any organic parts or inorganic parts,” Kapustin says. “So this tells us that the material is stable, and also we see that the performance of our device doesn’t decrease over time.”

Plus, the beauty of this system is its passivity—it uses only the power of the sun. And it works out in the wild, too—in tests in Arizona, the researchers got the thing to collect water even though humidity during the day dropped to 8 percent.

No, technologies like these won’t quench the world’s thirst. But they could help water-strapped areas follow perhaps the most important water rule of all: diversify your sources. Rely solely on infrastructure that pipes in faraway rainfall and you’re asking for trouble. Technologies like metal-organic frameworks and ionized fog collection won’t work everywhere, but one day they could help humanity avoid withering on the vine.


It’s Time for the Next Wave of Ocean Exploration and Protection

This week marks the 10th Annual World Oceans Day, a global confluence of ocean-awareness events intended to bring our oceans the level of public attention they deserve. As we both have had the opportunity to explore a fair amount of our globe’s seas, on this occasion we’d like to share our excitement and our vision for the future.

WIRED OPINION

ABOUT

Ray Dalio (@raydalio) is the founder of Bridgewater Associates and the OceanX initiative. Marc Benioff (@benioff) is CEO and chair of Salesforce, as well as founder of the Benioff Ocean Initiative at the University of California, Santa Barbara.

To us, the ocean is humanity’s most important and most under-examined treasure. While the world below the ocean’s surface is more than twice the size of the world above it and contains an estimated 94 percent of the space where life can exist on Earth, only 5 percent of the world’s oceans have been fully explored.

The ocean is critical to human life—more than 50 percent of the oxygen we breathe comes from it. It drives our weather, provides a nutritious food supply, and is a key source of commerce, supporting more than 28 million jobs in the United States alone. For those reasons, it deserves our reverence and protection. Instead, humans neglect it and treat it like a toilet that we overfish from.

We believe ocean exploration is more exciting and more important than space exploration. Yet it receives only about one one-hundredth as much funding. We want to change that by showing people the ecosystems and underwater habitats across the globe that are brimming with unexplored environments filled with species that have evolved in ways we cannot possibly imagine. By discovering and understanding these ecosystems, we can unlock cures to diseases, grow new foods, discover new medicines, create new industries, and fully understand our planet. The possibilities are endless, if we choose to open the door to them.

Thanks to profound technological advancements in recent years, we now have the potential to open these doors like never before. Using new sensors, submarine technology, and autonomous vehicles, humans have the opportunity to advance the public’s understanding and appreciation of the ocean, much as the development of scuba equipment and underwater cameras allowed ocean explorer Jacques Cousteau to captivate the world 50 years ago.

Using many of these technologies, the BBC’s recent Blue Planet II inspired a new generation of explorers to turn toward the ocean and prodded public policymakers to introduce new measures to prevent plastics pollution. When it sets sail next year, the Alucia2, a new vessel funded by OceanX, will be the most advanced vessel designed for both media production and cutting-edge scientific research, bringing the excitement of ocean discovery to the world as broadcast and digital programs and in real time.

Exploring our oceans is key to protecting them. As renowned oceanographer Sylvia Earle has said, “Far and away, the biggest threat to the ocean is ignorance.” Exploration is the key to ending that ignorance and making the oceans accessible, tangible, and exciting to the broader world so that people will understand and protect them.

Creating such understanding will ensure we don’t lose the richness of the biological assets in our oceans before they are even discovered. Protected areas, such as Papahānaumokuākea and the Pacific Remote Islands Marine National Monuments, serve as savings accounts that protect strategic areas of our ocean for future exploration and discovery while simultaneously producing more fish, food, and income for fisheries.

Even as we begin to explore the most remote reaches of our oceans, we are finding them sullied—with trash detected at the bottom of the Marianas Trench, the deepest part of our ocean, and floating in the most remote regions of Antarctica. This signals we must act now to ensure we discover more than plastic bottles during this next generation of ocean exploration.

There are many great government and non-profit organizations, philanthropists, scientists, and entrepreneurs doing critical and unheralded work to protect our oceans—but they need more support. Not just in the form of funding, but in the form of public energy and momentum.

One positive sign is that world leaders at the G7 Summit in Canada today are prioritizing ocean health, calling for aggressive measures to combat plastic pollution and climate change. We need this interest to translate into a firm G7 stance on oceans. And we need this ripple of leadership to turn into a tidal wave of public, private, and community support for securing the healthy future for our oceans upon which we all depend.

We must embrace ocean exploration in the same way President Kennedy inspired the nation when he called for man to land on the moon. We’ve spent nearly six decades since that moonshot pledge looking up at the stars, while the oceans and all the wonders and creations they hold sit right at our feet, waiting to be discovered.

Our goal is to revive the Jacques Cousteau moment, creating one big wave of excitement and interest among the public in what lies beneath the waterline—because we know that if humans explore our oceans, we will love them, and if we love them, we will protect them.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.


The US Again Has World’s Most Powerful Supercomputer

Plenty of people around the world got new gadgets Friday, but one in Eastern Tennessee stands out. Summit, a new supercomputer unveiled at Oak Ridge National Lab is, unofficially for now, the most powerful calculating machine on the planet. It was designed in part to scale up the artificial intelligence techniques that power some of the recent tricks in your smartphone.

America hasn’t possessed the world’s most powerful supercomputer since June 2013, when a Chinese machine first claimed the title. Summit is expected to end that run when the official ranking of supercomputers, from an organization called Top500, is updated later this month.

Supercomputers have lost some of their allure in the era of cloud computing and humongous data centers. But many thorny computational problems require the giant machines. A US government report last year said the nation should invest more in supercomputing, to keep pace with China on defense projects such as nuclear weapons and hypersonic aircraft, and commercial innovations in aerospace, oil discovery, and pharmaceuticals.

Summit, built by IBM, occupies floor space equivalent to two tennis courts, and slurps 4,000 gallons of water a minute around a circulatory system to cool its 37,000 processors. Oak Ridge says its new baby can deliver a peak performance of 200 quadrillion calculations per second (that’s 200 followed by 15 zeros) using a standard measure used to rate supercomputers, or 200 petaflops. That’s about a million times faster than a typical laptop, and nearly twice the peak performance of China’s top-ranking Sunway TaihuLight.
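The "million times faster" comparison checks out as a back-of-the-envelope calculation. The laptop figure below is an assumed ballpark (a couple hundred gigaflops for a consumer machine), not a measured number.

```python
# Sanity-checking the article's Summit-vs-laptop comparison.

summit_flops = 200e15  # 200 petaflops, Summit's stated peak performance
laptop_flops = 200e9   # assumed: ~200 gigaflops for a typical laptop

ratio = summit_flops / laptop_flops
print(f"Summit is roughly {ratio:,.0f}x a typical laptop")
```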

The view inside one of the Summit supercomputer’s 4,608 servers.

Oak Ridge National Laboratory

During early testing, researchers at Oak Ridge used Summit to perform more than a quintillion calculations per second in a project analyzing variation between human genome sequences. They claim that’s the first time a scientific calculation has reached that computational scale.

America’s new best computer is significant for more than just the geopolitics of computational brawn. It’s designed to be more suited than previous supercomputers to running the machine learning techniques popular with tech companies such as Google and Apple.

One reason computers have lately got much better at recognizing our voices and beating us at board games is that researchers discovered that graphics chips could put more power behind an old machine learning technique known as deep neural networks. Facebook recently disclosed that a single AI experiment using billions of Instagram photos occupied hundreds of graphics chips for almost a month.

Summit has nearly 28,000 graphics processors made by Nvidia, alongside more than 9,000 conventional processors from IBM. Such heavy use of graphic chips is unusual for a supercomputer, and it should enable breakthroughs in deploying machine learning on tough scientific problems, says Thomas Zacharia, director of Oak Ridge National Lab. “We set out to build the world’s most powerful supercomputer,” he says, “but it’s also the world’s smartest supercomputer.”

Summit’s thousands of servers could fill two tennis courts.

Carlos Jones/Oak Ridge National Laboratory

Eliu Huerta, a researcher at the National Center for Supercomputing Applications, at the University of Illinois at Urbana-Champaign, describes Summit’s giant GPU pool as “like a dreamland.” Huerta previously used machine learning on a supercomputer called Blue Waters to detect signs of gravitational waves in data from the LIGO observatory that won its founders the 2017 Nobel Prize in physics. He hopes Summit’s might will help analyze the roughly 15 terabytes of imagery expected to arrive each night from the Large Synoptic Survey Telescope, due to switch on in 2019.

Summit will also be used to apply deep learning to problems in chemistry and biology. Zacharia says it could contribute to an Energy Department project using medical records from 22 million veterans, about a quarter-million of which include full genome sequences.

Some people worried about US competitiveness in oversized calculating machines hope that the hoopla around Summit will inspire more interest in building its successors.

The US, China, Japan, and the European Union have all declared the first “exascale” computer—with more than 1,000 petaflops of computing power—as the next big milestone in large-scale computing. China claims it will achieve that milestone by 2020, says Stephen Ezell, vice president for global innovation policy at the Information Technology and Innovation Foundation. The US may get there in 2021 if Summit’s successor, known as Aurora, is completed on schedule, but the program has previously had delays.

The Trump administration’s budget this spring asked for $376 million in extra funding to help meet the 2021 target. It’s now up to the nation’s legislators to approve it. “High-performance computing is absolutely essential for a country’s national security, economic competitiveness, and ability to take on scientific challenges,” Ezell says.
