“Location data is growing in demand as more decisions are now based on ‘where.’ In fact, a recent analyst report indicates that by 2022, 30% of customer interactions will be influenced by real-time location analysis, up from 4% in 2017. Tableau’s new vector-based maps offer greater detail and a smoother browsing experience than image-based mapping products. This means when customers zoom or pan, Tableau scales the map accordingly vs. loading images, no longer breaking customers from the flow of their analysis. Mapbox’s leading technology also provides new background mapping layers to add context — including subway and train stations, building footprints, terrain, and water labels — to geospatial data.”
With this update, Tableau continues to be at the forefront of BI and visual analytics. Users will get accurate and locale-aware places and boundaries worldwide, as well as high-performance maps that enable powerful analytics at scale, easy customization, and fast data exploration.
The Tableau and Mapbox story began five years ago, with two developers talking open-source tech at a cartography meetup in Seattle. Tableau had just completed its initial public offering, and Mapbox was a small startup working from a garage on the first vector maps platform.
It was clear even then that Tableau’s mission — to help people see and understand data — offered a radically new way to experience the world that tightly aligned with ours. It was also clear that the two teams shared a relentless focus on cultivating a data-driven ecosystem and empowering users with powerful, elegant tools.
The Tableau community’s positive reaction to the first Mapbox plugin at Tableau Conference 2015 showed we were onto something. They understood the power of a capability within Tableau that put performant maps in immediate reach. Since then, Tableau and Mapbox users have collaborated on a wide range of projects, like Allan Walker and Anya A’Hearn’s work with PATH and the Tableau Foundation on the #VisualizeNoMalaria project:
Today’s announcement means our teams will be working together even more closely, and the future of location for business intelligence will be available to even more people.
I’m thrilled to be part of the Tableau community and to help put these (and many more) tools into the hands of Tableau users. We can’t wait to see what you build.
Total solar eclipses are the Super Bowls of astronomy. Teams of scientists develop game plans for capturing data months or years in advance of each one—as do amateur astronomers, working to create their own stellar images or execute their own studies—and it all comes down to a few precious seconds under the Moon’s shadow. When the clock runs out and the sun re-emerges, it’s game over.
The eclipse on Tuesday in Chile and Argentina was a uniquely special experience. Not only did it occur during a so-called solar minimum, when activity in the sun’s 11-year cycle of energy release is at its lowest, thereby reducing the amount of “clutter” for certain kinds of research, it also happened to streak directly above several of the world’s most prominent observatories. Their operators chose the locations in Chile’s Atacama Desert because of the pristine viewing conditions there—conditions that also contributed to incredibly crisp and clear viewing of the eclipse, even if the big telescopes were shuttered for the event to protect their sensitive instruments.
The event didn’t disappoint. I observed with researchers and a small collection of enthusiasts atop Cerro Tololo Inter-American Observatory, operated by the National Optical Astronomy Observatory. Though planning for the event began years ago, final on-site preparations took place in the days, hours, and minutes leading up to the two minutes of totality in the late afternoon of the southern winter, with the final push coming after a pre-dawn drive to the mountaintop from La Serena, on the Chilean coast. Once on site, researchers from around the world set up and aligned their telescopes, calibrated their sensors, and ran power and data cables all around the top of the mountain. When show time arrived, they all announced they were ready for action.
It will take months for their data to be fully analyzed, but in the immediate aftermath the scientists were pleased and confident. “The eclipse was great,” said project scientist Paul Bryans, of the University Corporation for Atmospheric Research, whose group is investigating magnetic fields in the sun’s corona, or upper atmosphere. “Cerro Tololo was an awesome place to view it and the weather was perfect. As for the results, it is a little too early to really tell. First look at the data is promising, though. We’ll analyze it properly when we return to Boulder.”
For everyone else on the mountain, the eclipse was also both riveting and a bit nerve-wracking—at least in the lead-up—especially if, like me, you were working to create images of your own. I’d thought for months about how many cameras to use, which lenses, and most critically, which settings. My plan was complex in theory, but relatively simple in execution. I would first fly my DJI Mavic 2 Pro drone up and park it—with permission from the observatory—about 1,000 feet behind the telescopes, to capture 4K video of the moon’s shadow passing over the observatory. This would happen about 10 minutes prior to totality, and I’d be able to simply start filming while the drone automatically held position.
Then I planned to spend the first 15 seconds of totality shooting through a 400mm f/2.8 Sony lens that the company loaned me for this—it’s a behemoth, and pricey at $12,000, but an astounding piece of gear—before switching to a wide-angle lens for another 15 seconds, hoping to capture the eclipse behind two of the observatory’s domed telescope enclosures. After that, I’d spend a full minute not shooting anything at all, but rather just walking toward the edge of the flattened mountain to enjoy the eclipse and absorb the otherworldly beauty of the event. This is an important element of the eclipse experience: just enjoying it. Many photographers and scientists get so wrapped up in their plans for capturing the eclipse that they forget to take it in themselves, since that awesome—in the truest sense of the word—experience really can’t be replicated through photo or video. You have to see it in person to fully grasp what it is.
Of course, when the countdown timer on my smartphone drew down toward the beginning of totality, all my plans immediately went up in smoke. I realized by looking at the feed from the drone that it wasn’t getting quite the view I’d hoped for, so I switched to photo mode, which has a much wider field of view, and lowered its altitude a bit. As totality began, I squeezed off a few shots manually; then I switched to the big Sony lens. My settings weren’t quite spot on, so I had to shift the shutter-speed dial on the fly to capture a range of exposures, and therefore both the detail of prominences and the brighter corona that was now visible. Fortunately, I could see through the digital viewfinder that I was getting good images, and I switched to the wider-field camera.
I didn’t spend a lot of time worrying about how good those would be, because I quickly realized it would be almost impossible to dial that image in perfectly in the time I had. So I burned off a few shots at different settings, switched lenses, and walked over to the fence at the edge of the mountain.
The walk was glorious, with the blocked-out moon hovering in the sky and the sun’s shimmering corona in the background. The coincidence that makes the whole thing possible—namely, the sun and moon having exactly the same apparent size in the sky, thus matching each other perfectly—is an astounding gift from the universe. My walk slowed to a stop as I stood trying to process both the surreal weirdness and the beauty of the spectacle. I picked up my camera, squeezed off a few more half-hearted frames of the eclipse, the landscape below, and the 360-degree sunrise we experienced at the observatory’s 7,000-foot altitude, and then rode out the final few seconds just marveling at the sight.
It ended with a burst of light on the right-hand side of the Moon, and the telescopes and the crowd began swimming in sunlight once again.
My images turned out surprisingly well given the chaos of those two minutes, and I’m grateful for that. I’d also trade all of them for another two minutes in the moon’s shadow. People often talk about the mystical and spiritual side of eclipses, and there’s definitely an emotional impact for a great many observers. But for me it’s simply a rare, special gift from the cosmos—a reminder that the universe is precise and predictable and endlessly surprising, but also beautiful and mysterious. The eclipse is a sublime manifestation of all those qualities, condensed into a two-minute show put on every 18 months, wherever it damn well pleases.
Want to see an eclipse yourself? If you missed the 2017 event in the US—or just want to take another crack at it—start preparing for the 2024 eclipse that will swing from Mexico to Canada, cutting straight across the US from Texas to Maine.
Maybe you figure that 60-inch 4K TV you just bought gives you a good excuse to never leave the house. All the entertainment you could ever need gets caught in its internet-enabled gravity well, orbits your streaming services a few times, and then, thwoomp! Into your eyeballs comes the sweet dopamine hit of fun.
But you are being deceived. Color televisions show color, sure—but not real, accurate, bold-as-life, wonderful-world-of color. And if you’re hoping to stay home on the Fourth of July and use that snazzy new TV to watch elegant feats of pyrotechnical expertise, you got a problem, my friend. Because one thing even the best color TVs cannot do is show you colorifically correct fireworks.
It’d make sense to think of TVs in terms of pixels, of how many dots they use to make a picture. No surprise there; it’s how TV makers pitch them—“high definition,” “4K,” and so on all refer, however obliquely, to dots per screen. But the standards that describe a TV screen’s capabilities also account for the range of possible colors it can display. That’s called a gamut, and it’s essentially a function of the ability of the light-emitting elements that make up the screen to put out light—white light, for brightness, but also colors. The specific technologies vary in small ways, but the key is that the TVs output three colors: red, green, and blue. By varying the levels of each, the screens can emulate millions of colors, and human eyes and brains can’t tell the difference between those mixtures and a pure wavelength. Equal amounts of red and blue look the same, for example, as a straight-out violet.
But just because a TV can output a tremendous number of different apparent colors doesn’t mean it can output all the colors. In the 1970s a researcher named MR Pointer actually set out to account for all the colors reflected by the surfaces we humans see—a very different problem than the colored lights a screen might emit, but hold that thought. So Pointer combined a bunch of different databases of color (of flowers, dyes, and so on) and mapped them onto the theoretical space of all possible colors in the visible spectrum, the thin slice of electromagnetic energy that human eyes can register and human brains can process. Pointer found that the colors humans actually encounter in nature are in fact a subset of the colors that eyes and brains can handle. But more importantly for our purposes, he found that televisions at the time sucked.
The natural world had a gamut; televisions in 1980 had a narrower one. Today’s 4K TVs generally use light-emitting diodes either tuned to the right wavelengths or filtered in clever ways; back when Pointer was at work, the TVs used colored phosphors to emit light. But that setup, even today, can only hope to create something like 90 percent of the colors humans can actually see. Those three emitters make the corners of an imaginary triangle that can lie on top of the perceivable colorspace; everything outside the triangle is a color that humans with color-normal vision can see, but that a television can’t produce. “In terms of chromaticity there are two areas of colour that are inadequately covered: high-purity green and green-blue colours, and high-purity red, magenta, and purple colours,” Pointer wrote in 1980. But on the other hand, he added, the blue phosphor was so good that “it falls outside the real-colour gamut.” It was more blue than blue.
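That triangle picture is easy to make concrete: a display’s gamut is just a triangle in CIE xy chromaticity space whose corners are its red, green, and blue primaries, and a color is reproducible only if its chromaticity lands inside. Here’s a minimal sketch using the BT.2020 primaries; the point-in-triangle math is standard, and the example colors are purely illustrative.

```python
def sign(p, a, b):
    """Signed-area test: which side of the edge a->b does point p fall on?"""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(point, red, green, blue):
    """True if an (x, y) chromaticity lies inside the RGB primary triangle."""
    d1 = sign(point, red, green)
    d2 = sign(point, green, blue)
    d3 = sign(point, blue, red)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)  # all on one side => inside

# BT.2020 primaries (CIE 1931 xy coordinates from the ITU-R spec)
BT2020 = dict(red=(0.708, 0.292), green=(0.170, 0.797), blue=(0.131, 0.046))

# The D65 white point sits comfortably inside the triangle...
print(in_gamut((0.3127, 0.3290), **BT2020))  # True
# ...while a pure spectral cyan near 500 nm (xy ~ (0.0082, 0.5384)) does not.
print(in_gamut((0.0082, 0.5384), **BT2020))  # False
```

Everything outside the triangle, like that spectral cyan, is exactly the territory Pointer flagged: visible to the eye, unreachable by the screen.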
Televisions have gotten a lot better since 1980, obviously. The original high-definition color standard captured a relatively small triangle of the human-perceivable color gamut. So-called Ultra High Definition, or UHD (called BT.2020 in tech-speak), covers a bigger one. Still, there’s no way around the truth here: Color televisions show color, but they don’t show all the colors.
Which brings me back to fireworks. They work because of the way certain elements deal with energy. Basically, if you pump energy into their atoms—in this case in the form of heat, from the fire part of fireworks—that energy gets absorbed by the electrons that orbit the atomic nucleus. But they can’t hold onto that excess juice, so they re-emit it—as photons, which is to say, light. And different atoms emit photons with different amounts of energy. Strontium pushes out red. Barium makes green. The mixes get more complicated, sure, and different recipes (and different shapes of explosive) give you different effects.
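The photon story maps directly onto color: a photon’s energy is fixed by its wavelength, E = hc/λ, so shorter wavelengths carry more punch. A quick back-of-the-envelope calculation makes the point; the wavelengths below are approximate, chosen only to illustrate the red-versus-green split.

```python
# Photon energy from wavelength: E = h*c / wavelength
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy (in eV) of a photon with the given wavelength."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Illustrative values: strontium compounds glow red near 650 nm,
# barium compounds green near 515 nm.
print(round(photon_energy_ev(650), 2))  # 1.91 eV (red)
print(round(photon_energy_ev(515), 2))  # 2.41 eV (green)
```

Same mechanism, different elements, different energies, different colors in the sky.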
You can overlay those basic colors onto the map of all the possible colors, and onto the triangle of the BT.2020 gamut. That’s what Allison Harn of the quantum dot company Nanosys did a few years back for a blog post. And the results were … not great. An older high-def gamut just barely managed to capture the true hue of bluish copper chloride. Orange-y calcium chloride was just outside the triangle. Green barium and red strontium, though? Forget it.
UHD, the newer version, only did a bit better. The intense green and red of fireworks are still outside the gamut. The TVs would replace them with the closest available color they were capable of. So if you watched fireworks on your home display, you didn’t see their actual colors. You saw your TV’s best effort. No real surprise there. Good as it is, “BT.2020 is still only about 75 percent of human color vision,” says Poppy Crum, chief scientist at Dolby Laboratories.
But all is not lost. Don’t let darkness overtake you. Or rather, do, because it may well be that light and dark are what’ll save us here, rather than some kind of theoretical expansion of the colorspace. That’d be “high dynamic range,” or brighter whites and darker, inkier blacks. Now, for sure, UHD TV makers tout excellent dynamic range already, but truly lifelike whitest whites and blackest blacks remain a challenge even for the highest-end televisions. But Crum thinks they’re the secret to more intense and realistic color experiences—and they also happen to be a key to the next standard, BT.2100. She describes being able to see the metallic, explosive sparkle of pyrotechnic light against a deep black nighttime sky as “transformative.”
Sadly, if you want the full patriotic red-red, white-white, and blue-blue experience for this year’s Fourth, you’ll still have to go outside and use your meatbag eyeballs and the clump of neurons behind them to get it. But maybe by the time this holiday rolls around again, BT.2100 TVs will be big on the market. And in between now and then, there’s another almost equally patriotic holiday that might help you get one, if you’re inclined to let your money get pulled in that direction. Around here we call it “Black Friday.”
For almost 30 years they passed as quirky eccentrics, diligently setting up their insect traps in the Rhine countryside to collect tens of millions of bugs and creepy crawlers.
Now the group of German entomology enthusiasts can boast a world-class scientific treasure: evidence of what is described as one of Earth’s worst extinction phases since the dinosaurs vanished.
Insects, which comprise two thirds of all terrestrial species, have been dying off at alarming rates, with disastrous impacts on food chains and habitats, researchers say.
The home of the Amateur Entomology Society of Krefeld on the Dutch border is a former school building where thick curtains block out the sunlight.
Inside in glass cabinets are stored thousands of butterflies, their wings bleached by time, along with exotic fist-sized beetles and dragonflies, brought back from around the world by amateur collectors.
Traditionally “entomology was mainly about drying and collecting rare specimens,” says the society’s president Martin Sorg, wearing John Lennon-style glasses, a multi-pocket jacket and sandals.
He and an army of volunteers have over the years gathered as many as 80 million insects that are now floating in countless ethanol bottles.
Each bottle contains the amount caught by a single insect trap over a set period, and each box represents a collection of such catches over nearly three decades.
“Since 1982, the traps we manufacture ourselves have been standardised and controlled, all of the same size and the same material, and they are collected at the same rate in 63 locations that are still identical,” explains Sorg.
The result is a treasure trove of quantitative data that dwarfs that of any funded university project, he says.
But if he is visibly proud of the society’s research, the outcome terrifies him: in the test period, the total biomass of flying insects here has plummeted by 76 percent.
Quaint Sunday hobby
To demonstrate the rapid decline, a lab technician holds up two bottles: one from 1994 contains 1,400 grammes of trapped insects, the newest one just 300 grammes.
“We only became aware of the seriousness of this decline in 2011, and every year since then we have seen it get worse,” says Sorg, the man who sounded the alarm.
At the time, the news didn’t make major waves outside ecological circles.
Concern about biodiversity loss focused mostly on large charismatic mammal species, and environmental monitoring such as that in Krefeld was considered a quaint Sunday hobby, largely ignored by the scientific community.
Also in 2011, just across the Dutch border, ecology professor Hans de Kroon was working on the decline of birds in the region.
He hypothesised that the birds suffered from a shortage of food, especially insects, but had no data to prove it.
“Then our German colleagues from Krefeld got in touch and said, ‘we have the data, we’ve witnessed a strong decline, we are very concerned, could you analyse the data?’.
“That’s how it all started.”
‘Point of no return’
In the search for the cause, the landscape around Krefeld provides some clues.
In the distance, industrial chimneys billow smoke.
On one side of the road lies a protected nature reserve. On the other, a sugar beet field is being sprayed with pesticides by an agricultural machine.
“You see, protected reserves are not so protected,” says Sorg.
Across the border, de Kroon says, “we must realise that here in western Europe our nature is getting smaller, the agriculture fields are very hostile to insects. There is no food, they get poisoned.
“And nature areas are also more and more isolated. Insects can’t move from one place to another, it’s too far away.”
Although the exact cause for the die-off is not yet clear, he says, “the cause is anthropogenic, there’s no doubt about it.
“It is our greatest fear that a point of no return will be reached, which will lead to a permanent loss of diversity.”
‘Path of extinction’
The Krefeld research played a central role in a meta-study published by Francisco Sanchez-Bayo and Kris Wyckhuys from the Australian universities of Sydney and Queensland.
In February, they published the first synthesis of 73 studies on entomological fauna around the world over the past 40 years, listing places from Costa Rica to southern France.
They calculated that over 40 percent of insect species are threatened with extinction, and each year about one percent is added to the list.
This is equivalent, they noted, to “the most massive extinction episode” since the dinosaurs disappeared.
The main drivers appeared to be habitat loss and land conversion to intensive agriculture and urbanisation, followed by pollution, mainly from pesticides and fertilisers, invasive species and climate change.
“The conclusion is clear,” they wrote. “Unless we change our ways of producing food, insects as a whole will go down the path of extinction in a few decades.”
Insect apocalypse: German bug watchers sound alarm (2019, July 1)
retrieved 4 July 2019
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
Last week, Verizon caused a major BGP misroute that took large chunks of the Internet, including CDN company Cloudflare, partially down for a day. This week, the rest of the Internet has apparently asked Verizon to hold its beer.
Facebook and its properties WhatsApp and Instagram have suffered widespread outages relating to image display for most of today. The problem seems to be bad timestamp data being fed to the company’s CDN in some image tags; when I looked into the broken images littering my own Facebook timeline, I discovered different timestamp arguments embedded in the same URLs. Loading an image from fbcdn.net with bad “oh=” and “oe=” arguments—or no arguments at all—results in an HTTP 403 “Bad URL timestamp”.
If you dig into the code, Facebook refers to the same image URL with different arguments in its anchor tags and its image tags; in the posts with broken images, the timestamp arguments in the anchor tags don’t match the ones in the image tags.
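For illustration, here is a toy sketch of the failure mode described above: a CDN that validates signed URL arguments and returns 403 when they are missing or stale. The parameter semantics (a hex-encoded expiry in `oe`, a signature in `oh`) are assumptions for the sketch; Facebook’s actual validation scheme isn’t public.

```python
# Hypothetical validator mimicking the observed "Bad URL timestamp" behavior.
from urllib.parse import urlparse, parse_qs
import time

def check_cdn_url(url, now=None):
    """Return an HTTP-style status code for a signed image URL."""
    now = time.time() if now is None else now
    params = parse_qs(urlparse(url).query)
    if "oh" not in params or "oe" not in params:
        return 403  # signature arguments missing entirely
    try:
        expiry = int(params["oe"][0], 16)  # assumed: hex-encoded Unix time
    except ValueError:
        return 403  # malformed timestamp
    return 200 if expiry > now else 403  # stale timestamps also rejected

good = "https://scontent.fbcdn.net/photo.jpg?oh=abc123&oe=5FFFFFFF"
bare = "https://scontent.fbcdn.net/photo.jpg"
print(check_cdn_url(good, now=0))  # 200
print(check_cdn_url(bare))         # 403
```

Under a scheme like this, feeding the CDN mismatched or stale timestamp arguments on otherwise valid image URLs would trip exactly this kind of check, which matches the broken-image symptoms users saw.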
Apple introduced several of the headlining features of its upcoming iOS 13 during WWDC, but people playing with the closed beta version have uncovered some additional tools. One newly found addition is FaceTime Attention Correction, which adjusts the image during a FaceTime video call to make it look like a person is looking into the camera rather than at their device’s screen.
In practice, that means that while both you and your contact are looking at each other’s faces, you’ll both appear to be making direct eye contact. Mike Rundle and Will Sigmon were the first to tweet about the find, and they describe it as uncanny, “next-century shit.” Another beta tester, Dave Schukin, posited that the feature relies on ARKit to make a map of a person’s face and use that to inform the image adjustments.
Guys – “FaceTime Attention Correction” in iOS 13 beta 3 is wild.
The feature appears to be rolling out only to the iPhone XS and iPhone XS Max in the current beta. It will get a wider release to the general public when iOS 13 officially goes live, which will likely be sometime this fall.
Apple has been introducing more and more features centered on automatically changing images. It has been giving its cameras tools like Smart HDR, which analyzes and composites multiple frames for the “best” shot, or automatic reductions in the effect of shaky hands. Usually, these tools are optional, although you may need to dig around in your device’s settings to make sure the tools are off rather than on.
During the 1950s, the pioneer of galaxy cluster-finding, astronomer George Abell, spent many years searching for galaxy clusters by eye, using a magnifying lens and photographic plates to locate them. Abell manually analysed around 2,000 photographic plates, looking for the visual signatures of galaxy clusters and detailing the astronomical coordinates of the dense regions of galaxies. His work resulted in the ‘Abell catalogue’ of galaxy clusters found in the northern hemisphere.
Deep-CEE builds on Abell’s approach for identifying galaxy clusters but replaces the astronomer with an AI model that has been trained to “look” at colour images and identify galaxy clusters. It is a state-of-the-art model based on neural networks, which are designed to mimic the way a human brain learns to recognise objects by activating specific neurons when visualizing distinctive patterns and colours.
Chan trained the AI by repeatedly showing it examples of known, labelled objects in images until the algorithm learned to associate objects on its own. He then ran a pilot study to test the algorithm’s ability to identify and classify galaxy clusters in images that contain many other astronomical objects.
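That show-labelled-examples-until-it-learns loop can be sketched in miniature. In this toy version, a simple logistic classifier and synthetic 8x8 “sky patches” stand in for Deep-CEE’s actual deep network and real survey images, so every detail here (data, model, sizes) is illustrative rather than a description of Chan’s code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(has_cluster):
    """8x8 synthetic sky patch; a 'cluster' is a bright central blob."""
    img = rng.normal(0.0, 0.3, (8, 8))
    if has_cluster:
        img[3:5, 3:5] += 2.0  # dense central over-brightness
    return img.ravel()

# Labelled training set: half the patches contain a cluster, half don't.
X = np.array([make_image(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# Repeatedly show the labelled examples: gradient descent on logistic loss.
w, b = np.zeros(64), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y                            # d(loss)/d(logit)
    w -= 0.01 * (X.T @ grad) / len(y)
    b -= 0.01 * grad.mean()

# The trained model now ranks unseen patches on its own.
test = np.array([make_image(True), make_image(False)])
probs = 1.0 / (1.0 + np.exp(-(test @ w + b)))
print(probs[0] > probs[1])  # True: the cluster patch scores higher
```

Deep-CEE replaces the hand-built blob feature with layers of learned convolutional filters, but the supervised recipe is the same: labelled examples in, an association between pattern and label out.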
“We have successfully applied Deep-CEE to the Sloan Digital Sky Survey,” says Chan. “Ultimately, we will run our model on revolutionary surveys such as the Large Synoptic Survey Telescope (LSST), which will probe wider and deeper into regions of the Universe never before explored.”
New state-of-the-art telescopes have enabled astronomers to observe wider and deeper than ever before, such as studying the large-scale structure of the universe and mapping its vast undiscovered content.
By automating the discovery process, scientists can quickly scan sets of images, and return precise predictions with minimal human interaction. This will be essential for analysing data in future. The upcoming LSST sky survey (due to come online in 2021) will image the skies of the entire southern hemisphere, generating an estimated 15 TB of data every night.
“Data mining techniques such as deep learning will help us to analyse the enormous outputs of modern telescopes” says Dr. John Stott (Chan’s Ph.D. supervisor). “We expect our method to find thousands of clusters never seen before by science”.
Chan will present the findings of his paper, “Fishing for galaxy clusters with ‘Deep-CEE’ neural nets” (Chan and Stott 2019), on 4 July at 3:45pm in the ‘Machine Learning in Astrophysics’ session. The paper has been submitted to MNRAS and can be found on arXiv.
Deep-CEE I: Fishing for Galaxy Clusters with Deep Neural Nets, arXiv:1906.08784 [astro-ph.GA], arxiv.org/abs/1906.08784
Deep-CEE: The AI deep learning tool helping astronomers explore deep space (2019, July 3)
retrieved 4 July 2019
Superhuman, the buzzy and currently invite-only email startup (which, even if you don’t have access yourself, you may have come across via a “Sent via Superhuman” email signature), is making some changes based on community feedback. These include removing location logging altogether, deleting all existing location data, and turning off read receipts by default, making them an opt-in feature for users.
The email app’s default email-tracking behavior (embedding the commonly used advertising tool of a “pixel” in emails to report back to senders info like whether an email’s been opened) raised a number of concerns, crystallized by this blog post by former Twitter design executive Mike Davidson. Davidson’s post generated a lot of community response, and now Superhuman founder Rahul Vohra has issued a response to that response, including a list of actions his company is taking to address the concerns. Specifically, Superhuman’s product changes focus on mitigating the potential for abuse of shared location data – which could be very dangerous in the hands of a sender with ill intent toward their recipient.
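For readers unfamiliar with tracking pixels, the mechanism is simple: the sender embeds a 1x1 image whose URL carries a unique message ID, and when the recipient’s mail client fetches that image, the sender’s server logs the request as an “open.” The sketch below illustrates the idea; the domain and parameter names are invented, and this is not Superhuman’s implementation.

```python
from urllib.parse import urlparse, parse_qs

def pixel_tag(message_id):
    """HTML fragment the sender embeds in the outgoing email body."""
    return (f'<img src="https://pixel.example.com/open.gif?mid={message_id}" '
            'width="1" height="1" alt="">')

def record_open(request_url, log):
    """What the sender's server does when the pixel URL is requested."""
    mid = parse_qs(urlparse(request_url).query).get("mid", [None])[0]
    if mid is not None:
        log.append(mid)  # the image was loaded, so the message was opened

opens = []
record_open("https://pixel.example.com/open.gif?mid=msg-42", opens)
print(opens)  # ['msg-42']
```

This is also why disabling remote image loading blocks read receipts: if the client never fetches the image, the server has nothing to log.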
These include immediately stopping any location logging for any emails sent by the service, and also rolling out new versions of the app that don’t show location data in the interface. All existing logged location data will also be deleted so it’s not even discoverable through means other than the UI, Vohra says in a blog post detailing the changes.
Superhuman won’t be getting rid of its “read status” feature entirely, however — it’ll still tell Superhuman users whether or not an email was opened. The feature will now be off by default, so it’s on users to activate it. Note that this still doesn’t change anything for recipients of Superhuman emails with read receipts turned on — they don’t get an option to consent to sending read receipts. Finally, Superhuman will enable disabling of remote image loading, which is itself a way to block incoming tracking pixels.
Vohra said on Twitter the reason Superhuman hasn’t issued a response to this previously, despite a few days of heated conversation about their company, is that the startup was considering how best to address the concerns. As Matthew noted in an article Tuesday on the subject, this is actually how discussion and debate should work.