‘Red Dead Redemption 2’ Goes ‘GTA’ and the Rest of the Week in Games

With the massive, blistering success of Grand Theft Auto Online eclipsing everything Rockstar has ever made, it’s no wonder the studio is applying the model to its forthcoming Western game.

Rockstar Games

This week on Replay, we’ve got more Steam antics, a bit of gloating, and a new trend. Because if games can do something once, they can do it over and over. You know how it goes.

Red Dead Redemption 2 is Going the Way of Grand Theft Auto

Siemens’ Self-Driving Street Car Puts Autonomous Tech on Track

In the past year, Siemens’ (passenger-free) street cars have puttered through Potsdam more than 450 times.

Siemens

Of the many acronyms engineers spend their lives internalizing, few are more valuable than KISS: Keep It Simple, Stupid. Constrain the problem, reduce the variables, and make life as easy as possible when designing novel systems—like, say, a self-driving car. The world is a messy, complicated place. The less of it you need to solve, the closer you are to having a working product.

That’s why Waymo tests and plans to deploy its vehicles in Chandler, Arizona, with its reliably sunny weather, calm traffic, and meticulously mapped roads. But even there, its robots are still reportedly running into issues with routine maneuvers, like left turns into traffic. Which makes it easier to understand why engineers in Potsdam, Germany, have taken the KISS idea to an extreme: They’ve put their autonomous vehicles on tracks. Siemens Mobility has spent the week showing off the first autonomous streetcar-style, light rail project—what Europeans would call a tram—at the giant InnoTrans rail trade show in Berlin.

How to Set Up Your New iPhone

What a lovely new iPhone you have! Here’s how to get acquainted with it to ensure a two-to-four year lifetime, give or take, of happiness. Or just one, if you’re an annual upgrade person.

When you turn on your new iPhone, you’ll be greeted by the Setup Assistant, through which you’ll establish essentials like your Wi-Fi network and six-digit passcode, Touch ID (or Face ID if you’re on the iPhone Xs or Xs Max), your Apple ID and iCloud account, and whether you want to activate Find My iPhone and Location Services. You’ll also be asked if you want to set up Siri (you do!), which includes saying a few phrases so the assistant can get to know your voice.

How to Sell Your Old Phone for the Most Money

Neil Goldwin/T3 Magazine/Getty Images

When you’ve got a new phone, the world seems sparkly and bright. But don’t forget about your old phone yet. A number of services will give you cash or store credit for a used device. Just make sure to wipe all your data off your old phone before you part with it.

We’ve gathered some services where you can trade in an old device for cash or store credit. Most of these services also take Samsung and Google devices, and all of them take iPhones. To compare the prices offered by each service, we ran a price check for an Apple iPhone 8 (64GB, AT&T) in various states of damage.

The Delta II Rocket That Gave Us GPS and the Mars Rovers Retires

Last weekend, the Delta II rocket—for 30 years a regular fixture on launchpads in the United States—lifted off for the final time. The vehicle, built by the United Launch Alliance, had long carried the title of the most reliable rocket in service. With a record 153 successful launches out of 155 flights, the 125-foot-tall monolith, with its sporty teal-and-white paint scheme, is now officially a figure of the past.

The Delta II first launched on Valentine’s Day, February 14, 1989, carrying the first full-scale GPS satellite and kickstarting the navigation constellation that we continue to depend on decades later. That satellite was originally slated to hitch a ride to orbit on the back of a space shuttle. After the Challenger’s tragic explosion in 1986, the Air Force had to find new rides for its planned satellite constellation. With the shuttle program grounded for the foreseeable future, President Ronald Reagan directed the military to develop its own rockets, which led to a series of upgrades culminating in the Delta II.

Cloudflare Embraces Google Roughtime, Giving Internet Security a Boost

Nada Stankova/Getty Images

Cryptography schemes are complicated to understand and implement. A lot of things can go wrong. But when it comes to web encryption, a surprising number of errors actually stem from a straightforward and seemingly basic mechanism: timekeeping.

Synced clocks in operating systems may make digital timekeeping look easy, but it takes a lot of work behind the scenes, and doesn’t always solve problems online. The internet’s decentralized nature means that the clocks behind every web browser and web application can actually have major discrepancies, which in turn can undermine security protections. In a step toward addressing these inconsistencies, the internet infrastructure firm Cloudflare will now support a free timekeeping protocol known as Roughtime, which helps synchronize the internet’s clocks and validate timestamps.
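
As a toy illustration of the general idea of checking a local clock against several independent time sources: the sketch below is not the Roughtime protocol itself (which returns cryptographically signed, nonce-bound responses), and the server readings and tolerance are invented for the example.

# Toy sketch: check a local clock against times reported by independent
# servers. This is NOT the Roughtime wire protocol; the readings and the
# tolerance are invented for the example.
def clock_within_tolerance(local_time, reported_times, tolerance_seconds=10.0):
    """Return True if the local clock agrees with the median reported time."""
    ordered = sorted(reported_times)
    median = ordered[len(ordered) // 2]
    return abs(local_time - median) <= tolerance_seconds

reported = [1537657199.8, 1537657200.1, 1537657201.4]  # hypothetical Unix timestamps
local = 1537657205.0                                   # hypothetical local clock reading
print(clock_within_tolerance(local, reported))         # True: within 10 s of the median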

Titans of Mathematics Clash Over Epic Proof of ABC Conjecture | Quanta Magazine

number theory
By Erica Klarreich

September 20, 2018

Two mathematicians have found what they say is a hole at the heart of a proof that has convulsed the mathematics community for nearly six years.

In a report posted online today, Peter Scholze of the University of Bonn and Jakob Stix of Goethe University Frankfurt describe what Stix calls a “serious, unfixable gap” within a mammoth series of papers by Shinichi Mochizuki, a mathematician at Kyoto University who is renowned for his brilliance. Posted online in 2012, Mochizuki’s papers supposedly prove the abc conjecture, one of the most far-reaching problems in number theory.

Despite multiple conferences dedicated to explicating Mochizuki’s proof, number theorists have struggled to come to grips with its underlying ideas. His series of papers, which total more than 500 pages, are written in an impenetrable style, and refer back to a further 500 pages or so of previous work by Mochizuki, creating what one mathematician, Brian Conrad of Stanford University, has called “a sense of infinite regress.”

Between 12 and 18 mathematicians who have studied the proof in depth believe it is correct, wrote Ivan Fesenko of the University of Nottingham in an email. But only mathematicians in “Mochizuki’s orbit” have vouched for the proof’s correctness, Conrad commented in a blog discussion last December. “There is nobody else out there who has been willing to say even off the record that they are confident the proof is complete.”

Nevertheless, wrote Frank Calegari of the University of Chicago in a December blog post, “mathematicians are very loath to claim that there is a problem with Mochizuki’s argument because they can’t point to any definitive error.”

That has now changed. In their report, Scholze and Stix argue that a line of reasoning near the end of the proof of “Corollary 3.12” in Mochizuki’s third of four papers is fundamentally flawed. The corollary is central to Mochizuki’s proposed abc proof.

“I think the abc conjecture is still open,” Scholze said. “Anybody has a chance of proving it.”

Scholze and Stix’s conclusions are based not only on their own study of the papers but also on a weeklong visit they paid to Mochizuki and his colleague Yuichiro Hoshi in March at Kyoto University to discuss the proof. That visit helped enormously, Scholze said, in distilling his and Stix’s objections down to their essence. The pair “came to the conclusion that there is no proof,” they wrote in their report.

But the meeting led to an oddly unsatisfying conclusion: Mochizuki couldn’t convince Scholze and Stix that his argument was sound, but they couldn’t convince him that it was unsound. Mochizuki has now posted Scholze’s and Stix’s report on his website, along with several reports of his own in rebuttal. (Mochizuki and Hoshi did not respond to requests for comments for this article.)

In his rebuttal, Mochizuki attributes Scholze and Stix’s criticism to “certain fundamental misunderstandings” about his work. Their “negative position,” he wrote, “does not imply the existence of any flaws whatsoever” in his theory.

Just as Mochizuki’s high reputation made mathematicians view his work as a serious attempt on the abc conjecture, Scholze and Stix’s stature guarantees that mathematicians will pay attention to what they have to say. Though only 30, Scholze has risen quickly to the top of his field. He was awarded the Fields Medal, mathematics’ highest honor, in August. Stix, meanwhile, is an expert in Mochizuki’s particular area of research, a field known as anabelian geometry.

“Peter and Jakob are extremely careful and thoughtful mathematicians,” Conrad said. “Any concerns that they have … definitely merit being cleared up.”

The Sticking Point

The abc conjecture, which Conrad has called “one of the outstanding conjectures in number theory,” starts with one of the simplest equations imaginable: a + b = c. The three numbers a, b and c are supposed to be positive integers, and they are not allowed to share any common prime factors — so, for example, we could consider the equation 8 + 9 = 17, or 5 + 16 = 21, but not 6 + 9 = 15, since 6, 9 and 15 are all divisible by 3.

Given such an equation, we can look at all the primes that divide any of the three numbers — so, for instance, for the equation 5 + 16 = 21, our primes are 5, 2, 3 and 7. Multiplying these together produces 210, a much larger number than any of the numbers in the original equation. By contrast, for the equation 5 + 27 = 32, whose primes are 5, 3 and 2, the prime product is 30 — a smaller number than the 32 in the original equation. The product comes out so small because 27 and 32 have only small prime factors (3 and 2, respectively) that get repeated many times to make them.
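
The number being compared with c here is what number theorists call the radical of abc: the product of the distinct primes dividing a, b and c. The short Python sketch below (not from the article, just an illustration) reproduces the arithmetic in the two examples above.

# Illustration only: compute the radical of a*b*c, the product of the
# distinct primes dividing a, b and c = a + b.
def distinct_primes(n):
    primes, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            primes.add(d)
            n //= d
        d += 1
    if n > 1:
        primes.add(n)
    return primes

def radical_of_abc(a, b):
    c = a + b
    product = 1
    for p in distinct_primes(a) | distinct_primes(b) | distinct_primes(c):
        product *= p
    return product

print(radical_of_abc(5, 16))  # 210: primes 5, 2, 3, 7 -- larger than c = 21
print(radical_of_abc(5, 27))  # 30: primes 5, 3, 2 -- smaller than c = 32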

If you start playing around with other abc triples, you’ll find that this second scenario is extremely rare. For example, among the 3,044 different triples you can make in which a and b are between 1 and 100, there are only seven in which the product of primes is smaller than c. The abc conjecture, which was first formulated in the 1980s, codifies the intuition that this kind of triple hardly ever happens.

More specifically, coming back to the 5 + 27 = 32 example, 32 is larger than 30, but only by a little. It’s smaller than 30^2, or 30^1.5, or even 30^1.02, which is about 32.11. The abc conjecture says that if you pick any exponent bigger than 1, then there are only finitely many abc triples in which c is larger than the product of the prime factors raised to your chosen exponent.
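
Stated formally (this is the standard formulation, equivalent to the description above, with rad(abc) denoting the product of the distinct primes dividing a, b and c): for each fixed exponent 1 + ε with ε > 0, the inequality

    c > \operatorname{rad}(abc)^{1+\varepsilon}

holds for only finitely many coprime triples with a + b = c.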

“The abc conjecture is a very elementary statement about multiplication and addition,” said Minhyong Kim of the University of Oxford. It’s the kind of statement, he said, where “you feel like you’re revealing some kind of very fundamental structure about number systems in general that you hadn’t seen before.”

And the simplicity of the a + b = c equation means that a wide range of other problems fall under the conjecture’s sway. For instance, Fermat’s Last Theorem is about equations of the form x^n + y^n = z^n, and Catalan’s Conjecture, which says that 8 and 9 are the only two consecutive perfect powers (since 8 = 2^3 and 9 = 3^2), is about the equation x^m + 1 = y^n. The abc conjecture (in certain forms) would offer new proofs of these two theorems and solve a host of related open problems.

The conjecture “always seems to lie on the boundary of what is known and what is unknown,” Dorian Goldfeld of Columbia University has written.

The wealth of consequences that would spring from a proof of the abc conjecture had convinced number theorists that proving the conjecture was likely to be very hard. So when word spread in 2012 that Mochizuki had presented a proof, many number theorists dived enthusiastically into his work — only to be stymied by the unfamiliar language and unusual presentation. Definitions went on for pages, followed by theorems whose statements were similarly long, but whose proofs only said, essentially, “this follows immediately from the definitions.”

“Each time I hear of an analysis of Mochizuki’s papers by an expert (off the record) the report is disturbingly familiar: vast fields of trivialities followed by an enormous cliff of unjustified conclusions,” Calegari wrote in his December blog post.

Scholze was one of the paper’s early readers. Known for his ability to absorb mathematics quickly and deeply, he got further than many number theorists, completing what he called a “rough reading” of the four main papers shortly after they came out. Scholze was bemused by the long theorems with their short proofs, which struck him as valid but insubstantial. In the two middle papers, he later wrote, “very little seems to happen.”

Then Scholze got to Corollary 3.12 in the third paper. Mathematicians usually use the word “corollary” to denote a theorem that is a secondary consequence of a previous, more important theorem. But in the case of Mochizuki’s Corollary 3.12, mathematicians agree that it is at the core of the proof of abc. Without it, “there is no proof at all,” Calegari wrote. “It is a critical step.”

This corollary is the only theorem in the two middle papers whose proof is longer than a few lines — it fills nine pages. As Scholze read through them, he reached a point where he couldn’t follow the logic at all.

Scholze, who was only 24 at the time, believed the proof was flawed. But he mostly stayed out of discussions about the papers, except when asked directly for his thoughts. After all, he thought, perhaps other mathematicians would find significant ideas in the paper that he had missed. Or, perhaps, they would eventually come to the same conclusion as he had. One way or the other, he thought, the mathematics community would surely be able to sort things out.

Escher’s Staircase

Meanwhile, other mathematicians were grappling with the densely written papers. Many had high hopes for a meeting dedicated to Mochizuki’s work in late 2015 at the University of Oxford. But as several of Mochizuki’s close associates tried to describe the key ideas of the proof, a “cloud of fog” seemed to descend over the listeners, Conrad wrote in a report shortly after the meeting. “Those who understand the work need to be more successful at communicating to arithmetic geometers what makes it tick,” he wrote.

Within days of Conrad’s post, he received unsolicited emails from three different mathematicians (one of them Scholze), all with the same story: They had been able to read and understand the papers until they hit a particular part. “For each of these people, the proof that had stumped them was for 3.12,” Conrad later wrote.

Kim heard similar concerns about Corollary 3.12 from another mathematician, Teruhisa Koshikawa, currently at Kyoto University. And Stix, too, got perplexed in the same spot. Gradually, various number theorists became aware that this corollary was a sticking point, but it wasn’t clear whether the argument had a hole or Mochizuki simply needed to explain his reasoning better.

Then in late 2017 a rumor spread, to the consternation of many number theorists, that Mochizuki’s papers had been accepted for publication. Mochizuki himself was the editor-in-chief of the journal in question, Publications of the Research Institute for Mathematical Sciences, an arrangement that Calegari called “poor optics” (though editors generally recuse themselves in such situations). But much more concerning to many number theorists was the fact that the papers were still, as far as they were concerned, unreadable.

“No expert who claims to understand the arguments has succeeded in explaining them to any of the (very many) experts who remain mystified,” Matthew Emerton of the University of Chicago wrote.

Calegari wrote a blog post decrying the situation as “a complete disaster,” to a chorus of amens from prominent number theorists. “We do now have the ridiculous situation where ABC is a theorem in Kyoto but a conjecture everywhere else,” Calegari wrote.

PRIMS soon responded to press inquiries with a statement that the papers had not, in fact, been accepted. Before they had done so, however, Scholze resolved to state publicly what he had been saying privately to number theorists for some time. The whole discussion surrounding the proof had gotten “too sociological,” he decided. “Everybody was talking just about how this feels like it isn’t a proof, but nobody was actually saying, ‘Actually there is this point where nobody understands the proof.’”

So in the comments section below Calegari’s blog post, Scholze wrote that he was “entirely unable to follow the logic after Figure 3.8 in the proof of Corollary 3.12.” He added that mathematicians “who do claim to understand the proof are unwilling to acknowledge that more must be said there.”

Shigefumi Mori, Mochizuki’s colleague at Kyoto University and a winner of the Fields Medal, wrote to Scholze offering to facilitate a meeting between him and Mochizuki. Scholze in turn reached out to Stix, and in March the pair traveled to Kyoto to discuss the sticky proof with Mochizuki and Hoshi.

Mochizuki’s approach to the abc conjecture translates the problem into a question about elliptic curves, a special type of cubic equation in two variables, x and y. The translation, which was well known before Mochizuki’s work, is simple — you associate each abc equation with the elliptic curve whose graph crosses the x-axis at a, b and the origin — but it allows mathematicians to exploit the rich structure of elliptic curves, which connect number theory to geometry, calculus and other subjects. (This same translation is at the heart of Andrew Wiles’ 1994 proof of Fermat’s Last Theorem.)
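
Concretely, a cubic curve that crosses the x-axis at a, b and the origin can be written as

    y^2 = x\,(x - a)\,(x - b)

though the exact normalization used in the literature (for example the Frey-Hellegouarch curve at the heart of the Fermat work) differs in its details; this formula is only meant to illustrate the association the article describes.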

The abc conjecture then boils down to proving a certain inequality between two quantities associated with the elliptic curve. Mochizuki’s work translates this inequality into yet another form, which, Stix said, can be thought of as comparing the volumes of two sets. Corollary 3.12 is where Mochizuki presents his proof of this new inequality, which, if true, would prove the abc conjecture.

The proof, as Scholze and Stix describe it, involves viewing the volumes of the two sets as living inside two different copies of the real numbers, which are then represented as part of a circle of six different copies of the real numbers, together with mappings that explain how each copy relates to its neighbors along the circle. To keep track of how the volumes of sets relate to one another, it’s necessary to understand how volume measurements in one copy relate to measurements in the other copies, Stix said.

“If you have an inequality of two things but the measuring stick is sort of shrunk by a factor which you don’t control, then you lose control over what the inequality actually means,” Stix said.

It is at this crucial spot in the argument that things go wrong, Scholze and Stix believe. In Mochizuki’s mappings, the measuring sticks are locally compatible with one another. But when you go around the circle, Stix said, you end up with a measuring stick that looks different from if you had gone around the other way. The situation, he said, is akin to Escher’s famous winding staircase, which climbs and climbs only to somehow end up below where it started.

This incompatibility in the volume measurements means that the resulting inequality is between the wrong quantities, Scholze and Stix assert. And if you adjust things so the volume measurements are globally compatible, then the inequality becomes meaningless, they say.

Scholze and Stix have “identified a way that the argument can’t possibly work,” said Kiran Kedlaya, a mathematician at the University of California, San Diego, who has studied Mochizuki’s papers in depth. “So if the argument is to be correct, it has to do something different, and something a lot more subtle” than what Scholze and Stix describe.

Something more subtle is exactly what the proof does, Mochizuki contends. Scholze and Stix err, he wrote, in making arbitrary identifications between mathematical objects that should be regarded as distinct. When he told colleagues the nature of Scholze and Stix’s objections, he wrote, his descriptions “were met with a remarkably unanimous response of utter astonishment and even disbelief (at times accompanied by bouts of laughter!) that such manifestly erroneous misunderstandings could have occurred.”

Mathematicians will now have to absorb Scholze and Stix’s argument and Mochizuki’s response. But Scholze hopes that, in contrast with the situation for Mochizuki’s original series of papers, this should not be a protracted process, since the gist of his and Stix’s objection is not highly technical. Other number theorists “would have totally been able to follow the discussions that we had had this week with Mochizuki,” he said.

Mochizuki sees things very differently. In his view, Scholze and Stix’s criticism stems from a “lack of sufficient time to reflect deeply on the mathematics under discussion,” perhaps coupled with “a deep sense of discomfort, or unfamiliarity, with new ways of thinking about familiar mathematical objects.”

Mathematicians who are already skeptical of Mochizuki’s abc proof may well consider Scholze and Stix’s report the end of the story, said Kim. Others will want to study the new reports for themselves, an activity that Kim himself has commenced. “I don’t think I can completely avoid the need to check more carefully for myself before making up my mind,” he wrote in an email.

In the past couple of years, many number theorists have given up on trying to understand Mochizuki’s papers. But if Mochizuki or his followers can provide a thorough and coherent explanation for why Scholze and Stix’s picture is too simplistic (assuming that it is), “this might go a long way towards relieving some of the fatigue and maybe giving people more willingness to look into this thing again,” Kedlaya said.

In the meantime, Scholze said, “I think this should not be considered a proof until Mochizuki does some very substantial revisions and explains this key step much better.” Personally, he said, “I didn’t really see a key idea that would get us closer to the proof of the abc conjecture.”

Regardless of the eventual outcome of this discussion, the pinpointing of such a specific part of Mochizuki’s argument should lead to greater clarity, Kim said. “What Jakob and Peter have done is an important service to the community,” he said. “Whatever happens, I’m pretty confident that the reports will be progress of a definite sort.”

Whatever Happened to the Semantic Web?

In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article
in Scientific American. Berners-Lee, along with two other researchers, Ora
Lassila and James Hendler, wanted to give the world a preview of the
revolutionary new changes they saw coming to the web. Since its introduction
only a decade before, the web had fast become the world’s best means for
sharing documents with other people. Now, the authors promised, the web would
evolve to encompass not just documents but every kind of data one could
imagine.

They called this new web the Semantic Web. The great promise of the Semantic
Web was that it would be readable not just by humans but also by machines.
Pages on the web would be meaningful to software programs—they would have
semantics—allowing programs to interact with the web the same way that people
do. Programs could exchange data across the Semantic Web without having to be
explicitly engineered to talk to each other. According to Berners-Lee, Lassila,
and Hendler, a typical day living with the myriad conveniences of the Semantic
Web might look something like this:

The entertainment system was belting out the Beatles’ “We Can Work It Out”
when the phone rang. When Pete answered, his phone turned the sound down by
sending a message to all the other local devices that had a volume control.
His sister, Lucy, was on the line from the doctor’s office: “Mom needs to see
a specialist and then has to have a series of physical therapy sessions.
Biweekly or something. I’m going to have my agent set up the appointments.”
Pete immediately agreed to share the chauffeuring. At the doctor’s office,
Lucy instructed her Semantic Web agent through her handheld Web browser. The
agent promptly retrieved the information about Mom’s prescribed treatment
within a 20-mile radius of her home and with a rating of excellent or very
good on trusted rating services. It then began trying to find a match between
available appointment times (supplied by the agents of individual providers
through their Web sites) and Pete’s and Lucy’s busy schedules.
1

The vision was that the Semantic Web would become a playground for intelligent
“agents.” These agents would automate much of the work that the world had only
just learned to do on the web.

For a while, this vision enticed a lot of people. After new technologies such
as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee
began referring to the Semantic Web as Web 3.0. Many thought that the Semantic
Web was indeed the inevitable next step. A New York Times article published in
2006 quotes a speech Berners-Lee gave at a conference in which he said that the
extant web would, twenty years in the future, be seen as only the “embryonic”
form of something far greater.2 A venture capitalist, also quoted in the
article, claimed that the Semantic Web would be “profound,” and ultimately “as
obvious as the web seems obvious to us today.”

Of course, the Semantic Web we were promised has yet to be delivered. In 2018,
we have “agents” like Siri that can do certain tasks for us. But Siri can only
do what it can because engineers at Apple have manually hooked it up to a
medley of web services each capable of answering only a narrow category of
questions. An important consequence is that, without being large and important
enough for Apple to care, you cannot advertise your services directly to Siri
from your own website. Unlike the physical therapists that Berners-Lee and his
co-authors imagined would be able to hang out their shingles on the web, today
we are stuck with giant, centralized repositories of information. Today’s
physical therapists must enter information about their practice into Google or
Yelp, because those are the only services that the smartphone agents know how
to use and the only ones human beings will bother to check. The key difference
between our current reality and the promised Semantic future is best captured
by this throwaway aside in the excerpt above: “…appointment times (supplied
by the agents of individual providers through their Web sites)…”

In fact, over the last decade, the web has not only failed to become the
Semantic Web but also threatened to recede as an idea altogether. We now hardly
ever talk about “the web” and instead talk about “the internet,” which as of
2016 has become such a common term that newspapers no longer capitalize it. (To
be fair, they stopped capitalizing “web” too.) Some might still protest that
the web and the internet are two different things, but the distinction gets
less clear all the time. The web we have today is slowly becoming a glorified
app store, just the easiest way among many to download software that
communicates with distant servers using closed protocols and schemas, making it
functionally identical to the software ecosystem that existed before the web.
How did we get here? If the effort to build a Semantic Web had succeeded, would
the web have looked different today? Or have there been so many forces working
against a decentralized web for so long that the Semantic Web was always going
to be stillborn?

To some more practically minded engineers, the Semantic Web was, from the
outset, a utopian dream.

The basic idea behind the Semantic Web was that everyone would use a new set of
standards to annotate their webpages with little bits of XML. These little bits
of XML would have no effect on the presentation of the webpage, but they could
be read by software programs to divine meaning that otherwise would only be
available to humans.

The bits of XML were a way of expressing metadata about the webpage. We are
all familiar with metadata in the context of a file system: When we look at a
file on our computers, we can see when it was created, when it was last
updated, and whom it was originally created by. Likewise, webpages on the
Semantic Web would be able to tell your browser who authored the page and
perhaps even where that person went to school, or where that person is
currently employed. In theory, this information would allow Semantic Web
browsers to answer queries across a large collection of webpages. In their
article for Scientific American, Berners-Lee and his co-authors explain that
you could, for example, use the Semantic Web to look up a person you met at a
conference whose name you only partially remember.
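
As a toy illustration of that kind of lookup (the data, field names, and
matching rule below are entirely hypothetical, not an actual Semantic Web
interface), machine-readable metadata would let a program filter on facts
rather than on keywords:

# Hypothetical metadata a Semantic Web browser might have gathered from pages.
pages = [
    {"author": "Alice Liddell", "employer": "Example Corp", "conference": "WebConf 2001"},
    {"author": "Albert Doe",    "employer": "Acme Labs",    "conference": "WebConf 2001"},
    {"author": "Bob Roe",       "employer": "Example Corp", "conference": "XML Europe 2001"},
]

# "Someone I met at WebConf 2001 whose name started with 'Al'."
matches = [page for page in pages
           if page["conference"] == "WebConf 2001"
           and page["author"].startswith("Al")]

for person in matches:
    print(person["author"], "-", person["employer"])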

Cory Doctorow, a blogger and digital rights activist, published an influential
essay in 2001 that pointed out the many problems with depending on voluntarily
supplied metadata. A world of “exhaustive, reliable” metadata would be
wonderful, he argued, but such a world was “a pipe-dream, founded on
self-delusion, nerd hubris, and hysterically inflated market
opportunities.”3 Doctorow had found himself in a series of debates over the
Semantic Web at tech conferences and wanted to catalog the serious issues that
the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were
overlooking.4 The essay, titled “Metacrap,” identifies seven problems, among
them the obvious fact that most web users were likely to provide either no
metadata at all or else lots of misleading metadata meant to draw clicks. Even
if users were universally diligent and well-intentioned, in order for the
metadata to be robust and reliable, users would all have to agree on a single
representation for each important concept. Doctorow argued that in some cases a
single representation might not be appropriate, desirable, or fair to all
users.

Indeed, the web had already seen people abusing the HTML meta tag
(introduced at least as early as HTML 4) in an attempt to improve the
visibility of their webpages in search results. In a 2004 paper, Ben Munat,
then an academic at Evergreen State College, explains how search engines once
experimented with using keywords supplied via the meta tag to index
results, but soon discovered that unscrupulous webpage authors were including
tags unrelated to the actual content of their webpage.5 As a result, search
engines came to ignore the meta tag in favor of using complex algorithms to
analyze the actual content of a webpage. Munat concludes that a
general-purpose Semantic Web is unworkable, and that the focus should be on
specific domains within medicine and science.

Others have also seen the Semantic Web project as tragically flawed, though
they have located the flaw elsewhere. Aaron Swartz, the famous programmer and
another digital rights activist, wrote in an unfinished book about the
Semantic Web published after his death that Doctorow was “attacking a
strawman.”6 Nobody expected that metadata on the web would be thoroughly
accurate and reliable, but the Semantic Web, or at least a more realistically
scoped version of it, remained possible. The problem, in Swartz’ view, was the
“formalizing mindset of mathematics and the institutional structure of
academics” that the “semantic Webheads” brought to bear on the challenge. In
forums like the World Wide Web Consortium (W3C), a huge amount of effort and
discussion went into creating standards before there were any applications out
there to standardize. And the standards that emerged from these “Talmudic
debates” were so abstract that few of them ever saw widespread adoption. The
few that did, like XML, were “uniformly scourges on the planet, offenses
against hardworking programmers that have pushed out sensible formats (like
JSON) in favor of overly-complicated hairballs with no basis in reality.” The
Semantic Web might have thrived if, like the original web, its standards were
eagerly adopted by everyone. But that never happened because—as has been
discussed on this blog before—the putative benefits of something like XML are
not easy to sell to a programmer when the alternatives are both entirely
sufficient and much easier to understand.

Building the Semantic Web

If the Semantic Web was not an outright impossibility, it was always going to
require the contributions of lots of clever people working in concert.

The long effort to build the Semantic Web has been said to consist of four
phases.7 The first phase, which lasted from 2001 to 2005, was the golden age
of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new
standards laying out the foundational technologies of the Semantic future.

The most important of these was the Resource Description Framework (RDF). The
W3C issued the first version of the RDF standard in 2004, but RDF had been
floating around since 1997, when a W3C working group introduced it in a draft
specification. RDF was originally conceived of as a tool for modeling metadata
and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer,
to develop a metadata system for files stored on Apple computers.8 The
Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds
of general knowledge.

RDF would be the grammar in which Semantic webpages expressed information. The
grammar is a simple one: Facts about the world are expressed in RDF as triplets
of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on
an early version of RDF, gives the following example, describing TV shows and
movies: 9

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/> .

ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
ex:law_and_order_ci rdf:type ex:tv_show .
ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .

The syntax is not important, especially since RDF can be represented in a
number of formats, including XML and JSON. This example is in a format called
Turtle, which expresses RDF triplets as straightforward sentences terminated by
periods. The three essential sentences, which appear above after the @prefix
preamble, state three facts: Vincent Donofrio starred in Law and Order, Law
and Order
is a type of TV Show, and the movie The Thirteenth Floor has a
similar plot as The Matrix. (If you don’t know who Vincent Donofrio is and
have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and
sipping Capri Suns in 1999.)
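
If you want to experiment with triples like these, the rdflib library for
Python can parse Turtle directly. The sketch below is minimal and makes a few
assumptions: rdflib is installed (pip install rdflib), and the example.org
namespace stands in for whatever URIs a real dataset would use.

# Minimal sketch: parse Turtle triples with rdflib and print them.
from rdflib import Graph

turtle_data = """
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/> .

ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
ex:law_and_order_ci rdf:type ex:tv_show .
ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .
"""

graph = Graph()
graph.parse(data=turtle_data, format="turtle")

for subject, predicate, obj in graph:
    print(subject, predicate, obj)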

Other specifications finalized and drafted during this first era of Semantic
Web development describe all the ways in which RDF can be used. RDF in
Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers,
search engines, and other programs can glean meaning from a webpage. RDF Schema
and another standard called OWL allow RDF authors to demarcate the boundary
between valid and invalid RDF statements in their RDF documents. RDF Schema and
OWL, in other words, are tools for creating what are known as ontologies,
explicit specifications of what can and cannot be said within a specific
domain. An ontology might include a rule, for example, expressing that no
person can be the mother of another person without also being a parent of that
person. The hope was that these ontologies would be widely used not only to
check the accuracy of RDF found in the wild but also to make inferences about
omitted information.
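
As a concrete, deliberately simplified illustration of that rule, the sketch
below applies a single "mother implies parent" inference over plain
subject-predicate-object triples in Python. In practice the rule would be
stated in RDF Schema or OWL (for example as a subproperty relationship) and
applied by a reasoner; the names here are invented.

# Illustration only: one ontology-style rule, "motherOf implies parentOf",
# applied to a set of subject-predicate-object triples.
triples = {
    ("ex:alice", "ex:motherOf", "ex:carol"),
    ("ex:bob",   "ex:parentOf", "ex:dave"),
}

def infer_parents(facts):
    """For every motherOf triple, add the corresponding parentOf triple."""
    inferred = set(facts)
    for subject, predicate, obj in facts:
        if predicate == "ex:motherOf":
            inferred.add((subject, "ex:parentOf", obj))
    return inferred

for triple in sorted(infer_parents(triples)):
    print(triple)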

In 2006, Tim Berners-Lee posted a short article in which he argued that the
existing work on Semantic Web standards needed to be supplemented by a
concerted effort to make semantic data available on the web.10 Furthermore,
once on the web, it was important that semantic data link to other kinds of
semantic data, ensuring the rise of a data-based web as interconnected as the
existing web. Berners-Lee used the term “linked data” to describe this ideal
scenario. Though “linked data” was in one sense just a recapitulation of the
original vision for the Semantic Web, it became a term that people could rally
around and thus amounted to a rebranding of the Semantic Web project.

Berners-Lee’s article launched the second phase of the Semantic Web’s
development, where the focus shifted from setting standards and building toy
examples to creating and popularizing large RDF datasets. Perhaps the most
successful of these datasets was DBpedia, a giant
repository of RDF triplets extracted from Wikipedia articles. DBpedia, which
made heavy use of the Semantic Web standards that had been developed in the
first half of the 2000s, was a standout example of what could be accomplished
using the W3C’s new formats. Today DBpedia describes 4.58 million entities and
is used by organizations like the NY Times, BBC, and IBM, which employed
DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial
intelligence system.
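
DBpedia still runs a public SPARQL endpoint, so this kind of linked-data
lookup is easy to try. The sketch below assumes the endpoint at
https://dbpedia.org/sparql is reachable, that the requests library is
installed, and that the chosen resource has an English dbo:abstract; treat it
as illustrative rather than guaranteed.

# Minimal sketch: ask DBpedia's public SPARQL endpoint for an English abstract.
import requests

query = """
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Tim_Berners-Lee>
      <http://dbpedia.org/ontology/abstract> ?abstract .
  FILTER (lang(?abstract) = "en")
} LIMIT 1
"""

response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
bindings = response.json()["results"]["bindings"]
if bindings:
    print(bindings[0]["abstract"]["value"][:300])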

The third phase of the Semantic Web’s development involved adapting the W3C’s
standards to fit the actual practices and preferences of web developers. By
2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged
with a bunch of associated technologies of indeterminate purpose (XSLT, XPath,
XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu
Sporny, an entrepreneur and member of the W3C, had already started using JSON
at his company and wanted to find an easy way for RDFa and JSON to work
together.11 The result would be JSON-LD, which in essence was RDF reimagined
for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave
Longley, issued a draft specification of JSON-LD in 2010. For the next few
years, JSON-LD and an updated RDF specification would be the primary focus of
Semantic Web work at the W3C. JSON-LD could be used on its own or it could be
embedded within a