How to teach Git

The problem I found

Some of my professional experience has involved working in cross-functional teams, so I got to know how all my colleagues worked. I remember a company which had started using Git just a few weeks before I joined.

I found post-its on screens with 3 steps: first add, second commit, third push.


They didn’t know the reason behind those steps. They only knew that they should follow them in order not to get into trouble. However, problems happened frequently, so I decided to prepare a workshop about Git.

The idea

I love to have maps in my mind. I don’t write “mind maps”, because that is a well-known type of diagram; I’m talking about having frames, structures or any other kind of graphical representation in the mind. For example, I started learning addition by imagining dice in my mind.

So I prepared some drawings. It’s not necessary to be able to see the drawings to understand this post: because accessibility matters to me, I include an explanation for each of them.

Furthermore, in this case, it’s very important to teach the vocabulary. Otherwise, they won’t understand the messages from Git. The drawings are a good way to introduce that vocabulary.

A distributed version control system


The general drawing contains 4 areas distributed as follows:

  • The development environment with:
    • Working directory
    • Staging area or index
    • Local repository
  • A server with:
    • Remote repository

At this point, you can explain the benefits of a distributed version control system.
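
For example, here is a minimal sketch of commands that let attendees inspect each area (it assumes a repository that has already been cloned, with a remote named origin):

    $ git status      # reports the state of the working directory and the staging area
    $ git log         # lists the commits stored in the local repository
    $ git remote -v   # shows the URL of the remote repository on the server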

Cloning a repository


When cloning a repository, the data from the remote repository travels to two areas (see the sketch after this list):

  • Working directory
  • Local repository
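
A minimal sketch, where the URL and the resulting directory name are only placeholders:

    # Copy the remote repository's history into a new local repository
    # and check out a working directory in a single step.
    $ git clone https://example.com/team/project.git
    $ cd project
    $ git log --oneline   # the history is already in the local repository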

Making changes in the working directory


There are 2 types of files in the working directory:

  • Tracked: files that Git knows about.
  • Untracked: files that have not been added yet, so Git doesn’t know about them (the sketch after this list shows both states).
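
A quick way to show both states, assuming a brand-new file (notes.txt is just a placeholder name):

    $ touch notes.txt
    $ git status --short
    ?? notes.txt          # '??' means untracked: Git doesn't know about it yet
    $ git add notes.txt
    $ git status --short
    A  notes.txt          # 'A' means the file is now tracked and staged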

Updating the remote repository


When changes are ready in the working directory, they must be added to the staging area.

When there is a set of changes with a single purpose in the staging area, it’s time to create a commit in the local repository, with a message describing that purpose.

When there are one or several commits in the local repository ready to be shared with the rest of the world, they must be pushed to the remote repository.
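
Put together, those are exactly the three post-it steps, each moving changes one area to the right (the file name and the branch name main are only placeholders; older setups often use master):

    $ git add README.md                         # working directory -> staging area
    $ git commit -m "Explain the project setup" # staging area -> local repository
    $ git push origin main                      # local repository -> remote repository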

At this point, you can talk about the different states of a file in the development environment: modified, staged and committed.


Furthermore, you can explain:

  • how to show the changes to a file in the working directory: git diff
  • how to show the changes to a file in the staging area: git diff --staged
  • how a file can be changed again in the working directory after being added to the staging area (see the sketch after this list)
  • etc.
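
Here is that sketch, assuming a tracked file called file.txt:

    $ echo "a first change" >> file.txt
    $ git diff            # the change exists only in the working directory
    $ git add file.txt
    $ git diff            # shows nothing: the change moved to the staging area
    $ git diff --staged   # shows the staged change
    $ echo "a second change" >> file.txt
    $ git diff            # shows only the new, unstaged change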

Updating the development environment

Fetching


When executing git fetch, the data from the remote repository travels only to the local repository.
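
A minimal sketch, assuming a remote named origin and a branch named main (again, master in older setups):

    $ git fetch origin                      # updates the local repository only
    $ git status                            # the working directory is untouched
    $ git log main..origin/main --oneline   # inspects the commits that were fetched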

Pulling


When executing git pull, the data from the remote repository travels to two areas, as sketched after this list:

  • To the local repository: fetch
  • To the working directory: merge
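
In other words, the following two sequences are roughly equivalent (assuming remote origin and branch main):

    $ git pull origin main

    # is roughly:
    $ git fetch origin
    $ git merge origin/main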

If you care about the commit history, consider using git pull --rebase.
Instead of fetch + merge, it consists of fetch + rebase.
Your local commits will be replayed on top of the fetched ones, so you won’t see the well-known diamond shape in the commit history.
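
As a rough equivalence again:

    $ git pull --rebase origin main

    # is roughly:
    $ git fetch origin
    $ git rebase origin/main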


Next steps

You can add another area to the development environment to explain stashing: the dirty working directory.
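
A minimal sketch of the stash flow:

    $ git stash        # moves uncommitted changes out of the dirty working directory
    $ git stash list   # stashed changes are kept in a stack
    $ git stash pop    # reapplies the changes and drops them from the stack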

If people internalize these concepts, it will be easier for you to go a step further with branches, commit history, rebasing, etc., because you will have built a solid foundation.

Friendly reminder

I’ve worked with other version control systems (Visual SourceSafe, TFS and Subversion) and, in my humble experience, a lack of knowledge can be harmful with both an old tool and a new one. Don’t focus only on choosing a tool, but also on mastering it.


Kenya to teach Mandarin Chinese in primary school

Kenya will teach Mandarin in classrooms in a bid to improve job competitiveness and facilitate better trade and connections with China.

The country’s curriculum development institute (KICD) has said the design and scope of the Mandarin syllabus have been completed and the course will be rolled out in 2020. Primary school pupils from grade four (aged 10) onwards will be able to take it, the head of the agency, Julius Jwan, told Xinhua news agency. Jwan said the language is being introduced given Mandarin’s growing global importance and the deepening political and economic connections between Kenya and China.

“The place of China in the world economy has also grown to be so strong that Kenya stands to benefit if its citizens can understand Mandarin,” Jwan noted. Kenya follows in the footsteps of South Africa, which began teaching the language in schools in 2014, and Uganda, which is planning mandatory Mandarin lessons for high school students.

Kenya is currently in the midst of rolling out a new educational curriculum to improve educational quality and focus on skills that would make graduates more employable in the labor market. Just last year, education officials rolled out the roadmap for the first pilot of the new curricula for students in pre-school and standards one and two.

Then-education secretary Fred Matiang’i said the syllabus would be tested in order to see what to improve before it is fully implemented across all primary and secondary classes. Mandarin is set to be taught alongside local languages and other foreign ones, including French and Arabic.

As officials in Nairobi deliberated introducing the language in schools over the past few years, they received support from Beijing. A delegation of Chinese scholars helped with developing the courses, while scholarships were doled out to Kenyan graduate students to study in Chinese universities.

The beneficence is strategic for China, which has lent billions of dollars to Kenya, built a railway between its two major cities, held major cultural festivals in the east African state, and whose companies are involved in constructing everything from highways to apartments. Long before such institutes became prevalent across Africa, China set up the continent’s first Confucius Institute at the University of Nairobi.


How to Teach Artificial Intelligence Some Common Sense

Five years ago, the coders at DeepMind, a London-based artificial intelligence company, watched excitedly as an AI taught itself to play a classic arcade game. They’d used the hot technique of the day, deep learning, on a seemingly whimsical task: mastering Breakout,1 the Atari game in which you bounce a ball at a wall of bricks, trying to make each one vanish.

1 Steve Jobs was working at Atari when he was commissioned to create 1976’s Breakout, a job no other engineer wanted. He roped his friend Steve Wozniak, then at Hewlett-Packard, into helping him.

Deep learning is self-education for machines; you feed an AI huge amounts of data, and eventually it begins to discern patterns all by itself. In this case, the data was the activity on the screen—blocky pixels representing the bricks, the ball, and the player’s paddle. The DeepMind AI, a so-called neural network made up of layered algorithms, wasn’t programmed with any knowledge about how Breakout works, its rules, its goals, or even how to play it. The coders just let the neural net examine the results of each action, each bounce of the ball. Where would it lead?

To some very impressive skills, it turns out. During the first few games, the AI flailed around. But after playing a few hundred times, it had begun accurately bouncing the ball. By the 600th game, the neural net was using a more expert move employed by human Breakout players, chipping through an entire column of bricks and setting the ball bouncing merrily along the top of the wall.

“That was a big surprise for us,” Demis Hassabis, CEO of DeepMind, said at the time. “The strategy completely emerged from the underlying system.” The AI had shown itself capable of what seemed to be an unusually subtle piece of humanlike thinking, a grasping of the inherent concepts behind Breakout. Because neural nets loosely mirror the structure of the human brain, the theory was that they should mimic, in some respects, our own style of cognition. This moment seemed to serve as proof that the theory was right.

Then, last year, computer scientists at Vicarious, an AI firm in San Francisco, offered an interesting reality check. They took an AI like the one used by DeepMind and trained it on Breakout. It played great. But then they slightly tweaked the layout of the game. They lifted the paddle up higher in one iteration; in another, they added an unbreakable area in the center of the blocks.

A human player would be able to quickly adapt to these changes; the neural net couldn’t. The seemingly supersmart AI could play only the exact style of Breakout it had spent hundreds of games mastering. It couldn’t handle something new.

“We humans are not just pattern recognizers,” Dileep George, a computer scientist who cofounded Vicarious, tells me. “We’re also building models about the things we see. And these are causal models—we understand about cause and effect.” Humans engage in reasoning, making logical inferences about the world around us; we have a store of common-sense knowledge that helps us figure out new situations. When we see a game of Breakout that’s a little different from the one we just played, we realize it’s likely to have mostly the same rules and goals. The neural net, on the other hand, hadn’t understood anything about Breakout. All it could do was follow the pattern. When the pattern changed, it was helpless.

Deep learning is the reigning monarch of AI. In the six years since it exploded into the mainstream, it has become the dominant way to help machines sense and perceive the world around them. It powers Alexa’s speech recognition, Waymo’s self-driving cars, and Google’s on-the-fly translations. Uber is in some respects a giant optimization problem, using machine learning to figure out where riders will need cars. Baidu, the Chinese tech giant, has more than 2,000 engineers cranking away on neural net AI. For years, it seemed as though deep learning would only keep getting better, leading inexorably to a machine with the fluid, supple intelligence of a person.

But some heretics argue that deep learning is hitting a wall. They say that, on its own, it’ll never produce generalized intelligence, because truly humanlike intelligence isn’t just pattern recognition. We need to start figuring out how to imbue AI with everyday common sense, the stuff of human smarts. If we don’t, they warn, we’ll keep bumping up against the limits of deep learning, like visual-recognition systems that can be easily fooled by changing a few inputs, making a deep-learning model think a turtle is a gun. But if we succeed, they say, we’ll witness an explosion of safer, more useful devices—health care robots that navigate a cluttered home, fraud detection systems that don’t trip on false positives, medical breakthroughs powered by machines that ponder cause and effect in disease.

But what does true reasoning look like in a machine? And if deep learning can’t get us there, what can?

Gary Marcus is a pensive, bespectacled 48-year-old professor of psychology and neuroscience at New York University, and he’s probably the most famous apostate of orthodox deep learning.

Marcus first got interested in artificial intelligence in the 1980s and ’90s, when neural nets were still in their experimental phase, and he’s been making the same argument ever since. “It’s not like I came to this party late and want to pee on it,” Marcus told me when I met him at his apartment near NYU. (We are also personal friends.) “As soon as deep learning erupted, I said ‘This is the wrong direction, guys!’ ”

Back then, the strategy behind deep learning was the same as it is today. Say you wanted a machine to teach itself to recognize daisies. First you’d code some algorithmic “neurons,” connecting them in layers like a sandwich (when you use several layers, the sandwich gets thicker or deep—hence “deep” learning). You’d show an image of a daisy to the first layer, and its neurons would fire or not fire based on whether the image resembled the examples of daisies it had seen before. The signal would move on to the next layer, where the process would be repeated. Eventually, the layers would winnow down to one final verdict.

At first, the neural net is just guessing blindly; it starts life as a blank slate, more or less. The key is to establish a useful feedback loop. Every time the AI misses a daisy, the neural connections that led to the incorrect guess are weakened; when it guesses correctly, they are strengthened. Given enough time and enough daisies, the neural net gets more accurate. It learns to intuit some pattern of daisy-ness that lets it detect the daisy (and not the sunflower or aster) each time. As the years went on, this core idea—start with a naive network and train by repetition—was improved upon and seemed useful nearly anywhere it was applied.

But Marcus was never convinced. For him, the problem is the blank slate: It assumes that humans build their intelligence purely by observing the world around them, and that machines can too. But Marcus doesn’t think that’s how humans work. He walks the intellectual path laid down by Noam Chomsky,2 who argued that humans are born wired to learn, programmed to master language and interpret the physical world.

2 In 1975 the psychologist Jean Piaget and the linguist Noam Chomsky met in France for what would prove to be a historic debate. Grossly simplified, Piaget argued that human brains are blank-slate self-learning machines, and Chomsky that they are endowed with some preprogrammed smarts.

For all their supposed braininess, he notes, neural nets don’t appear to work the way human brains do. For starters, they’re much too data-hungry. In most cases, each neural net requires thousands or millions of examples to learn from. Worse, each time you want a neural net to recognize a new type of item, you have to start from scratch. A neural net trained to recognize only canaries isn’t of any use in recognizing, say, birdsong or human speech.

“We don’t need massive amounts of data to learn,” Marcus says. His kids didn’t need to see a million cars before they could recognize one. Better yet, they can generalize; when they see a tractor for the first time, they understand that it’s sort of like a car. They can also engage in counterfactuals. Google Translate can map the French equivalent of the English sentence “The glass was pushed, so it fell off the table.” But it doesn’t know what the words mean, so it couldn’t tell you what would happen if the glass weren’t pushed. Humans, Marcus notes, grasp not just the patterns of grammar but the logic behind it. You could give a young child a fake verb like pilk, and she’d likely be able to reason that the past tense would be pilked. She hasn’t seen that word before, of course. She hasn’t been “trained” on it. She has just intuited some logic about how language works and can apply it to a new situation.

“These deep-learning systems don’t know how to integrate abstract knowledge,” says Marcus, who founded a company that created AI to learn with less data (and sold the company to Uber in 2016).

Earlier this year, Marcus published a white paper on arXiv, arguing that, without some new approaches, deep learning might never get past its current limitations. What it needs is a boost—rules that supplement or are built in to help it reason about the world.


New schemes teach the masses to build AI

OVER THE past five years researchers in artificial intelligence have become the rock stars of the technology world. A branch of AI known as deep learning, which uses neural networks to churn through large volumes of data looking for patterns, has proven so useful that skilled practitioners can command high six-figure salaries to build software for Amazon, Apple, Facebook and Google. The top names can earn over $1m a year.

The standard route into these jobs has been a PhD in computer science from one of America’s elite universities. Earning one takes years and requires a disposition suited to academia, which is rare among more normal folk. Graduate students are regularly lured away from their studies by lucrative jobs.

That is changing. This month fast.ai, an education non-profit based in San Francisco, kicked off the third year of its course in deep learning. Since its inception it has attracted more than 100,000 students, scattered around the globe from India to Nigeria. The course and others like it come with a simple proposition: there is no need to spend years obtaining a PhD in order to practise deep learning. Creating software that learns can be taught as a craft, not as a high intellectual pursuit to be undertaken only in an ivory tower. Fast.ai’s course can be completed in just seven weeks.

Demystifying the subject, to make it accessible to anyone who wants to learn how to build AI software, is the aim of Jeremy Howard, who founded fast.ai with Rachel Thomas, a mathematician. He says school mathematics is sufficient. “No. Greek. Letters,” Mr Howard intones, thumping the table for punctuation.

It is working. A graduate from fast.ai’s first year, Sara Hooker, was hired into Google’s highly competitive AI residency programme after finishing the course, having never worked on deep learning before. She is now a founding member of Google’s new AI research office in Accra, Ghana, the firm’s first in Africa. In Bangalore, some 2,400 people are members of AI Saturdays, which follows the course together as a gigantic study group. Andrej Karpathy, one of deep learning’s foremost practitioners, recommends the course.

Fast.ai’s is not the only alternative AI programme. AI4ALL, another non-profit venture, works to bring AI education to schoolchildren in the United States who would otherwise not have access to it. Andrew Ng, another well-known figure in the field, has started his own online course, deeplearning.ai.

Mr Howard’s ambitions run deeper than loosening the AI labour market. His aim is to spread deep learning into many hands, so that it may be applied in as diverse a set of fields by as diverse a group of people as possible. So far, it has been controlled by a small number of mostly young white men, almost all of whom have been employed by the tech giants. The ambition, says Mr Howard, is for AI training software to become as easy to use and ubiquitous as sending an email on a smartphone.

Some experts worry that this will serve only to create a flood of dodgy AI systems which will be useless at best and dangerous at worst. An analogy may allay those concerns. In the earliest days of the internet, only a select few nerds with specific skills could build applications. Not many people used them. Then the invention of the world wide web led to an explosion of web pages, both good and bad. But it was only by opening up to all that the internet gave birth to online shopping, instant global communications and search. If Mr Howard and others have their way, making the development of AI software easier will bring forth a new crop of fruit of a different kind.