Hexbyte – Glen Cove – News Mastering Photography: Nature vs. Nurture -Hexbyte Glen Cove News


I have three kids and they are all so different: one is quite shy, one is super confident and the third never stops talking. They have all been raised the same, so I often wonder about ‘nature or nurture’.

I was recently in the Faroe Islands and had planned to make a video on the rules of photography but ended up discussing the nature vs. nurture argument for photography, which actually produced a more compelling result.

Evening rays on one of the many amazing fjords in the Faroe Islands.

Ever since I can remember, I have been fascinated by photos and rich visual communication. At school I had difficulties with English, reading and writing and it was only when I was in my twenties that I found out I was dyslexic. Although reading was a struggle, I loved to look at photo books and would spend hours studying them, pondering over the photos, and trying to work out why they looked so good.

I took up photography at the age of 13 and found it to be a great way to express myself – I was much more comfortable producing a photograph that told a story, rather than the written word. Even though I enjoyed being creative through photography, I didn’t have a natural talent for it and my early results left a lot to be desired!

Jump forward over 30 years and I now consider myself to be a competent landscape photographer. Practice doesn’t make perfect, and there is always something new to learn, but I feel that I can usually find a good composition when I go out shooting, and I now have a portfolio of photographs that I am proud of and that people are willing to buy so that they too can enjoy the images.

Spring – Printed on Fotospeed NST bright white paper.

So if I didn’t have a natural talent for photography, have I managed to nurture what little artistic talent I did have to make myself into a better photographer?

Ansel Adams once said, “There are no rules for good photographs. There are just good photographs.”

Although I agree with the sentiment behind this quote, I have actually spent the last few years building my YouTube channel and trying to explain to people what makes a good photo and how they can improve their photography. There are certain rules, or perhaps better to call them guidelines, that apply to landscape photography and that usually help us achieve better results.

Photography isn’t quite like the art of drawing a cartoon or painting a picture. It is a combination of technical know-how and artistic interpretation. If you have just one part of the puzzle, you aren’t likely to get the best results. Don’t get me wrong, give an accomplished artist a camera phone and they would likely produce something superior to a less artistic person. But a non-artist can also produce a great photo.

My wife, Ann, is a prime example – she can’t even draw a straight line but she is great at finding unique and interesting compositions for photographs (probably a result of spending hours out on location with me, listening to me rambling on about composition and getting excited about great light).

When it comes down to it, there are four elements that you need to master in photography: subject, light, composition and timing. These are discussed in my video on the four elements of landscape photography. It is really only the composition element that is artistic: how you go about placing all the elements in the scene in the most pleasing way, or perhaps more importantly, what you leave out.

Passing Storm, Faroe Islands.

So, can you learn this? Is there a set of rules for you to follow to improve your composition? Is there a limit to how good you can get by learning such rules and can you become more artistic?

Take this image for example – can you say what makes it a good photo or what can be improved?

Essence of the Faroes.

Try it with a friend. Critically consider ten photos that aren’t yours and explain to each other why they are good or bad, what you like and what you think could be improved – you’ll be surprised how useful an exercise this is.

In this video I consider a number of photographs and explain why I think one of the keys to becoming a great photographer is to study accomplished images and try to work out what makes them so good.

Even if you have all the elements in place, you also need patience (a quality that I wasn’t born with, but which I have learned to master) in order to wait for the best conditions and get the timing right. Light can make an enormous difference to a shot. Take a look at the two images above. The only difference is time. The light has changed significantly but the photographer hasn’t moved; it was just a case of waiting for the right moment.

I explore ‘nature or nurture’ a bit more in the video below, where I also discuss light and simplicity in more detail. What are your thoughts on nature vs. nurture? Let me know in the comments!


Hexbyte  Tech News  Wired Sophie Turner and Maisie Williams Aren’t Married


Sophie Turner and Maisie Williams are best friends, and pretty much have been since they took on the roles of Sansa and Arya Stark on Game of Thrones. Their fast friendship has often led people to wonder if they’re romantically involved. The truth is they’re not, but Turner understands why people think that.

“People do think that we’re a couple, sometimes,” Turner says in the WIRED Autocomplete Interview above. “It’s beautiful. It’s the purest friendship I’ve ever had in my life.”

So, for whoever asked the internet if Turner and Williams are married, we’re sorry if you’re disappointed with that answer. And if you’re wondering who Turner is married to, it’s Joe Jonas. They got hitched last month. But even so, Turner still thinks Williams is “that bitch.” Find out what else Turner and her Dark Phoenix costar Jessica Chastain had to say in response to the web’s most searched questions about them in the interview above.




Hexbyte  Tech News  Wired James Holzhauer’s ‘Jeopardy!’ Game Monday Is a Must-See


James Holzhauer, a gambler by trade, approaches Jeopardy! like a sport.

Jeopardy Productions, Inc.

It’s over. Thirty-three games, more than 1,100 correct responses, and $2,464,216 after first taking the Jeopardy! contestant podium, James Holzhauer lost. While his run failed to match Ken Jennings’ for either longevity or earnings—he fell just $56,484 short—Holzhauer has left as indelible a mark on the game. How did he do it? By not treating Jeopardy! like a game at all.

“To me, that’s the story,” says Buzzy Cohen, winner of the 2017 Jeopardy! Tournament of Champions. “When he showed up he had a plan, he had practiced, as opposed to just walking into the studio and saying ‘all right, here it goes,’ which is how most of us do it.”

In fairness, the playbook Holzhauer drew from is well-worn. Most Jeopardy! clues are assigned a static dollar value, based on difficulty. But three so-called Daily Doubles, hidden from view, allow contestants to wager any amount of their winnings to that point. “My approach isn’t complicated: Get some money, hit the Daily Doubles, bet big, and hope I run hot,” Holzhauer told WIRED in an email early on in his streak.

What sounds simple in theory becomes less so under the stage lights. And playing Jeopardy! to win is a different animal from playing to maximize your winnings. Even Jennings has acknowledged that during his own 75-game streak, he was “playing a game show like I had on my couch.”

Holzhauer, a gambler by trade, approached Jeopardy! like a sport. The distinction matters. Think of it like bowling: You know that to roll a strike, you need to knock down all of the pins. You’ve even bowled plenty of strikes yourself. But to string 12 strikes together requires preparation, dedication, and endurance. Holzhauer exhibited all three.

“He’s a bit of a perfect storm, where he has the sports gambling and sabermetrics background, but also has been doing trivia for a long time,” says Cohen. “It’s not like he just picked it up.”

You can slice Holzhauer’s Jeopardy! dominance any number of ways. He holds the 16 highest-scoring games in Jeopardy! history, and won more money faster than had previously seemed possible. Even his loss is a testament to how he played the game: Holzhauer finally went down not because he whiffed on an outlandish wager, as some had suspected he might. He didn’t flail on the Final Jeopardy round. In fact, he barely missed any responses at all. In most ways, it was a routine Holzhauer game.

But Holzhauer was up against Emma Boettcher, a user experience librarian who knew the playbook—and executed against it—as well as he did. Holzhauer led off the game by finding the first Daily Double, betting the max, and getting it correct. Business as usual. But in the Double Jeopardy round, Boettcher took control of the board early, hunted for Daily Doubles, and found both before Holzhauer could. When it mattered most, she went all in.

Which raises interesting questions about what, if anything, Holzhauer’s reign means for Jeopardy!’s future. Will he be its Dick Fosbury, transforming how the next generation of players wager and win? Or are he and Boettcher outliers, staccato blasts before the usual rhythm again takes hold?

“It’s sort of like, Steph Curry came along and is draining threes. But if a player tomorrow decides to just shoot threes, you’re going to have a lot of misses,” says Cohen.

Rumors of Holzhauer’s loss began circulating online Friday, and it was confirmed after the episode aired this morning in Montgomery, Alabama, per the Washington Post.

As for Holzhauer, don’t expect him to be separated from his buzzer for long. Jeopardy! has long invested in the afterlives of its most successful contestants, regularly running tournaments where luminaries like Jennings and Cohen and Julia Collins compete against one another. It’s a place where everyone knows the playbook, and everyone knows the answers. He’ll fit right in.

And who knows? With any luck, he’ll face off against Brad Rutter, whose all-time Jeopardy! winnings of $4,688,436 dwarf even Jennings’. Rutter has also never lost a game of Jeopardy! to a human opponent. (Damn you, Watson!) Which is to say, there are plenty of records left for Holzhauer to beat. And plenty of contestants—starting with Boettcher—who know how to beat his.




Hexbyte  News  Computers bollu/bollu.github.io


Contents of pixel-druid.com, mirrored at bollu.github.io

The idea is for the website to contain blog posts, along with visualizations of
math / graphics / programming.

The former has been semi-forced thanks to GSoC; as for the latter, it remains
to be seen. I’m hopeful, though 🙂

The classic explanation of word2vec, in skip-gram, with negative sampling,
in the paper and countless blog posts on the internet is as follows:

while(1) {
   1. vf = vector of focus word
   2. vc = vector of context word
   3. train such that (vc . vf = 1)
   4. for(0 <= i <= negative samples):
           vneg = vector of word *not* in context
           train such that (vf . vneg = 0)
}

Indeed, if I google "word2vec skipgram", the results I get all present this same explanation.

The original word2vec C implementation does not do what's explained above,
and is drastically different. Most serious users of word embeddings, who use
embeddings generated from word2vec do one of the following things:

  1. They invoke the original C implementation directly.
  2. They invoke the gensim implementation, which is transliterated from the
     C source to the extent that the variable names are the same.

Indeed, the gensim implementation is the only one that I know of which is
faithful to the C implementation.

The C implementation

The C implementation in fact maintains two vectors for each word, one where
it appears as a focus word, and one where it appears as a context word.
(Is this sounding familiar? Indeed, it appears that GloVe actually took this
idea from word2vec, which has never mentioned this fact!)
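
To make the two-vector setup concrete, here is a minimal NumPy sketch (the names syn0 and syn1neg come from the C source; the vocabulary size and embedding dimension are arbitrary placeholders of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 1000, 100

# syn0: the vector a word uses when it occurs as the *focus* word.
# Randomly initialized, scaled down by the embedding dimension.
syn0 = (rng.random((vocab_size, dim)) - 0.5) / dim

# syn1neg: the vector the same word uses when it occurs as a *context* word.
# Zero-initialized.
syn1neg = np.zeros((vocab_size, dim))

def score(focus: int, context: int) -> float:
    """The dot product that training pushes toward 1 for true
    (focus, context) pairs and toward 0 for negative samples."""
    return float(syn0[focus] @ syn1neg[context])
```

Every dot product is taken between a row of syn0 and a row of syn1neg; the two tables are never mixed.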

The setup is incredibly well done in the C code:

  • An array called syn0 holds the vector embedding of a word when it occurs
    as a focus word. This is randomly initialized.
https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L369
  for (a = 0; a < vocab_size; a++) for (b = 0; b < layer1_size; b++) {
    next_random = next_random * (unsigned long long)25214903917 + 11;
    syn0[a * layer1_size + b] = 
       (((next_random & 0xFFFF) / (real)65536) - 0.5) / layer1_size;
  }
  • Another array called syn1neg holds the vector of a word when it occurs
    as a context word. This is zero initialized.
https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L365
for (a = 0; a < vocab_size; a++) for (b = 0; b < layer1_size; b++)
  syn1neg[a * layer1_size + b] = 0;
  • During training (skip-gram, negative sampling, though other cases are
    also similar), we first pick a focus word. This is held constant throughout
    the positive and negative sample training. The gradients of the focus vector
    are accumulated in a buffer, and are applied to the focus word
    after it has been affected by both positive and negative samples.
if (negative > 0) for (d = 0; d < negative + 1; d++) {
  // if we are performing negative sampling, in the 1st iteration,
  // pick a word from the context and set the dot product target to 1
  if (d == 0) {
    target = word;
    label = 1;
  } else {
    // for all other iterations, pick a word randomly and set the dot
    // product target to 0
    next_random = next_random * (unsigned long long)25214903917 + 11;
    target = table[(next_random >> 16) % table_size];
    if (target == 0) target = next_random % (vocab_size - 1) + 1;
    if (target == word) continue;
    label = 0;
  }
  l2 = target * layer1_size;
  f = 0;

  // find dot product of original vector with negative sample vector
  // store in f
  for (c = 0; c < layer1_size; c++) f += syn0[c + l1] * syn1neg[c + l2];

  // set g = sigmoid(f) (roughly, the actual formula is slightly more complex)
  if (f > MAX_EXP) g = (label - 1) * alpha;
  else if (f < -MAX_EXP) g = (label - 0) * alpha;
  else g = (label - expTable[(int)((f + MAX_EXP) * (EXP_TABLE_SIZE / MAX_EXP / 2))]) * alpha;

  // 1. update the vector syn1neg,
  // 2. DO NOT UPDATE syn0
  // 3. STORE THE syn0 gradient in a temporary buffer neu1e
  for (c = 0; c < layer1_size; c++) neu1e[c] += g * syn1neg[c + l2];
  for (c = 0; c < layer1_size; c++) syn1neg[c + l2] += g * syn0[c + l1];
}
// Finally, after all samples, update syn0 from neu1e
https://github.com/tmikolov/word2vec/blob/20c129af10659f7c50e86e3be406df663beff438/word2vec.c#L541
// Learn weights input -> hidden
for (c = 0; c < layer1_size; c++) syn0[c + l1] += neu1e[c];
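
The control flow above can be condensed into a short NumPy sketch (my own simplification, not the author's code: a plain sigmoid instead of the precomputed expTable, no MAX_EXP clipping, and the negative-sample table lookup replaced by an explicit list):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(syn0, syn1neg, focus, context, neg_samples, alpha=0.025):
    """One skip-gram negative-sampling step. The (context, label=1) pair is
    trained first, then each negative sample with label=0. As in the C code,
    the gradient for the focus vector is accumulated in neu1e and applied
    only after all positive and negative samples have been processed."""
    neu1e = np.zeros_like(syn0[focus])
    for target, label in [(context, 1)] + [(t, 0) for t in neg_samples]:
        f = syn0[focus] @ syn1neg[target]
        g = (label - sigmoid(f)) * alpha
        neu1e += g * syn1neg[target]        # buffer the syn0 gradient
        syn1neg[target] += g * syn0[focus]  # update the context vector now
    syn0[focus] += neu1e                    # apply the buffered gradient last
```

Note what happens on the very first step: with syn1neg still all zeros, neu1e stays zero and the focus vector does not move at all.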

Why random and zero initialization?

Once again, since none of this is actually explained in the original papers
or on the web, I can only hypothesize.

My hypothesis is that since the negative samples come from all over the text
and are not really weighed by frequency, you can wind up picking any word,
and more often than not, a word whose vector has not been trained much at all.
If this vector actually had a value, then it could move the actually important
focus word randomly.

The solution is to set all negative samples to zero, so that only vectors
that have occurred somewhat frequently will affect the representation of
another vector.

It's quite ingenious, really, and until now, I'd never thought about
how important initialization strategies are.

Why I'm writing this

I spent two months of my life trying to reproduce word2vec, following
the paper exactly, reading countless articles, and simply not succeeding.
I was unable to reach the same scores that word2vec did, and it was not
for lack of trying.

I could not have imagined that the paper would have literally fabricated an
algorithm that doesn't work, while the implementation does something completely
different.

Eventually, I decided to read the sources, and spent three whole days convinced
I was reading the code wrong since literally everything on the internet told me
otherwise.

I don't understand why the original paper and the internet contain zero
explanations of the actual mechanism behind word2vec, so I decided to put
it up myself.

This also explains GloVe's radical choice of having a separate vector
for the negative context --- they were just doing what word2vec does, but
they told people about it :).

Is this academic dishonesty? I don't know the answer, and that's a heavy
question. But I'm frankly incredibly pissed, and this is probably the last
time I take a machine learning paper's explanation of the algorithm
seriously again --- from next time, I read the source first.

This is a section that I'll update as I learn more about the space. Since I'm studying
differential geometry over the summer, I hope to know enough about "symplectic manifolds".
I'll make this an append-only log, adding to the section as I understand more.

31st May
  • To perform hamiltonian monte carlo, we use the hamiltonian and its derivatives to provide
    a momentum to our proposal distribution --- That is, when we choose a new point from the
    current point, our probability distribution for the new point is influenced by our
    current momentum

  • For some integral necessary within this scheme, Euler integration doesn't cut it
    since the error diverges to infinity

  • Hence, we need an integrator that guarantees that the energy of our system is conserved.
    Enter the leapfrog integrator. This integrator is also time reversible -- We can run it
    forward for n steps, and then run it backward for n steps to arrive at the same state.
    Now I finally know how Braid was implemented, something that bugged the hell out of 9th grade me
    when I tried to implement Braid-like physics in my engine!

  • The actual derivation of the integrator uses Lie algebras, symplectic geometry, and other
    diffgeo ideas, which is great, because it gives me motivation to study differential geometry :)

  • Original paper: Construction of higher order symplectic integrators
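
The leapfrog scheme described above is only a few lines of code. Here is a 1-D Python sketch with unit mass (the harmonic-oscillator example is my own illustration, not from the post):

```python
def leapfrog(q, p, grad_U, eps, steps):
    """Leapfrog ("kick-drift-kick") integration of Hamiltonian dynamics with
    H(q, p) = U(q) + p^2/2: half-step the momentum, alternate full steps of
    position and momentum, then finish with a final half momentum step."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# Harmonic oscillator: U(q) = q^2 / 2, so grad_U(q) = q.
grad_U = lambda q: q
q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, grad_U, 0.1, 50)

# Time reversibility: flip the momentum and integrate the same number of
# steps, and we land back (up to float round-off) at the starting state.
q2, p2 = leapfrog(q1, -p1, grad_U, 0.1, 50)
```

Running it forward and then backward with the momentum flipped returns to the starting state, and the energy error stays bounded instead of diverging as it does with Euler integration.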

We create a simple monad called PL which allows for a single operation: sampling
from a uniform distribution. We then exploit this to implement MCMC using Metropolis–Hastings,
which lets us sample from arbitrary distributions. As a bonus, there is a small library to render
sparklines in the CLI.

For next time:

  • Using applicative to speed up computations by exploiting parallelism
  • Conditioning of a distribution wrt a variable

Source code

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE DeriveFunctor #-}
import System.Random
import Data.List(sort, nub)
import Data.Proxy
import Control.Monad (replicateM)
import qualified Data.Map as M


-- | Loop a monadic computation.
mLoop :: Monad m =>
      (a -> m a) -- ^ loop
      -> Int -- ^ number of times to run
      -> a -- initial value
      -> m a -- final value
mLoop _ 0 a = return a
mLoop f n a = f a >>= mLoop f (n - 1)


-- | Utility library for drawing sparklines

-- | List of characters that represent sparklines
sparkchars :: String
sparkchars = "_▁▂▃▄▅▆▇█"

-- Convert an int to a sparkline character
num2spark :: RealFrac a => a -- ^ Max value
  -> a -- ^ Current value
  -> Char
num2spark maxv curv =
   sparkchars !!
     (floor $ (curv / maxv) * (fromIntegral (length sparkchars - 1)))

series2spark :: RealFrac a => [a] -> String
series2spark vs =
  let maxv = if null vs then 0 else maximum vs
  in map (num2spark maxv) vs

seriesPrintSpark :: RealFrac a => [a] -> IO ()
seriesPrintSpark = putStrLn . series2spark

-- Probabilities
-- =============
type F = Float
-- | probability density
newtype P = P { unP :: Float } deriving(Num)

-- | prob. distributions over space a
newtype D a = D { runD :: a -> P }

uniform :: Int -> D a
uniform n =
  D $ \_ -> P $ 1.0 / fromIntegral n

(>$<) :: Contravariant f => (b -> a) -> f a  -> f b
(>$<) = cofmap

instance Contravariant D where
  cofmap f (D d) = D (d . f)

-- | Normal distribution with given mean
normalD :: Float ->  D Float
normalD mu = D $ \f -> P $ exp (- ((f-mu)^2))

-- | Distribution that takes on value x^p for 1 <= x <= 2.  Is normalized
polyD :: Float -> D Float
polyD p = D $ \f -> P $ if 1 <= f && f <= 2 then (f ** p) * (p + 1) / (2 ** (p+1) - 1) else 0

class Contravariant f where
  cofmap :: (b -> a) -> f a -> f b

data PL next where
    Ret :: next -> PL next -- ^ return  a value
    Sample01 :: (Float -> PL next) -> PL next -- ^ sample uniformly from a [0, 1) distribution

instance Monad PL where
  return = Ret
  (Ret a) >>= f = f a
  (Sample01 float2plnext) >>= next2next' =
      Sample01 $ \f -> float2plnext f >>= next2next'

instance Applicative PL where
    pure = return
    ff <*> fx = do
        f <- ff
        x <- fx
        return $ f x

instance Functor PL where
    fmap f plx = do
         x <- plx
         return $ f x

-- | operation to sample from [0, 1)
sample01 :: PL Float
sample01 = Sample01 Ret


-- | Run one step of MH on a distribution to obtain a (correlated) sample
mhStep :: (a -> Float) -- ^ function to score sample with, proportional to distribution
  -> (a -> PL a) -- ^ Proposal program
  -> a -- current sample
  -> PL a
mhStep f q a = do
 	a' <- q a
 	let alpha = f a' / f a -- acceptance ratio
 	u <- sample01
 	return $ if u <= alpha then a' else a

-- Typeclass that can provide me with data to run MCMC on it
class MCMC a where
    arbitrary :: a
    uniform2val :: Float -> a

instance MCMC Float where
	arbitrary = 0
	-- map [0, 1) -> (-infty, infty)
	uniform2val v = tan (-pi/2 + pi * v)


{-
-- | Any enumerable object has a way to get me the starting point for MCMC
instance (Bounded a, Enum a) => MCMC a where
     arbitrary = toEnum 0
     uniform2val v = let
        maxf = fromIntegral . fromEnum $ maxBound
        minf = fromIntegral . fromEnum $ minBound
        in toEnum $ floor $ minf + v * (maxf - minf)
-}


-- | Run MH to sample from a distribution
mh :: (a -> Float) -- ^ function to score sample with
 -> (a -> PL a) -- ^ proposal program
 -> a -- ^ current sample
 -> PL a
mh f q a = mLoop (mhStep f q) 100 a

-- | Construct a program to sample from an arbitrary distribution using MCMC
mhD :: MCMC a => D a -> PL a
mhD (D d) =
    let
      scorer = (unP . d)
      proposal _ = do
        f <- sample01
        return $ uniform2val f
    in mh scorer proposal arbitrary


-- | Run the probabilistic value to get a sample
sample :: RandomGen g => g -> PL a -> (a, g)
sample g (Ret a) = (a, g)
sample g (Sample01 f2plnext) = let (f, g') = random g in sample g' (f2plnext f)


-- | Sample n values from the distribution
samples :: RandomGen g => Int -> g -> PL a -> ([a], g)
samples 0 g _ = ([], g)
samples n g pl = let (a, g') = sample g pl
                     (as, g'') = samples (n - 1) g' pl
                 in (a:as, g'')

-- | count fraction of times value occurs in list
occurFrac :: (Eq a) => [a] -> a -> Float
occurFrac as a =
    let noccur = length (filter (==a) as)
        n = length as
    in (fromIntegral noccur) / (fromIntegral n)

-- | Produce a distribution from a PL by using the sampler to sample N times
distribution :: (Eq a, Num a, RandomGen g) => Int -> g -> PL a -> (D a, g)
distribution n g pl =
    let (as, g') = samples n g pl in (D (\a -> P (occurFrac as a)), g')


-- | biased coin
coin :: Float -> PL Int -- 1 with prob. p1, 0 with prob. (1 - p1)
coin p1 = do
    Sample01 (\f -> Ret $ if f < p1 then 1 else 0)


-- | Create a histogram from values.
histogram :: Int -- ^ number of buckets
          -> [Float] -- values
          -> [Int]
histogram nbuckets as =
    let
        minv :: Float
        minv = minimum as
        maxv :: Float
        maxv = maximum as
        -- value per bucket
        perbucket :: Float
        perbucket = (maxv - minv) / (fromIntegral nbuckets)
        bucket :: Float -> Int
        bucket v = floor (v / perbucket)
        bucketed :: M.Map Int Int
        bucketed = foldl (\m v -> M.insertWith (+) (bucket v) 1 m) mempty as
     in map snd . M.toList $ bucketed


printSamples :: (Real a, Eq a, Ord a, Show a) => String -> [a] -> IO ()
printSamples s as =  do
    putStrLn $ "***" <> s
    putStrLn $ "   samples: " <> series2spark (map toRational as)

printHistogram :: [Float] -> IO ()
printHistogram samples = putStrLn $ series2spark (map fromIntegral . histogram 10 $  samples)


-- | Given a coin bias, take samples and print bias
printCoin :: Float -> IO ()
printCoin bias = do
    let g = mkStdGen 1
    let (tosses, _) = samples 100 g (coin bias)
    printSamples ("bias: " <> show bias) tosses



-- | Approximate a normal distribution as a sum of coin flips (central limit theorem).
normal :: PL Float
normal = fromIntegral . sum <$> replicateM 5 (coin 0.5)


main :: IO ()
main = do
    printCoin 0.01
    printCoin 0.99
    printCoin 0.5
    printCoin 0.7

    putStrLn $ "normal distribution using central limit theorem: "
    let g = mkStdGen 1
    let (nsamples, _) = samples 1000 g normal
    -- printSamples "normal: " nsamples
    printHistogram nsamples


    putStrLn $ "normal distribution using MCMC: "
    let (mcmcsamples, _) = samples 1000 g (mhD $  normalD 0.5)
    printHistogram mcmcsamples

    putStrLn $ "sampling from x^4 with finite support"
    let (mcmcsamples, _) = samples 1000 g (mhD $  polyD 4)
    printHistogram mcmcsamples

Output

***bias: 1.0e-2
   samples: ________________________________________█_█_________________________________________________________
***bias: 0.99
   samples: ████████████████████████████████████████████████████████████████████████████████████████████████████
***bias: 0.5
   samples: __█____█__███_███_█__█_█___█_█_██___████████__█_████_████_████____██_█_██_____█__██__██_██____█__█__
***bias: 0.7
   samples: __█__█_█__███_█████__███_█_█_█_██_█_████████__███████████_████_█_███_████_██__█_███__██_███_█_█__█_█
normal distribution using central limit theorem: 
_▄▇█▄_
normal distribution using MCMC: 
__▁▄█▅▂▁___
sampling from x^4 with finite support
▁▁▃▃▃▄▅▆▇█_
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import qualified Data.Map.Strict as M

-- | This file can be copy-pasted and will run!

-- | Symbols
type Sym = String
-- | Environments
type E a = M.Map Sym a
-- | Newtype to represent derivative values
type F = Float
newtype Der = Der { under :: F } deriving(Show, Num)

infixl 7 !#
-- | We are indexing the map at a "hash" (Sym)
(!#) :: E a -> Sym -> a
(!#) = (M.!)

-- | A node in the computation graph
data Node = 
  Node { name :: Sym -- ^ Name of the node
       , ins :: [Node] -- ^ inputs to the node
       , out :: E F -> F -- ^ output of the node
       , der :: (E F, E (Sym -> Der)) 
                  -> Sym -> Der -- ^ derivative wrt to a name
       }

-- | @ looks like a "circle", which is a node. So we are indexing the map
-- at a node.
(!@) :: E a -> Node -> a 
(!@) e node = e M.! (name node)

-- | Given the current environments of values and derivatives, compute
-- the new value and derivative for a node.
run_ :: (E F, E (Sym -> Der)) -> Node -> (E F, E (Sym -> Der))
run_ ein (Node name ins out der) = 
  let (e', ed') = foldl run_ ein ins -- run all the inputs
      v = out e' -- compute the output
      dv = der (e', ed') -- and the derivative
  in (M.insert name v e', M.insert name dv ed')  -- and insert them

-- | Run the program given a node 
run :: E F -> Node -> (E F, E (Sym -> Der))
run e n = run_ (e, mempty) n

-- | Let's build nodes
nconst :: Sym -> F -> Node
nconst n f = Node n [] (\_ -> f) (\_ _ -> 0)

-- | Variable
nvar :: Sym -> Node 
nvar n = Node n [] (!# n) (\_ n' -> if n == n' then 1 else 0)
  
-- | binary operation
nbinop :: (F -> F -> F)  -- ^ output computation from inputs
 -> (F -> Der -> F -> Der -> Der) -- ^ derivative computation from outputs
 -> Sym -- ^ Name
 -> (Node, Node) -- ^ input nodes
 -> Node
nbinop f df


Hexbyte  News  Computers GitHub Package Registry will support Swift packages


On May 10, we announced the limited beta of GitHub Package Registry, a package management service that makes it easy to publish public or private packages next to your source code. It currently supports familiar package management tools: JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet), and Docker images, with more to come.

Today we’re excited to announce that we’ll be adding support for Swift packages to GitHub Package Registry. Swift packages make it easy to share your libraries and source code across your projects and with the Swift community.

Available on GitHub, Swift Package Manager is a single cross-platform tool for building, running, testing, and packaging your Swift code. Package configurations are written in Swift, making it easy to configure targets, declare products, and manage package dependencies. Together, the Swift Package Manager and GitHub Package Registry will make it even easier for you to publish and manage your Swift packages.

It’s essential for mobile developers to have the best tools in order to be more productive. With the growth of the Swift ecosystem, we’re thrilled to work together with the team at Apple to help create new workflows for Swift developers.

Since its launch, we’ve been amazed to see your excitement to get started with GitHub Package Registry. During this beta period, we’re committed to learning from communities and ecosystems alike about how it meets your needs and what we can do to make it even better. If you haven’t done so already, you can sign up for the limited beta now.



Hexbyte – Tech News – Ars Technica | Our first-look photos of Apple’s new Mac Pro and the Pro Display XDR


WWDC 2019 —

We took photos and asked a few questions about Apple’s new hardware.


Okay, from this angle, it really does look like an ultra-shiny cheese grater.

Samuel Axon

SAN JOSE, Calif.—Today, Apple introduced two very expensive pieces of pro-targeted hardware: the Mac Pro, and the Pro Display XDR. While we were not offered an opportunity to get any hands-on time with them, we did see behind-closed-doors live demonstrations and get an opportunity to photograph them both.

Apple is positioning these as direct competitors to the sort of video editing bay hardware that costs tens of thousands of dollars, not as mass-market consumer products. Judged on that scale, these seem like great bargains, albeit only for a few people in specialized fields.

The big surprise is the modular Mac Pro, so let’s start there.

Hexbyte – Tech News – Ars Technica | Mac Pro

  • This is the new Mac Pro.

  • And here’s a rear view.

  • Let’s zoom in to see some of the ports in this configuration.

  • There’s more to see at the bottom.

  • On top, you’ll notice a couple more ports, a power button, and this handle. When you grab it, you can twist it to pull the entire cover off in one motion for 360-degree access to the internals.

  • This is what the frame looks like with nothing in it.

  • These stands can optionally be replaced with wheels.

  • The cheese-grater design serves a cooling function, but it’s also a deliberate nod to the past.

  • Okay, from this angle, it really does look like an ultra-shiny cheese grater.

Photos: Samuel Axon

We almost couldn’t believe it when we saw it announced—it seems practically un-Apple at this point, but the Mac Pro is a tower PC with modular components. It has a cheese-grater-like design that, as noted, harkens back to the previous Mac tower from many years ago.

That grater design comes with a function, not just a form: it’s critical to the machine’s cooling system. This system-wide solution (that is to say, there’s no separate cooling on the GPU) places three giant fans on the front and a blower on the other side; there are two isolated thermal zones. There is, however, a very large, separate heatsink for the CPU. When idle, the Mac Pro is quieter than an iMac Pro. We saw it connected to two Pro Display XDR monitors playing two 6K videos, and it was inaudible to us over the quiet air conditioning vent in the room.

The Mac Pro doubles the number of PCIe expansion slots over that classic tower, with a total of eight. But the Mac Pro isn’t exactly like a PC desktop in that it’s all about modules made by Apple’s partners. You can load it up with MPX modules containing ha


Hexbyte – Tech News – Ars Technica | Answers to some of your iTunes questions: Old libraries, Windows, and more


WWDC 2019 —

Plus, 4K, HDR, and Dolby Atmos over HDMI.


  • Apple will replace iTunes with Music, Podcasts, and TV on Mac.

  • This is what syncing your phone in Finder will look like. It’s quite similar to the current iTunes interface.

  • Apple Music will bring in your old library, ostensibly without any issues. Ripped CDs, MP3s, and more will still be supported.

  • Dolby Atmos is supported in the new Apple TV app, but you’ll need the latest Mac laptops to send that out to your home theater system.

SAN JOSE, Calif.—After much speculation and fanfare in the press, Apple confirmed today that it will sunset iTunes in the next version of macOS and spin its functionality into three new apps—Apple Music, Apple Podcasts, and Apple TV. As we noted earlier, this marks the end of an era of sorts on the Mac—but there were plenty of unanswered questions. What features will Music retain from iTunes? And what happens to Windows users who are dependent on iTunes?

While some details are still fuzzy and will remain that way until we start digging into the beta releases, we got some broad answers from Apple on those top-level questions.

Hexbyte – Tech News – Ars Technica | Old iTunes libraries and files

Apple Music in macOS Catalina will import users’ existing music libraries from iTunes in their entirety, Apple says. That includes not just music purchased on iTunes, but rips from CDs, MP3s, and the like added from other sources.

Further, the existing feature that synced users’ non-iTunes files to the cloud will continue to work, and of course, users will still be able to buy songs from Apple. Apple is not turning Apple Music into a streaming-only experience. For the most part, the end of iTunes seems to be an end in name only: key features will be retained in the Music app.

Hexbyte – Tech News – Ars Technica | Syncing iPhones, iPads, and iPods

Apple already explained during the keynote that syncing with and managing your iOS devices from your Mac—which used to be an iTunes task—will now happen within Finder, Apple’s file-management application. When you plug your iPh


Hexbyte – News – Science/Nature | Mysterious flashes of light spotted on moon – KWQC-TV6


Updated: Mon 9:16 PM, Jun 03, 2019

(KWQC) – Stargazers report seeing short, random flashes of light coming from the surface of the moon, sometimes as often as multiple times in a week.

Scientists have been aware of the “transient lunar phenomena” for years but still do not know what is causing them.

Theories ranging from meteor impacts to seismic activity are among the possibilities debated thus far.

German astronomer Hakan Kayal, a professor of space technology at the University of Würzburg, believes a high-powered, remote-controlled telescope pointed at the moon 24 hours a day could yield the answer, according to a USA Today report.

The telescope located in Spain will take photos and videos anytime it detects a flash on the moon, thereby hopefully giving scientists more data to crack the mystery.

Kayal said interest in the moon flashes is high due to a new race to the moon that’s underway among China, India and the United States, as well as several private ventures, according to USA Today.


Hexbyte – Science and Tech iOS 13: Every new feature you need to know about now – CNET


Apple’s changes in iOS 13 for iPhone are coming soon.


James Martin/CNET

Soon iOS 13, Apple’s newest software for iPhones, will bring a slew of features big and small to your phone. Dark mode, new photos tools and a swipe-able keyboard are some of the bigger ones, with new Maps tools, security features and the ability to customize Memoji avatars folded in for good measure. While Apple highlighted certain features, keep in mind that the company often reserves some surprises for the iPhone reveal each September. There may be more features yet to come. 

One big change is that iOS 13 no longer directly powers the iPad. Apple split off a new OS just for tablets, called iPadOS. The new iPadOS is based on iOS for iPhone, so you’ll find similarities with the phone’s core features there.

The iOS 13 unveiling at Apple’s annual WWDC developer conference comes just weeks after Google, Silicon Valley’s other titan of tech, trickled out more details about Android Q, Apple’s chief software rival. Today with iOS 13, it’s Apple’s turn to woo app-makers and wow future buyers with everything that iPhones and iPads running iOS 13 will soon be able to do.


Apple’s ability to engage buyers with iOS 13 is particularly important in 2019. The iPhone-maker has seen iPhone sales slow in step with competitors across the board. Meanwhile, the next iPhones will likely lag behind Android rivals in key features like support for 5G speeds, periscope zoom and a standalone night mode for ultraclear camera shots. But over the years, Apple has proven that it can create must-have software tools and apps, like FaceTime video and iMessage.

The iOS 13 developer beta is available today, with the final version coming to iPhones this fall. Look for the public iOS 13 beta to arrive in July.

Hexbyte – Science and Tech Dark mode for all

Hexbyte – Science and Tech That swipey keyboard

Android users have been swiping their keyboards to type for years, through a number of third-party apps, like Swype. At long last, Apple has added the ability, letting you trace a word to spell it out. 

Apple calls it QuickPath typing. In theory, it’s faster and just as accurate as pecking away at the virtual keyboard, and you still get spelling suggestions as you go along.

The feature is especially useful for one-handed typing.


Now you can swipe in addition to typing.


Apple

Hexbyte – Science and Tech Portrait lighting for photos, rotate a video

A new photos tab gives you access to some of the new tools Apple’s adding here. For example, you can now remove duplicate photos and highlight best shots.

Portrait lighting, the tool within your iPhone’s native camera app, adds more lighting effects, including one to smooth your skin, and now lets you change the intensity and location of the light.

New editing filters add effects such as vignette, vibrance, auto enhance and noise reduction.

Photo editing gets a boost, too, with a new ability to adjust pictures by tapping and dragging with your finger. The editing tools also come to video, which means — yes — you can rotate a video if you accidentally shoot it in the wrong orientation. You can apply the new filters and video effects as well.

Other new camera features in iOS 13

  • The photo apps will automatically organize photos by year, month and date, which will make it easier for you to find photos.
  • Live photos and videos play as you scroll.
  • View photos based on each day, month or year.

New photo tools will come to your iPhone in iOS 13.


James Martin/CNET

Hexbyte – Science and Tech Find My iPhone and Find My Friends join forces

The rumors were right. Apple folded Find My iPhone and Find My Friends into a single app called Find My. While locating nearby friends is fine, the real value is in finding your lost or missing devices (e.g., the iPhone that fell behind the couch) even when they’re offline, using a Bluetooth beacon.

The tool is encrypted and anonymous, Apple says, and it won’t let phone thieves wipe or reactivate your iPhone to bypass it.


Hexbyte – Science and Tech Sign in with Apple won’t share your email address

A new privacy feature called Sign in with Apple logs you into accounts and apps without having to hand over your email address, which Apple says will protect users from third-party apps that want to track them.

This is Apple’s version of logging in with Facebook and Google, with one major exception. Those tools can be used to track you online, but Apple’s version will use your iPhone or iPad to authenticate your credentials when you log in. You tap to authenticate with Face ID without revealing any personal information about yourself. 

You can also choose to share or hide your email address, and can ask Apple to create a random email for the app or service that forwards to your actual email address, therefore masking your real identity without making you use a junk account.

Apple also blocks apps from tracking your location via Wi-Fi and Bluetooth, and lets you decide whether apps should ask your permission each time they request your location data.


iOS 13 will add Memoji avatars to Messages.


James Martin/CNET

Hexbyte – Science and Tech Siri finds a new voice

Siri, Apple’s voice assistant, gets an audio update in iOS 13. Instead of clipped voices, Apple hopes the new Siri sounds smoother and more natural to your ears. Using AI software (specifically, a neural text-to-speech network), Siri will speak with fewer gaps and non-human-sounding modulations.

iOS 13’s Siri also works better with AirPods, the HomePod, CarPlay and Safari:

  • Create personalized shortcuts using a new Shortcuts app.
  • Suggested automations help you customize routines and create templates.
  • Siri reads messages as soon as they arrive and you can instantly respond.
  • Share a movie or song with friend with one tap.
  • Hand off a phone call or music from your iPhone to your HomePod.
  • CarPlay: Siri smart suggestions work here, like suggesting you open your garage door when you get close to home.
  • Siri Suggestions comes to the Safari browser.


Hexbyte – Science and Tech Memoji avatars come to Messages, stickers

Apple’s Messages app will now get support for Memoji profiles, which puts a thumbnail of your Memoji (an emoji of your face) into the Messages app. New controls let you customize it in depth, with makeup and even braces for your teeth. You also get a sticker pack across your iOS 13 devices.

iMessages will also now work on Dual SIM phones (unfortunately, we don’t have more detail than that).

Hexbyte – Science and Tech New apps in iOS 13

  • Mail: Gets rich new fonts.
  • Notes: A new gallery view, support for shared folders.
  • Reminders: You can add details for when and where to remind you of an item.
  • Smart lists: Will let you tag a person in order to trigger sending a notification to another person, for example when you set up a time to talk.
  • Maps: Gets Apple CarPlay support by the end of 2019. You’ll be able to see roads, beaches, parks and buildings, and tag places as favorites. Collections will give you a list of favorites to share with friends. Look Around will give you a high-def 3D view of the area that moves smoothly down the street, letting you tap labels to learn more about new places.


Hexbyte – Science and Tech More new iOS 13 features

  • Send call spam straight to voice mail and silence unknown callers.
  • Mute thread in Mail.
  • Add attachments to events in Calendar.
  • Time-synced lyrics when you play music.
  • Support for 3D AR apps like Minecraft Earth, coming to iOS 13 this summer.
  • Face ID unlocking is now 30 percent faster.
  • Apps launch 2x faster.
  • Downloads are 50% smaller and updates 60% smaller.
  • Low data mode
  • 38 new language keyboards
  • Language selection per app
