Bookmarks for July 20th through July 24th

These are my links for July 20th through July 24th:

  • Ask HN: Best-architected open-source business applications worth studying? | Hacker News
  • Monospaced Programming Fonts with Ligatures | Hacker News
  • The language of choice – Propositional logic was discovered by the Stoics around 300 B.C., only to be abandoned in later antiquity and rebuilt in the 19th century by George Boole’s successors. One of them, Charles Peirce, saw its significance for what we now call logic circuits, yet that discovery too was forgotten until the 1930s. In the ’50s John McCarthy invented conditional expressions, casting the logic into the form we’ll study here; then in 1986 Randal Bryant repeated one of McCarthy’s constructions with a crucial tweak that made his report “for many years the most cited paper in all of computer science, because it revolutionized the data structures used to represent Boolean functions” (Knuth). Let’s explore and code up some of this heritage of millennia, and bring it to bear on a suitable challenge: playing tic-tac-toe.

    Then we’ll tackle a task that’s a little more practical: verifying a carry-lookahead adder circuit. Supposedly logic gets used all the time for all kinds of serious work, but for such you’ll have to consult the serious authors; what I can say myself, from working out the code to follow, is that the subject offers a fun playground plus the most primitive form of the pun between meaning and mechanism.

    You’re encouraged to read with this article’s code cloned and ready; a rough sketch of the conditional-expression idea appears just below.
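    To make the conditional-expression idea concrete, here is a minimal sketch in Python. It is my own illustration, not code from the linked article, and all names in it (Choice, evaluate, and so on) are assumptions: a Boolean function is represented as either a constant 0/1 or an if-then-else test on one variable.

        # A minimal sketch of McCarthy-style conditional expressions for Boolean
        # functions (my own illustration; the names and structure are assumptions,
        # not the linked article's code).
        from typing import Union

        class Choice:
            """A conditional expression: if `var` is 1, take `if1`, else `if0`."""
            def __init__(self, var, if0, if1):
                self.var, self.if0, self.if1 = var, if0, if1

        Node = Union[int, Choice]  # a node is the constant 0/1 or a Choice

        def evaluate(node: Node, env: dict) -> int:
            """Evaluate a conditional expression under an assignment of variables to 0/1."""
            while isinstance(node, Choice):
                node = node.if1 if env[node.var] else node.if0
            return node

        # x AND y as a nested conditional: if x then (if y then 1 else 0) else 0
        x_and_y = Choice('x', 0, Choice('y', 0, 1))
        assert evaluate(x_and_y, {'x': 1, 'y': 1}) == 1
        assert evaluate(x_and_y, {'x': 1, 'y': 0}) == 0

    Bryant’s tweak, roughly, was to fix one global variable order and share identical subexpressions, which makes the reduced diagram canonical: two circuits compute the same function exactly when they reduce to the same diagram, and that is what makes verification tasks like the adder check tractable.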

 

Bookmarks for July 18th through July 19th

These are my links for July 18th through July 19th:

  • Dice-O-Matic hopper and elevator – GamesByEmail
  • Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation | The Computational Propaganda Project
  • The limitations of deep learning – by Francois Chollet, 17 July 2017.
    This post is adapted from Section 2 of Chapter 9 of my book, Deep Learning with Python (Manning Publications). It is part of a series of two posts on the current limitations of deep learning, and its future.

    This post is targeted at people who already have significant experience with deep learning (e.g. people who have read chapters 1 through 8 of the book). We assume a lot of pre-existing knowledge.

    Deep learning: the geometric view

    The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it".
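    As a toy illustration of that claim, here is my own sketch (not code from Chollet’s post) of the simplest possible parametric model trained with gradient descent: one weight and one bias fitted to a linear function by repeatedly stepping down the gradient of the squared error.

        # A toy parametric model trained with gradient descent (my own sketch,
        # not from the linked post): fit y = 3x + 1 with one weight and one bias.
        import random

        xs = [random.uniform(-1, 1) for _ in range(200)]
        ys = [3.0 * x + 1.0 for x in xs]       # the "true" function to learn

        w, b = 0.0, 0.0                        # model: y_hat = w * x + b
        lr = 0.1                               # learning rate

        for _ in range(500):
            # gradients of mean squared error with respect to w and b
            grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
            grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
            w -= lr * grad_w
            b -= lr * grad_b

        print(w, b)                            # approaches w ≈ 3, b ≈ 1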

 

Bookmarks for July 16th