From Gates to Gradients

How logic circuits become learning machines

Machine learning can feel like magic, or like impenetrable mathematics. Neither impression is accurate. This interactive guide builds from something concrete (logic gates: the AND, OR, and NOT you might remember from school) to something powerful (neural networks that can learn almost any pattern). The key insight: they are all doing the same thing, dividing space into regions.

Interactive · Education · Machine learning · Mxwll

Category: Interactive Design
Technology: React, TypeScript, SVG
Data: Procedurally generated datasets
[Figure: logic_to_ml_progression.png]

The same classification problem solved three ways: Boolean logic (sharp quadrants), decision tree (axis-aligned rectangles), neural network (smooth curves).

[Interactive: AND gate circuit]

Instructions: click the input nodes (circles) to toggle between 0 and 1, and watch how signals propagate through the circuit.

Truth table (AND):

A  B  Out
0  0   0
0  1   0
1  0   0
1  1   1

Logic gates are the atoms of computation. AND outputs 1 only when both inputs are 1. OR outputs 1 if either input is 1. From these primitives, you can build any computable function — including the computer you're using right now.
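The composition of these primitives can be sketched in a few lines. XOR, which reappears throughout this piece, is not a primitive itself but falls straight out of AND, OR, and NOT (the function names here are illustrative):

```typescript
// Gates as pure functions on bits.
type Bit = 0 | 1;

const AND = (a: Bit, b: Bit): Bit => (a === 1 && b === 1 ? 1 : 0);
const OR  = (a: Bit, b: Bit): Bit => (a === 1 || b === 1 ? 1 : 0);
const NOT = (a: Bit): Bit => (a === 1 ? 0 : 1);

// Composition: XOR = (a AND NOT b) OR (NOT a AND b).
const XOR = (a: Bit, b: Bit): Bit => OR(AND(a, NOT(b)), AND(NOT(a), b));

// Reproduce the AND truth table above.
for (const a of [0, 1] as Bit[]) {
  for (const b of [0, 1] as Bit[]) {
    console.log(a, b, AND(a, b));
  }
}
```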

[Interactive: classification playground. Axes: Feature 1, Feature 2. Controls: Dataset, Model. Legend: Class 0, Class 1.]

What you're seeing: the background colour shows what the model would predict for any new point in that region. Blue regions predict the blue class; pink regions predict the pink class.

Accuracy: 93%

Try this: switch to the XOR pattern with a Linear model. The accuracy drops to ~50%; no straight line can separate the classes. Now try Polynomial or k-NN.

Classifiers divide feature space into regions. A linear model can only draw straight boundaries. Some patterns need curves — and some need the flexibility of neural networks.
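The "Try this" experiment can be checked by hand. A linear model predicts by thresholding w·x + b, and for the four XOR corners no choice of line gets all four right. A minimal sketch (the grid search and weights are illustrative, not the playground's implementation):

```typescript
// A linear classifier: threshold w·x + b at zero.
const predictLinear = (w: [number, number], b: number, x: [number, number]): 0 | 1 =>
  w[0] * x[0] + w[1] * x[1] + b > 0 ? 1 : 0;

// The four XOR corners: label 1 when exactly one coordinate is 1.
const points: Array<{ x: [number, number]; label: 0 | 1 }> = [
  { x: [0, 0], label: 0 },
  { x: [0, 1], label: 1 },
  { x: [1, 0], label: 1 },
  { x: [1, 1], label: 0 },
];

const accuracy = (w: [number, number], b: number): number =>
  points.filter((p) => predictLinear(w, b, p.x) === p.label).length / points.length;

// Search a grid of candidate lines: the best any line manages is 3 of 4.
let best = 0;
for (let w0 = -2; w0 <= 2; w0 += 0.5)
  for (let w1 = -2; w1 <= 2; w1 += 0.5)
    for (let b = -2; b <= 2; b += 0.5)
      best = Math.max(best, accuracy([w0, w1], b));

console.log(best); // 0.75 — no straight line separates XOR
```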

[Interactive: progression explorer. Stage: Boolean Logic. Pattern: XOR (sharp quadrants).]

Logic gates implement exact functions. AND, OR, NOT combine to create any computable function. The decision boundary is perfectly sharp — a point is either in or out. You write the rules by hand.

Key properties: explicit rules (yes); learned from data (no); handles uncertainty (no).

The progression from Boolean logic to neural networks is one of increasing flexibility. Logic gates implement fixed rules. Decision trees learn axis-aligned splits. Neural networks learn smooth, arbitrary boundaries — but the core operation (combining inputs to produce outputs) remains the same.
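The progression can be made concrete with the XOR region expressed three ways: as a Boolean rule, as nested axis-aligned splits, and as a tiny two-hidden-unit network. A sketch with hand-set weights (in practice the network's weights would be learned, and all names here are illustrative):

```typescript
// 1. Boolean logic: an exact rule with a razor-sharp boundary.
const xorRule = (x: number, y: number): number =>
  (x > 0.5) !== (y > 0.5) ? 1 : 0;

// 2. Decision tree: nested axis-aligned splits carve the same quadrants.
const xorTree = (x: number, y: number): number =>
  x > 0.5 ? (y > 0.5 ? 0 : 1) : (y > 0.5 ? 1 : 0);

// 3. Tiny neural network: two sigmoid hidden units, smooth boundary.
const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));
const xorNet = (x: number, y: number): number => {
  const h1 = sigmoid(10 * (x + y - 0.5)); // active when x + y > 0.5
  const h2 = sigmoid(10 * (x + y - 1.5)); // active when x + y > 1.5
  return sigmoid(10 * (h1 - h2 - 0.5));   // high only in the band between
};

// All three agree at the corners; only the network varies smoothly in between.
console.log(xorRule(0, 1), xorTree(0, 1), xorNet(0, 1).toFixed(2));
```

The same input-combining operation underlies all three; what changes is whether the combination is hand-written, split-by-split, or weighted and squashed.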

The Challenge

Machine learning has a reputation problem. To non-specialists, it appears as either impenetrable mathematics or magical black boxes. Neither framing helps people understand what these systems actually do, why they work, or where they fail. For Mxwll, we wanted to build a bridge from the familiar (logic gates, the kind you might have encountered in a physics class) to the unfamiliar (neural networks, the kind that power modern AI).

Background

Boolean logic is the foundation of all digital computation. AND gates output 1 only if both inputs are 1. OR gates output 1 if either input is 1. NOT gates flip the input. From these primitives, you can build any computable function - including arithmetic, memory, and entire computers.

The key insight connecting logic to machine learning is the concept of a decision boundary. A Boolean function divides its input space into regions that map to 0 or 1. A machine learning classifier does the same thing - but instead of the boundary being hand-designed, it is learned from examples.

The simplest ML classifier (logistic regression) learns a single straight line to separate classes. More complex models (polynomial features, neural networks) learn curved or wiggly boundaries. The tradeoff is always between flexibility and the risk of overfitting - memorising the training data rather than learning the underlying pattern.
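That simplest classifier fits in a few lines: logistic regression trained by gradient descent on the log loss. A bare-bones sketch on a hand-made, linearly separable toy set (the data, learning rate, and epoch count are all illustrative):

```typescript
const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

// Toy data: class 1 sits above the line x + y = 1, class 0 below it.
const X: Array<[number, number]> = [
  [0.1, 0.2], [0.3, 0.1], [0.2, 0.4],
  [0.9, 0.8], [0.7, 0.9], [0.8, 0.6],
];
const Y = [0, 0, 0, 1, 1, 1];

// Learn the line w·x + b by stochastic gradient descent on log loss.
const w = [0, 0];
let b = 0;
const lr = 0.5;
for (let epoch = 0; epoch < 2000; epoch++) {
  for (let i = 0; i < X.length; i++) {
    const p = sigmoid(w[0] * X[i][0] + w[1] * X[i][1] + b);
    const err = p - Y[i]; // gradient of log loss w.r.t. the logit
    w[0] -= lr * err * X[i][0];
    w[1] -= lr * err * X[i][1];
    b -= lr * err;
  }
}

const predict = (x: [number, number]): number =>
  sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5 ? 1 : 0;

console.log(X.map(predict)); // the learned line separates the two classes
```

The learned boundary is a single straight line, which is exactly why this model fails on XOR; polynomial features or hidden layers buy the curvature, at the cost of a larger overfitting risk.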