DIFFERENTIABLE LOGIC GATES

STATUS: [AI RESEARCH] // FRAMEWORK: PYTORCH
PYTORCH PYTHON NEURAL NETS RESEARCH

// ABSTRACT

This research project explores the intersection of neural networks and symbolic logic. We propose an interpretable neural architecture that learns to approximate Boolean logic gates (AND, OR, XOR, NOT) through gradient descent. By using temperature annealing, the network transitions from continuous, differentiable functions to discrete, hard logic gates.
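The abstract above names soft approximations of AND, OR, XOR, and NOT. One common product-logic relaxation is sketched below; it is an illustrative choice, not necessarily the exact formulation used in this project. Each function is differentiable on [0, 1] yet reproduces the hard Boolean truth table at the corners {0, 1}.

```python
# Product-logic relaxations of Boolean gates (illustrative sketch,
# not necessarily the project's exact formulation).
def soft_not(a):
    return 1.0 - a

def soft_and(a, b):
    return a * b

def soft_or(a, b):
    return a + b - a * b

def soft_xor(a, b):
    return a + b - 2.0 * a * b

# At the corners {0, 1} these match the discrete gates:
print(soft_and(1.0, 1.0))  # 1.0
print(soft_xor(1.0, 0.0))  # 1.0
print(soft_xor(1.0, 1.0))  # 0.0
```

Because the relaxations are polynomials in their inputs, gradients flow through them, which is what makes training a circuit of such gates by backpropagation possible.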

// METHODOLOGY

The core innovation involves a custom activation function that mimics the behavior of logic gates. During training, a temperature parameter is gradually lowered, sharpening the activation curve until it behaves like a step function. This allows the network to be trained with standard backpropagation while converging to a symbolic logic circuit.
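The annealing idea above can be sketched with a temperature-controlled sigmoid: dividing the pre-activation by a temperature T sharpens the curve, and as T approaches 0 the activation approaches a hard step. The centering at 0.5 and the specific schedule are illustrative assumptions, not the project's exact activation.

```python
import math

def soft_step(x, temperature):
    """Sigmoid centered at 0.5; approaches a hard step as temperature -> 0."""
    return 1.0 / (1.0 + math.exp(-(x - 0.5) / temperature))

# High temperature: smooth and differentiable, useful for backpropagation.
print(round(soft_step(0.9, 1.0), 3))   # ~0.599
# Low temperature: nearly a hard step, i.e. discrete gate behavior.
print(round(soft_step(0.9, 0.01), 3))  # ~1.0
```

In training, the temperature would be lowered on a schedule (e.g. exponential decay per epoch), so early updates see smooth gradients while the converged network behaves discretely.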

// RESULTS

The model successfully learned to replicate complex Boolean circuits from input-output pairs alone. This demonstrates the potential of neuro-symbolic AI, where neural networks learn interpretable, rule-based logic that can be extracted and verified.
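Learning a gate from input-output pairs alone, as described above, can be sketched in miniature: gradient descent on a single sigmoid unit recovers AND from its truth table, and thresholding the trained unit extracts a hard, verifiable gate. The single-unit model, loss, and learning rate here are illustrative assumptions, far simpler than the full architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Target truth table: AND (an illustrative choice of gate).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.1, -0.1, 0.0  # small arbitrary initialization
lr = 1.0

for _ in range(1000):
    for (a, c), target in data:
        y = sigmoid(w1 * a + w2 * c + b)
        grad = y - target  # gradient of cross-entropy loss w.r.t. pre-activation
        w1 -= lr * grad * a
        w2 -= lr * grad * c
        b  -= lr * grad

# Threshold at 0.5 to extract the learned hard gate.
learned = {(a, c): int(sigmoid(w1 * a + w2 * c + b) > 0.5) for (a, c), _ in data}
print(learned)  # matches the AND truth table
```

The extraction step (thresholding the converged soft unit) is what makes the learned logic inspectable: the result is a discrete rule that can be checked against every input combination.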

// PUBLICATION

Read the full research paper on Overleaf.
