This demo runs Graph Neural Networks (GNNs) entirely in your browser - no server required!
GNNs are a type of neural network designed to work with graph-structured data (nodes connected by edges),
like social networks, molecules, or knowledge graphs.
What you'll see: We create a small graph with 5 nodes and run different GNN architectures
to predict which "class" each node belongs to. This is called node classification -
a fundamental task in graph machine learning.
Backend Status
Status: Initializing...
Backend: -
WebGPU: -
WASM: -
Choose a Demo
Click on a GNN architecture to learn how it works and see it in action:
GCN - Graph Convolutional Network
The classic GNN. Each node aggregates features from its neighbors, weighted by node degree. Simple but powerful.
Kipf & Welling 2017
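In formula form, one GCN layer computes H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W): each node averages its neighbors' features (plus its own, via self-loops), normalized by degree, then applies a learned linear map. Here is a minimal TypeScript sketch of that update on small dense arrays; the function name, shapes, and dense representation are illustrative assumptions, not BrowserGNN's actual API.

```typescript
// One GCN layer on a small dense graph:
//   h_i' = ReLU( W^T * sum_{j in N(i) ∪ {i}} h_j / sqrt(d_i * d_j) )
// Illustrative sketch only, not BrowserGNN's internals.
type Vec = number[];

function gcnLayer(
  numNodes: number,
  undirectedEdges: [number, number][],
  features: Vec[], // [numNodes][inDim]
  weights: Vec[],  // [inDim][outDim]
): Vec[] {
  // Build neighbor lists with self-loops (the "A + I" trick from Kipf & Welling).
  const neighbors: number[][] = Array.from({ length: numNodes }, (_, i) => [i]);
  for (const [a, b] of undirectedEdges) {
    neighbors[a].push(b);
    neighbors[b].push(a);
  }
  const degree = neighbors.map((ns) => ns.length);

  const inDim = features[0].length;
  const outDim = weights[0].length;
  return features.map((_, i) => {
    // Symmetrically normalized neighborhood sum: sum_j h_j / sqrt(d_i * d_j).
    const agg: number[] = new Array(inDim).fill(0);
    for (const j of neighbors[i]) {
      const norm = 1 / Math.sqrt(degree[i] * degree[j]);
      for (let k = 0; k < inDim; k++) agg[k] += norm * features[j][k];
    }
    // Linear transform W followed by ReLU.
    return Array.from({ length: outDim }, (_, o) => {
      let s = 0;
      for (let k = 0; k < inDim; k++) s += agg[k] * weights[k][o];
      return Math.max(0, s);
    });
  });
}
```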
GAT - Graph Attention Network
Uses attention to learn which neighbors are most important. Different heads capture different relationship types.
Veličković et al. 2018
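The attention coefficients come from e_ij = LeakyReLU(a · [W h_i ‖ W h_j]), normalized with a softmax over each node's neighborhood. Below is a hedged TypeScript sketch of that computation for a single node and a single head; the names and shapes are assumptions for illustration, not the demo's code.

```typescript
// Single attention head over one node's neighborhood (GAT-style):
//   e_ij = LeakyReLU(a · [W*h_i || W*h_j]),  α_ij = softmax_j(e_ij)
// Illustrative sketch; vector sizes and names are assumptions.
function attentionWeights(
  hi: number[],              // transformed feature W*h_i of the center node
  neighborFeats: number[][], // transformed features W*h_j of its neighbors
  a: number[],               // learned attention vector, length = 2 * feature dim
): number[] {
  const leakyRelu = (x: number) => (x > 0 ? x : 0.2 * x);
  const scores = neighborFeats.map((hj) => {
    // Dot product of `a` with the concatenation [h_i || h_j].
    let s = 0;
    hi.forEach((v, k) => (s += a[k] * v));
    hj.forEach((v, k) => (s += a[hi.length + k] * v));
    return leakyRelu(s);
  });
  // Softmax over the neighborhood yields the attention coefficients α_ij.
  const maxScore = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - maxScore));
  const total = exps.reduce((acc, e) => acc + e, 0);
  return exps.map((e) => e / total);
}
```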
GraphSAGE
Samples and aggregates neighbor features. Great for large graphs and can generalize to unseen nodes.
Hamilton et al. 2017
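The "sample and aggregate" step for one node, using a mean aggregator, might look like the sketch below: uniformly sample a few neighbors, average their features, concatenate with the node's own features, and project. The helper name and weight shapes are assumptions, not the library's API.

```typescript
// GraphSAGE mean aggregator for a single node: sample up to `k` neighbors,
// average their features, concatenate with the node's own features, then
// apply a linear projection. Sketch only; names and shapes are assumptions.
function sageMeanStep(
  self: number[],        // the node's current feature vector
  neighbors: number[][], // feature vectors of all its neighbors
  k: number,             // neighborhood sample size
  weights: number[][],   // [2 * dim][outDim] projection matrix
): number[] {
  // Uniformly sample at most k neighbors (with replacement, for simplicity).
  const sampled = Array.from(
    { length: Math.min(k, neighbors.length) },
    () => neighbors[Math.floor(Math.random() * neighbors.length)],
  );
  const dim = self.length;
  const mean = new Array(dim).fill(0);
  for (const nf of sampled)
    for (let i = 0; i < dim; i++) mean[i] += nf[i] / sampled.length;
  // Concatenate [self || mean(neighbors)] and project to the output dimension.
  const concat = self.concat(mean);
  const outDim = weights[0].length;
  return Array.from({ length: outDim }, (_, o) => {
    let s = 0;
    for (let i = 0; i < concat.length; i++) s += concat[i] * weights[i][o];
    return s;
  });
}
```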
Performance Benchmark
Test inference speed on graphs of different sizes (10 to 500 nodes). See how BrowserGNN scales.
Speed Test
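If you want to time things yourself, a rough harness like the sketch below works in any browser. Here `runInference` is a hypothetical placeholder for whatever forward pass you want to measure, not a BrowserGNN function.

```typescript
// Rough timing harness for the benchmark idea above. `runInference` is a
// hypothetical placeholder for the forward pass being measured.
async function benchmark(
  sizes: number[],
  runInference: (numNodes: number) => Promise<void>,
): Promise<void> {
  for (const n of sizes) {
    await runInference(n); // warm-up (JIT, shader compilation, caches)
    const runs = 10;
    const t0 = performance.now();
    for (let i = 0; i < runs; i++) await runInference(n);
    const ms = (performance.now() - t0) / runs;
    console.log(`${n} nodes: ${ms.toFixed(2)} ms per inference`);
  }
}
```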
Real-World Data Demos
NEW! These demos use real datasets with actual names and ground-truth labels rather than synthetic test data. See GNNs solving real problems!
Key technique: These demos use structural feature engineering -
computing node features from graph topology (BFS distances, degree centrality) rather than arbitrary one-hot encoding.
This dramatically improves accuracy from ~47% to 92%+!
Karate Club (GCN)
Zachary's famous 1977 study: 34 karate club members split into 2 factions. Can GCN predict who went where?
Real Social Network
Karate Club (GAT)
Same dataset with Graph Attention - which friendships matter most for predicting loyalty?
Attention Analysis
Karate Club (GraphSAGE)
GraphSAGE on the karate club - sample-and-aggregate approach to community detection.
Sampling-Based
Molecule: Caffeine
Predict atom properties in a caffeine molecule. Atoms as nodes, bonds as edges!
Chemistry
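As a sketch of what "atoms as nodes, bonds as edges" means in code: each atom becomes a node with a small feature vector (here a one-hot element type) and each bond becomes an undirected edge. The atom and bond lists below are abridged placeholders, and the encoding is a common convention rather than the demo's exact one.

```typescript
// One common way to encode a molecule as a graph: atoms become nodes with
// one-hot element features, bonds become undirected edges. The atom and bond
// lists below are abridged placeholders, not the full caffeine molecule.
const ELEMENTS = ["C", "N", "O"] as const;
type Element = (typeof ELEMENTS)[number];

const atoms: Element[] = ["C", "N", "C", "N", "C", "O"];                    // abridged
const bonds: [number, number][] = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]]; // abridged

// One-hot element type per node.
const nodeFeatures = atoms.map((el) => ELEMENTS.map((e) => (e === el ? 1 : 0)));

// Store each bond in both directions so message passing is symmetric.
const edgeIndex = bonds.flatMap(([a, b]) => [[a, b], [b, a]]);
```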
Why Structural Features Work
Instead of arbitrary one-hot encoding, we compute meaningful features from graph structure:
| Feature | Formula | What It Captures |
| --- | --- | --- |
| closeness_MrHi | 1 / (1 + dist_to_node0) | How close to faction leader #1 |
| closeness_Officer | 1 / (1 + dist_to_node33) | How close to faction leader #2 |
| degree | num_friends / max_degree | How connected (bridge members) |
| bias | (d2 - d1) / (d1 + d2 + 1) | Which leader is closer (+/−) |
Result: 94%+ accuracy vs. ~47% with one-hot features!
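The features in the table above are cheap to compute: two breadth-first searches (one from each faction leader) plus the node degrees. Here is a TypeScript sketch, assuming an adjacency-list representation of the 34-node graph; it is illustrative, not the demo's actual code.

```typescript
// Computing the table's four structural features for every node, assuming an
// adjacency-list representation `adj` of the 34-node karate-club graph.
function bfsDistances(adj: number[][], source: number): number[] {
  const dist = new Array(adj.length).fill(Infinity);
  dist[source] = 0;
  const queue = [source];
  while (queue.length > 0) {
    const u = queue.shift()!;
    for (const v of adj[u]) {
      if (dist[v] === Infinity) {
        dist[v] = dist[u] + 1;
        queue.push(v);
      }
    }
  }
  return dist;
}

function structuralFeatures(adj: number[][]): number[][] {
  const d1 = bfsDistances(adj, 0);  // distances to Mr. Hi (node 0)
  const d2 = bfsDistances(adj, 33); // distances to the Officer (node 33)
  const maxDegree = Math.max(...adj.map((ns) => ns.length));
  return adj.map((ns, i) => [
    1 / (1 + d1[i]),                       // closeness_MrHi
    1 / (1 + d2[i]),                       // closeness_Officer
    ns.length / maxDegree,                 // degree (normalized)
    (d2[i] - d1[i]) / (d1[i] + d2[i] + 1), // bias: which leader is closer
  ]);
}
```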
Output Console
Welcome to BrowserGNN!
Click one of the demo buttons above to run a Graph Neural Network.
What will happen:
1. We create a graph with 5 nodes and 10 edges
2. Each node has 3 initial features (random values)
3. The GNN processes the graph through multiple layers
4. Output: probability of each node belonging to Class 0 or Class 1
This is node classification - predicting labels for nodes
based on their features AND their connections to other nodes.
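In miniature, that whole flow looks something like the sketch below: a tiny graph, random 3-dimensional node features, one neighborhood-aggregation layer, and a softmax over two classes. The edge list, random weights, and plain mean aggregator are placeholders, not the actual GCN/GAT/GraphSAGE layers the demos run.

```typescript
// A miniature version of the flow above: tiny graph, random 3-dim features,
// one neighborhood-aggregation layer, softmax over two classes.
const numNodes = 5;
const edges: [number, number][] = [[0, 1], [0, 2], [1, 2], [1, 3], [2, 4]]; // example edges
const features = Array.from({ length: numNodes }, () =>
  Array.from({ length: 3 }, () => Math.random()),
);

// Neighbor lists including self, plus a random 3x2 linear layer to class scores.
const neighbors = Array.from({ length: numNodes }, (_, i) => [i]);
for (const [a, b] of edges) {
  neighbors[a].push(b);
  neighbors[b].push(a);
}
const w = Array.from({ length: 3 }, () => [Math.random() - 0.5, Math.random() - 0.5]);

const probs = features.map((_, i) => {
  // Mean over {self ∪ neighbors}, then linear class scores, then softmax.
  const agg = [0, 0, 0];
  for (const j of neighbors[i])
    for (let k = 0; k < 3; k++) agg[k] += features[j][k] / neighbors[i].length;
  const logits = [0, 1].map((c) => agg.reduce((s, v, k) => s + v * w[k][c], 0));
  const exps = logits.map((l) => Math.exp(l));
  const z = exps[0] + exps[1];
  return [exps[0] / z, exps[1] / z]; // [P(Class 0), P(Class 1)]
});
probs.forEach((p, i) =>
  console.log(`Node ${i}: Class 0: ${p[0].toFixed(2)}, Class 1: ${p[1].toFixed(2)}`),
);
```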
Model Architecture
Select a demo to see the neural network layers used.
Waiting...
Run a demo to see model details
Input Graph
Legend: nodes (with IDs) and edges (connections)
Node Input Features (3 values per node)
Node 0: [1.0, 0.5, 0.2]
Node 1: [0.8, 0.3, 0.9]
Node 2: [0.2, 0.7, 0.4]
Node 3: [0.5, 0.1, 0.8]
Node 4: [0.9, 0.6, 0.3]
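Inputs like these are commonly stored as flat typed arrays: a row-major feature matrix plus a COO edge index. The sketch below uses the feature values listed above; the layout and the placeholder edge arrays are assumptions, not necessarily BrowserGNN's internal format.

```typescript
// Flat typed-array storage for the inputs above: a row-major feature matrix
// plus a COO edge index. A common GNN convention; the edge arrays are placeholders.
const numNodes = 5;
const featureDim = 3;

// Row-major [numNodes x featureDim] matrix holding the feature values listed above.
const x = new Float32Array([
  1.0, 0.5, 0.2, // Node 0
  0.8, 0.3, 0.9, // Node 1
  0.2, 0.7, 0.4, // Node 2
  0.5, 0.1, 0.8, // Node 3
  0.9, 0.6, 0.3, // Node 4
]);

// COO edge index: edge e goes from edgeSrc[e] to edgeDst[e].
const edgeSrc = new Int32Array([0, 1, 1, 2, 3]);
const edgeDst = new Int32Array([1, 2, 3, 4, 4]);

// Feature k of node i:
const feat = (i: number, k: number) => x[i * featureDim + k];
```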
How to Interpret Results
Understanding the Output
After running a demo, you'll see predictions like:
Node 0: Class 0: 0.62, Class 1: 0.38
This means:
62% probability Node 0 belongs to Class 0
38% probability Node 0 belongs to Class 1
The model would predict Class 0 for this node
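That last step is just an argmax over the probability row, as in this tiny sketch (variable names are illustrative):

```typescript
// Picking the predicted class is an argmax over the probability row.
const nodeProbs = [0.62, 0.38]; // [P(Class 0), P(Class 1)] for Node 0
const predictedClass = nodeProbs.indexOf(Math.max(...nodeProbs));
console.log(`Node 0 → predicted Class ${predictedClass}`); // predicted Class 0
```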
Key insight: Connected nodes tend to get similar predictions!
This is because GNNs aggregate information from neighbors.