
Discriminative vs Generative Models

Understanding the fundamental differences between discriminative and generative approaches in machine learning, their trade-offs, and when to use each.


Fundamental Differences

Discriminative Models

Learn: P(y|x) - Conditional probability

Goal: Find decision boundary between classes

Question: "What is the label given the input?"

Approach: Direct mapping from features to labels

Examples: SVM, Logistic Regression, Random Forest

Generative Models

Learn: P(x,y) - Joint probability distribution

Goal: Model the data generation process

Question: "How is this data generated?"

Approach: Model distribution, then infer labels

Examples: Naive Bayes, VAE, GAN, Hidden Markov Models
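
To make the distinction concrete, here is a minimal sketch using a toy discrete joint distribution (the table values below are made up for illustration, not learned from data). A generative model stores P(x,y) and derives P(y|x) from it; a discriminative model only ever represents the conditional.

import numpy as np

# Toy joint distribution P(x, y) over a binary feature x and a binary label y
joint = np.array([[0.30, 0.10],   # P(x=0, y=0), P(x=0, y=1)
                  [0.15, 0.45]])  # P(x=1, y=0), P(x=1, y=1)

# Generative view: everything can be derived from the joint
p_x = joint.sum(axis=1)             # P(x) = Σ_y P(x, y)
p_y_given_x = joint / p_x[:, None]  # P(y|x) = P(x, y) / P(x)

# Discriminative view: only this conditional is modeled
print("P(y|x=0):", p_y_given_x[0])  # [0.75 0.25]
print("P(y|x=1):", p_y_given_x[1])  # [0.25 0.75]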

Model Performance Comparison

Compare discriminative vs generative model performance based on your dataset characteristics.

[Interactive estimator: adjust dataset size (1K–500K samples) and feature count (10–1,000) to see estimates for expected accuracy, training time, memory usage, interpretability, data efficiency, and recommended sample count.]

Mathematical Foundation

Discriminative Approach

Objective: Learn P(y|x) directly

ŷ = argmax_y P(y|x)

For logistic regression:

P(y=1|x) = σ(wᵀx + b)

where σ(z) = 1/(1 + e⁻ᶻ)
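
A minimal sketch of this mapping (the weights w, bias b, and input x below are illustrative, not fitted):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative parameters, not learned from data
w = np.array([1.5, -0.8])
b = 0.2
x = np.array([0.4, 1.1])

p_y1 = sigmoid(w @ x + b)        # P(y=1|x) = σ(wᵀx + b)
print(f"P(y=1|x) = {p_y1:.3f}")  # the model outputs the conditional directly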

Advantages:

  • Focuses model capacity on the prediction task rather than on modeling P(x)
  • Typically higher accuracy when ample labeled data is available
  • Simpler training and inference
  • Fewer assumptions about how the inputs are distributed

Generative Approach

Objective: Learn P(x,y) then compute P(y|x)

P(y|x) = P(x,y) / P(x)

P(x,y) = P(x|y) * P(y)

For Naive Bayes:

P(x|y) = ∏ᵢ P(xᵢ|y)
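
A small sketch of how a generative classifier turns these quantities into a prediction, here with a single Gaussian feature per class (the priors and Gaussian parameters are illustrative, and SciPy is assumed to be available):

import numpy as np
from scipy.stats import norm

# Illustrative class priors P(y) and Gaussian class-conditionals P(x|y)
priors = {0: 0.6, 1: 0.4}
params = {0: (-1.0, 1.0), 1: (2.0, 1.5)}  # (mean, std) per class

x = 0.5  # a new observation

# P(x, y) = P(x|y) * P(y) for each class, then normalize by P(x)
joint = {y: norm.pdf(x, mu, sd) * priors[y] for y, (mu, sd) in params.items()}
p_x = sum(joint.values())
posterior = {y: jy / p_x for y, jy in joint.items()}

print("P(y|x=0.5):", posterior)  # classify by taking the argmax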

Advantages:

  • Can generate new data samples
  • Handles missing features naturally (by marginalizing over them)
  • Quantifies uncertainty over the inputs as well as the labels (via P(x))
  • Extends naturally to unsupervised and semi-supervised learning

Implementation Examples

Discriminative: Support Vector Machine

from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report
import numpy as np

# Generate sample data
np.random.seed(42)
X = np.random.randn(1000, 20)  # 1000 samples, 20 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # Simple decision boundary

# Split and scale data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train discriminative model (SVM)
svm = SVC(kernel='rbf', probability=True, random_state=42)
svm.fit(X_train_scaled, y_train)

# Predictions focus only on decision boundary
y_pred = svm.predict(X_test_scaled)
y_proba = svm.predict_proba(X_test_scaled)

print("Discriminative Model (SVM) Results:")
print(classification_report(y_test, y_pred))
print(f"Decision function range: {svm.decision_function(X_test_scaled).min():.2f} to {svm.decision_function(X_test_scaled).max():.2f}")

# Key insight: SVM learns decision boundary directly
# Does not model how data is generated, only how to separate classes
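
As a quick follow-up (a sketch reusing the variables defined above), we can check that the learned decision scores track the true rule x₀ + x₁ > 0 that generated the labels, even though the model never modeled P(x):

# Correlation between the SVM decision scores and the true underlying score x0 + x1
true_score = X_test[:, 0] + X_test[:, 1]
svm_score = svm.decision_function(X_test_scaled)
corr = np.corrcoef(true_score, svm_score)[0, 1]
print(f"Correlation with the true boundary score: {corr:.3f}")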

Generative: Gaussian Naive Bayes

from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import numpy as np

# Same X, y as in the discriminative example above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train generative model (Naive Bayes)
nb = GaussianNB()
nb.fit(X_train, y_train)

# Predictions based on learned distributions
y_pred = nb.predict(X_test)
y_proba = nb.predict_proba(X_test)

print("Generative Model (Naive Bayes) Results:")
print(classification_report(y_test, y_pred))

# Key insight: Model learns P(X|y) for each class
print(f"\nLearned parameters for class 0:")
print(f"Mean features: {nb.theta_[0][:5]}")  # First 5 features
print(f"Variance: {nb.var_[0][:5]}")

print(f"\nLearned parameters for class 1:")
print(f"Mean features: {nb.theta_[1][:5]}")
print(f"Variance: {nb.var_[1][:5]}")

# Generate new samples from learned distribution (generative capability)
def generate_samples(nb_model, class_label, n_samples=100):
    """Generate new samples from learned Gaussian distributions"""
    mean = nb_model.theta_[class_label]
    var = nb_model.var_[class_label]
    return np.random.normal(mean, np.sqrt(var), (n_samples, len(mean)))

# Generate synthetic data for each class
synthetic_class_0 = generate_samples(nb, 0, 50)
synthetic_class_1 = generate_samples(nb, 1, 50)

print(f"\nGenerated {len(synthetic_class_0)} samples for class 0")
print(f"Generated {len(synthetic_class_1)} samples for class 1")
print("Generative models can create new data - discriminative models cannot!")

Advanced: Variational Autoencoder (Generative)

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super(VAE, self).__init__()
        
        # Encoder network - approximates the posterior q(z|x)
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc21 = nn.Linear(hidden_dim, latent_dim)  # Mean
        self.fc22 = nn.Linear(hidden_dim, latent_dim)  # Log variance
        
        # Decoder network - learns P(x|z)
        self.fc3 = nn.Linear(latent_dim, hidden_dim)
        self.fc4 = nn.Linear(hidden_dim, input_dim)
        
    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        return self.fc21(h1), self.fc22(h1)  # mu, logvar
    
    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std
    
    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))
    
    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def loss_function(recon_x, x, mu, logvar):
    # Reconstruction loss + KL divergence
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD

# Training loop
model = VAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_vae(model, dataloader, epochs=10):
    model.train()
    for epoch in range(epochs):
        train_loss = 0
        for batch_idx, (data, _) in enumerate(dataloader):
            optimizer.zero_grad()
            recon_batch, mu, logvar = model(data)
            loss = loss_function(recon_batch, data, mu, logvar)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        
        print(f'Epoch {epoch}: Average loss: {train_loss / len(dataloader.dataset):.4f}')

# Key generative capabilities:
def generate_new_samples(model, num_samples=64):
    """Generate completely new samples from the learned distribution"""
    model.eval()
    with torch.no_grad():
        # Sample from standard normal distribution
        z = torch.randn(num_samples, 20)  # latent_dim = 20
        samples = model.decode(z)
        return samples

# Generate new data
new_samples = generate_new_samples(model, 10)
print(f"Generated {len(new_samples)} new samples from learned distribution")
print("This is the key power of generative models!")

When to Use Each Approach

Choose Discriminative When:

Primary Goal: Classification Accuracy

You want the best possible classification performance and have sufficient labeled data.

Limited Computational Resources

Training and inference need to be fast and memory-efficient.

High-Dimensional Data

Working with images, text, or other high-dimensional inputs where modeling full distribution is expensive.

Well-Defined Decision Boundaries

The task has clear separation between classes that can be learned directly.

Choose Generative When:

Need Data Generation

You need to create synthetic training data, handle data augmentation, or generate new samples.

Missing Data Problems

You frequently have missing features and need robust imputation or marginalization capabilities (see the sketch after this list).

Uncertainty Quantification

You need to understand and quantify model uncertainty for decision-making.

Unsupervised Learning

You want to discover hidden patterns or cluster data without labels.
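
A minimal sketch of the missing-data point above, using the Gaussian Naive Bayes model fitted earlier (the helper below is hypothetical, not part of scikit-learn, and the missing feature index is illustrative; SciPy is assumed): because the model stores per-feature class-conditionals, a missing feature can simply be dropped from the product ∏ᵢ P(xᵢ|y) instead of being imputed.

import numpy as np
from scipy.stats import norm

def nb_posterior_with_missing(nb_model, x, observed_idx):
    """Class posterior using only the observed features (others marginalized out)."""
    log_post = np.log(nb_model.class_prior_)
    for c in range(len(nb_model.classes_)):
        for i in observed_idx:
            log_post[c] += norm.logpdf(x[i], nb_model.theta_[c, i],
                                       np.sqrt(nb_model.var_[c, i]))
    log_post -= np.logaddexp.reduce(log_post)  # normalize in log space
    return np.exp(log_post)

# Example: feature 3 is missing, so use only the remaining features
x_partial = X_test[0]
observed = [i for i in range(X_test.shape[1]) if i != 3]
print("P(y | observed features):", nb_posterior_with_missing(nb, x_partial, observed))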

Real-World Applications

Discriminative Success Stories

ImageNet Classification

CNNs achieved superhuman accuracy by learning decision boundaries directly

Credit Card Fraud Detection

Random forest models are widely used for binary fraud classification on highly imbalanced transaction data

Medical Diagnosis

SVMs and neural networks for medical imaging have reached radiologist-level performance on specific, narrowly defined tasks

Generative Success Stories

GPT Language Models

Generate human-like text by modeling language distribution

StyleGAN Image Synthesis

Creates photorealistic faces and artwork from learned distributions

Drug Discovery

VAEs generate novel molecular structures for pharmaceutical research
