🔥 PyTorch
One of the most popular deep learning frameworks, dominant in research and increasingly used in production. Known for its dynamic computation graphs, Pythonic API, and a smooth path from research prototype to deployment.
- **77k+ GitHub Stars**: one of the most starred ML frameworks
- **50k+ Research Papers**: papers using PyTorch
- **Industry Adoption**: used by major companies such as Meta (Facebook) and Tesla
- **GPU-Native Performance**: optimized for CUDA acceleration
Tensors & Operations
PyTorch tensor fundamentals and mathematical operations
```python
# PyTorch Tensor Fundamentals
import torch

# Tensor Creation and Basic Operations
def tensor_basics():
    """Demonstrate PyTorch tensor creation and operations."""
    # Creating tensors
    x = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float32)
    y = torch.zeros(3, 4)     # 3x4 tensor of zeros
    z = torch.randn(2, 3, 4)  # Samples from a standard normal distribution

    # Device handling (CPU/GPU)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    x = x.to(device)

    print(f"Tensor x: {x}")
    print(f"Device: {x.device}")
    print(f"Shape: {x.shape}")
    print(f"Data type: {x.dtype}")

    # Mathematical operations
    a = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32)
    b = torch.tensor([[5, 6], [7, 8]], dtype=torch.float32)

    # Element-wise operations
    add_result = a + b
    mul_result = a * b

    # Matrix multiplication
    matmul_result = torch.matmul(a, b)

    # Broadcasting: the scalar is expanded to match a's shape
    scalar = 10
    broadcast_result = a + scalar

    return {
        'addition': add_result,
        'multiplication': mul_result,
        'matrix_multiplication': matmul_result,
        'broadcasting': broadcast_result,
    }

# Advanced Tensor Operations
def advanced_tensor_operations():
    """Advanced tensor manipulation and operations."""
    # Reshaping and views
    x = torch.randn(4, 6)
    reshaped = x.view(2, 12)   # Reshape 4x6 -> 2x12 (shares storage with x)
    flattened = x.flatten()    # Flatten to 1D

    # Indexing and slicing
    subset = x[0:2, 1:4]       # First 2 rows, columns 1-3

    # Concatenation and stacking
    a = torch.tensor([[1, 2], [3, 4]])
    b = torch.tensor([[5, 6], [7, 8]])
    concatenated = torch.cat([a, b], dim=0)  # Along existing rows -> 4x2
    stacked = torch.stack([a, b], dim=0)     # New leading dimension -> 2x2x2

    # Reduction operations
    sum_all = x.sum()
    sum_dim = x.sum(dim=1)     # Sum over columns: one value per row
    mean_val = x.mean()
    max_val, max_idx = x.max(dim=0)  # Column-wise maxima and their indices

    # In-place operations (memory efficient; note the trailing underscore)
    x.add_(1)  # Add 1 to every element of x in place

    return {
        'original_shape': x.shape,
        'reshaped_shape': reshaped.shape,
        'concatenated': concatenated,
        'sum': sum_all.item(),
        'mean': mean_val.item(),
    }

# Example usage
if __name__ == "__main__":
    print("=== Tensor Basics ===")
    basics = tensor_basics()
    print("\n=== Advanced Operations ===")
    advanced = advanced_tensor_operations()
```
Key Features
- ✓ Dynamic computation graphs
- ✓ GPU acceleration with CUDA
- ✓ Automatic differentiation (see the sketch below)
- ✓ Broadcasting and vectorization
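Automatic differentiation takes only a few lines to see in action: marking a tensor with `requires_grad=True` makes autograd record every operation on it, and `.backward()` then computes gradients through the recorded graph. A minimal sketch:

```python
import torch

# requires_grad=True asks autograd to record operations on x
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2 + 3 * x).sum()   # The graph is recorded as this line runs

y.backward()    # Reverse-mode automatic differentiation
print(x.grad)   # dy/dx = 2*x + 3 -> tensor([7., 9.])
```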
🌟 PyTorch Ecosystem
Core Libraries
- PyTorch Core
- TorchVision
- TorchAudio
- TorchText
Deployment
- TorchServe
- TorchScript
- ONNX Export
- Mobile (iOS/Android)
Specialized
- Lightning
- Ignite
- Captum
- FairScale
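To give a feel for how these pieces compose, the sketch below pulls a pretrained classifier from TorchVision and runs it on a dummy batch. The `weights="DEFAULT"` argument assumes torchvision 0.13 or newer (earlier releases used `pretrained=True`):

```python
import torch
from torchvision import models, transforms

# Pretrained ResNet-18 from TorchVision
# (weights="DEFAULT" needs torchvision >= 0.13; older versions: pretrained=True)
model = models.resnet18(weights="DEFAULT")
model.eval()

# Standard ImageNet preprocessing pipeline, shown for reference
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Dummy batch standing in for a preprocessed image
batch = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1))   # Predicted ImageNet class index
```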
⚖️ PyTorch vs TensorFlow

| Aspect | PyTorch | TensorFlow |
|---|---|---|
| Learning Curve | Easier, Pythonic | Steeper, more concepts |
| Computation Graph | Dynamic (define-by-run) | Eager by default since TF 2.x; static graphs via `tf.function` |
| Debugging | Native Python debugging | Improved in TF 2.x, but graph mode is harder to step through |
| Production Deployment | TorchServe, growing | TF Serving, mature |
| Research Adoption | Dominant in recent research | Strong but declining share |
| Mobile/Edge | PyTorch Mobile | TensorFlow Lite |
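The "define-by-run" row is the easiest to demonstrate in code: because PyTorch rebuilds the graph on every forward pass, ordinary Python control flow can live inside a model. The module below is a hypothetical illustration, not a recommended architecture:

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Hypothetical module: the number of layer applications depends
    on the input, something a static graph cannot express directly."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x):
        # Data-dependent loop: the graph's depth varies per call
        steps = int(x.abs().mean().item() * 3) + 1
        for _ in range(steps):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
print(net(torch.randn(2, 8)).shape)   # torch.Size([2, 8])
```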
💡 PyTorch Best Practices
Development
- ✓ Use DataLoader for efficient data loading (the training-loop sketch below ties these items together)
- ✓ Implement custom Dataset classes
- ✓ Move tensors to the GPU with .to(device)
- ✓ Use torch.no_grad() for inference
- ✓ Set model.eval() during evaluation
- ✓ Clear gradients with optimizer.zero_grad()
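The development items above fit together in a standard training loop. A minimal sketch, with a hypothetical ToyDataset whose names and sizes are purely illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Hypothetical dataset: random features with noisy linear targets."""
    def __init__(self, n=256):
        self.x = torch.randn(n, 10)
        self.y = self.x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(n, 1)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
loader = DataLoader(ToyDataset(), batch_size=32, shuffle=True)

for epoch in range(3):
    model.train()
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)   # Move each batch to the device
        optimizer.zero_grad()                   # Clear stale gradients
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

# Inference: eval mode disables dropout/batch-norm updates,
# no_grad skips gradient bookkeeping
model.eval()
with torch.no_grad():
    preds = model(torch.randn(4, 10).to(device))
```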
Production
- ✓ Use TorchScript for deployment (see the sketch after this list)
- ✓ Apply model quantization for speed
- ✓ Implement batch inference
- ✓ Monitor GPU memory usage
- ✓ Save the model's state_dict, not the entire pickled model
- ✓ Use mixed-precision training (AMP)
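Three of the production items in miniature. File paths are placeholders, and the AMP calls use the long-standing torch.cuda.amp names (newer releases expose the same API under torch.amp):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# Checkpointing: save/load the state_dict, not the pickled model object
torch.save(model.state_dict(), "model_weights.pt")      # placeholder path
model.load_state_dict(torch.load("model_weights.pt"))

# TorchScript: a serialized, Python-free format for deployment
scripted = torch.jit.script(model)
scripted.save("model_scripted.pt")                      # placeholder path

# Mixed-precision training step (requires a CUDA device)
if torch.cuda.is_available():
    model = model.cuda()
    optimizer = torch.optim.Adam(model.parameters())
    scaler = torch.cuda.amp.GradScaler()
    x = torch.randn(8, 10, device="cuda")
    y = torch.randn(8, 1, device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()   # Scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```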