# Pixels2GenAI
An open-source educational platform that builds a comprehensive pathway into AI-driven generative art, bridging mathematical and visual foundations with modern creative AI techniques. This 15-module curriculum takes learners from fundamental pixel manipulation and NumPy operations through advanced generative models, neural networks, and real-time interactive systems.
*Showcasing exercises from across all modules*
## Project at a Glance
- An approachable journey into AI-driven generative art that connects visual intuition with modern machine learning practice through 15 progressive modules.
- Materials welcome semi-beginner through semi-experienced programmers, with optional guidance for newcomers willing to self-study foundational topics.
- Each module balances mathematical ideas, NumPy techniques, and creative coding projects, so learners see how concepts translate into visuals and AI applications.
- Lessons build sequentially from image fundamentals through fractals, simulations, and generative AI, letting confidence grow alongside complexity.
- Designed for programming teachers, self-learners, artists, data scientists, and curious engineers who want memorable exercises for classes, portfolios, or passion projects, in classrooms and studios alike.
## Repository
The source code is available on GitHub:
https://github.com/burakkagann/Pixels2GenAI
Clone the repository:

```bash
git clone https://github.com/burakkagann/Pixels2GenAI.git
cd Pixels2GenAI
```
## Installation
**System Requirements:**

- Python 3.11.9 (recommended)
- Neural network modules (7+): NVIDIA GPU recommended but not required
- Diffusion models (Module 12): 8 GB RAM minimum, GPU strongly recommended
**Option 1: Using pyproject.toml (recommended)**

```bash
# Core dependencies (Modules 0-6)
pip install .

# With machine learning packages (Modules 7-13)
pip install ".[ml]"

# All optional dependencies
pip install ".[all]"
```

The extras are quoted so that shells like zsh do not treat `[ml]` as a glob pattern.
**Option 2: Using requirements.txt**

```bash
pip install -r requirements.txt
```
## Learning Modules
### Module 0: Foundations & Definitions
Setting the conceptual and technical groundwork for generative art and AI.
### Module 1: Pixel Fundamentals
Understanding images at the atomic level through color theory and manipulation patterns.
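For a flavor of what this module covers, here is a minimal NumPy sketch (not taken from the course materials) that converts a synthetic RGB image to grayscale with the standard Rec. 601 luma weights:

```python
import numpy as np

# Build a synthetic 64x64 RGB gradient "image" (H x W x 3, uint8).
rgb = np.zeros((64, 64, 3), dtype=np.uint8)
rgb[..., 0] = np.linspace(0, 255, 64, dtype=np.uint8)           # red ramp left-to-right
rgb[..., 2] = np.linspace(0, 255, 64, dtype=np.uint8)[:, None]  # blue ramp top-to-bottom

weights = np.array([0.299, 0.587, 0.114])  # Rec. 601 luma coefficients
gray = (rgb @ weights).astype(np.uint8)    # weighted sum over the color axis

print(gray.shape)  # (64, 64)
```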
- 1.1 - Grayscale & Color Basics
  - 1.1.2 - Color Theory Spaces
- 1.2 - Pixel Manipulation Patterns
  - 1.2.3 - Reaction Diffusion
- 1.3 - Structured Compositions
### Module 2: Geometry & Mathematics
Mathematical foundations for generative art through shapes, coordinates, and mathematical patterns.
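As a taste of the mathematical-art exercises, this hedged sketch (parameters `a`, `b`, `delta` are arbitrary demo choices, not the course's) samples a Lissajous curve and rasterizes it onto a small canvas:

```python
import numpy as np

# Sample a Lissajous curve: x = sin(a*t + delta), y = sin(b*t).
t = np.linspace(0, 2 * np.pi, 2000)
a, b, delta = 3, 2, np.pi / 2
x = np.sin(a * t + delta)
y = np.sin(b * t)

size = 128
canvas = np.zeros((size, size), dtype=np.uint8)
cols = ((x + 1) / 2 * (size - 1)).astype(int)  # map [-1, 1] -> pixel columns
rows = ((y + 1) / 2 * (size - 1)).astype(int)  # map [-1, 1] -> pixel rows
canvas[rows, cols] = 255                       # plot the sampled points

print(int(canvas.sum()) // 255)  # number of lit pixels
```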
- 2.1 - Basic Shapes & Primitives
  - 2.1.5 - Polygons & Polyhedra
- 2.2 - Coordinate Systems & Fields
- 2.3 - Mathematical Art
  - 2.3.1 - Lissajous Curves
  - 2.3.4 - Strange Attractors

### Module 3: Transformations & Effects
Manipulating visual data through geometric transformations, masking, and artistic filters.
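In the spirit of the masking and compositing exercises, here is a small illustrative sketch (not the course's own code) that builds a circular boolean mask from pixel coordinates and composites two synthetic layers:

```python
import numpy as np

size = 100
yy, xx = np.mgrid[0:size, 0:size]
# Boolean mask: True inside a centered circle of radius size/3.
mask = (xx - size / 2) ** 2 + (yy - size / 2) ** 2 <= (size / 3) ** 2

foreground = np.full((size, size), 200, dtype=np.uint8)  # light layer
background = np.full((size, size), 30, dtype=np.uint8)   # dark backdrop
composite = np.where(mask, foreground, background)       # per-pixel select

print(composite[50, 50], composite[0, 0])  # inside vs outside the circle
```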
- 3.1 - Geometric Transformations
- 3.2 - Masking & Compositing
  - 3.2.1 - Mask
  - 3.2.2 - Meme Generator
  - 3.2.3 - Shadow
  - 3.2.4 - Blend Modes
- 3.3 - Artistic Filters
  - 3.3.2 - Puzzle (Array Concatenation)
  - 3.3.4 - Voronoi Diagrams
- 3.4 - Signal Processing
  - 3.4.2 - Edge Detection (Sobel Operator)
  - 3.4.3 - Contour Lines
  - 3.4.4 - Fourier Art

### Module 4: Fractals & Recursion
Self-similarity and infinite complexity through classical fractals, natural patterns, and L-systems.
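A minimal escape-time sketch for a Julia set, the kind of fractal this module explores; the constant `c` and the iteration budget are illustrative choices, not the course's parameters:

```python
import numpy as np

size, max_iter = 200, 50
c = complex(-0.7, 0.27015)

re = np.linspace(-1.5, 1.5, size)
im = np.linspace(-1.5, 1.5, size)
z = re[None, :] + 1j * im[:, None]          # complex-plane grid

escape = np.zeros(z.shape, dtype=np.int32)  # iteration count at escape
alive = np.ones(z.shape, dtype=bool)        # points still bounded
for i in range(max_iter):
    z[alive] = z[alive] ** 2 + c            # iterate only surviving points
    escaped = alive & (np.abs(z) > 2)       # magnitude > 2 means escape
    escape[escaped] = i
    alive &= ~escaped

print(escape.min(), escape.max())
```

Coloring each pixel by its escape count produces the familiar fractal image.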
- 4.1 - Classical Fractals
  - 4.1.4 - Julia Sets
  - 4.1.5 - Sierpinski
- 4.2 - Natural Fractals
  - 4.2.2 - Lightning Bolts
  - 4.2.3 - Fractal Landscapes
  - 4.2.4 - Diffusion Limited Aggregation
- 4.3 - L-Systems
  - 4.3.1 - Plant Generation
  - 4.3.2 - Koch Snowflake
  - 4.3.3 - Penrose Tiling

### Module 5: Simulation & Emergent Behavior
Complex systems from simple rules: particle systems, flocking behavior, and physics simulations.
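A bare-bones particle-system sketch (particle count, time step, and damping are arbitrary demo values): Euler-integrate positions and velocities under gravity, bouncing off a floor at y = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, gravity, damping = 500, 0.02, -9.8, 0.8

pos = rng.uniform(0.5, 1.0, size=(n, 2))  # x, y start in [0.5, 1.0]
vel = rng.uniform(-1.0, 1.0, size=(n, 2))

for _ in range(200):
    vel[:, 1] += gravity * dt             # gravity acts on the y axis
    pos += vel * dt
    below = pos[:, 1] < 0                 # floor collision
    pos[below, 1] *= -1                   # reflect position back above floor
    vel[below, 1] *= -damping             # damped bounce

print(float(pos[:, 1].min()))
```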
- 5.1 - Particle Systems
  - 5.1.2 - Vortex
  - 5.1.3 - Fireworks Simulation
  - 5.1.4 - Fluid Simulation
- 5.2 - Flocking & Swarms
  - 5.2.2 - Fish Schooling
  - 5.2.3 - Ant Colony Optimization
- 5.3 - Physics Simulations
  - 5.3.1 - Bouncing Ball Animation
  - 5.3.2 - N-Body Planet Simulation
  - 5.3.4 - Cloth Rope Simulation
  - 5.3.5 - Magnetic Field Visualization
- 5.4 - Growth & Morphogenesis
  - 5.4.1 - Eden Growth Model
  - 5.4.2 - Differential Growth
  - 5.4.3 - Space Colonization Algorithm
  - 5.4.4 - Turing Patterns

### Module 6: Noise & Procedural Generation
Controlled randomness for natural effects: noise functions, terrain, textures, and wave patterns.
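Here is a hedged sketch of a two-source wave interference pattern (source positions and the wave frequency are illustrative values): sum two radial sine waves and normalize the field to an 8-bit image:

```python
import numpy as np

size, freq = 256, 0.3
yy, xx = np.mgrid[0:size, 0:size].astype(float)

sources = [(64, 64), (192, 192)]
field = np.zeros((size, size))
for sy, sx in sources:
    r = np.hypot(xx - sx, yy - sy)  # distance from each pixel to this source
    field += np.sin(freq * r)       # radial wave emanating from the source

# The sum of two unit sines lies in [-2, 2]; rescale to 0-255.
image = ((field + 2) / 4 * 255).astype(np.uint8)
print(image.shape, int(image.min()), int(image.max()))
```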
- 6.1 - Noise Functions
  - 6.1.2 - Simplex Noise
  - 6.1.3 - Worley Noise
  - 6.1.4 - Colored Noise
- 6.2 - Terrain Generation
  - 6.2.1 - Height Maps
  - 6.2.2 - Erosion Simulation
  - 6.2.3 - Cave Generation
  - 6.2.4 - Island Generation
- 6.3 - Texture Synthesis
  - 6.3.1 - Marble Wood Textures
  - 6.3.2 - Cloud Generation
  - 6.3.3 - Abstract Patterns
  - 6.3.4 - Procedural Materials
- 6.4 - Wave & Interference Patterns
  - 6.4.1 - Moire Patterns
  - 6.4.2 - Wave Interference
  - 6.4.3 - Cymatics Visualization

### Module 7: Classical Machine Learning
Traditional ML for creative applications: clustering, classification, and statistical methods.
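In the spirit of the clustering exercises, this plain-NumPy k-means sketch (the course exercises may well use scikit-learn instead) reduces a set of random RGB pixels to a small palette of representative colors:

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=(1000, 3)).astype(float)
k = 4
centroids = pixels[rng.choice(len(pixels), k, replace=False)]

for _ in range(20):
    # Assign each pixel to its nearest centroid (squared distance).
    d = ((pixels[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    # Move each centroid to the mean of its assigned pixels.
    for j in range(k):
        if (labels == j).any():
            centroids[j] = pixels[labels == j].mean(axis=0)

palette = centroids.astype(np.uint8)
print(palette.shape)  # (4, 3): four representative colors
```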
- 7.1 - Clustering & Segmentation
  - 7.1.1 - KMeans Clustering
  - 7.1.2 - Meanshift Segmentation
  - 7.1.3 - DBSCAN Pattern Detection
- 7.2 - Classification & Recognition
  - 7.2.1 - Decision Tree Classifier
  - 7.2.2 - Random Forests
  - 7.2.3 - SVM Style Detection
- 7.3 - Dimensionality Reduction
  - 7.3.1 - PCA Color Palette
  - 7.3.2 - t-SNE Visualization
  - 7.3.3 - UMAP Visualizations
- 7.4 - Statistical Methods
  - 7.4.1 - Monte Carlo Sampling
  - 7.4.2 - Markov Chains
  - 7.4.3 - Hidden Markov Models

### Module 8: Animation & Time
Adding the fourth dimension: animation fundamentals, organic motion, and cinematic effects.
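A small easing-function sketch (frame count and target value are arbitrary): cubic ease-in-out mapped over animation frames, interpolating a value from 0 to 100:

```python
import numpy as np

def ease_in_out_cubic(t):
    """t in [0, 1] -> eased progress in [0, 1]."""
    t = np.asarray(t, dtype=float)
    # Accelerate for the first half, decelerate for the second.
    return np.where(t < 0.5, 4 * t ** 3, 1 - (-2 * t + 2) ** 3 / 2)

frames = 60
t = np.linspace(0.0, 1.0, frames)
value = 100 * ease_in_out_cubic(t)  # eased trajectory, one entry per frame

print(round(float(value[0]), 3), round(float(value[-1]), 3))  # 0.0 100.0
```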
- 8.1 - Animation Fundamentals
  - 8.1.1 - Image Transformations
  - 8.1.2 - Easing Functions
  - 8.1.3 - Interpolation Techniques
  - 8.1.4 - Sprite Sheets
- 8.2 - Organic Motion
  - 8.2.1 - Flower Assembly
  - 8.2.3 - Walk Cycles
  - 8.2.4 - Breathing Pulsing
- 8.3 - Cinematic Effects
  - 8.3.3 - Particle Text Reveals
  - 8.3.4 - Morphing Transitions
- 8.4 - Generative Animation
  - 8.4.1 - Music Visualization
  - 8.4.2 - Data Driven Animation
  - 8.4.3 - Animated Fractals

### Module 9: Introduction to Neural Networks
Bridge to modern AI: neural network fundamentals, architectures, and training dynamics.
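To illustrate the from-scratch fundamentals, here is a hedged sketch of a two-layer network trained on XOR with plain gradient descent; layer sizes, learning rate, and iteration count are demo values, not the course's:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)    # hidden layer
    out = sigmoid(h @ W2 + b2)  # output in (0, 1)
    # Backpropagate the squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```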
- 9.1 - Neural Network Fundamentals
- 9.2 - Network Architectures
  - 9.2.1 - Feedforward Networks
  - 9.2.2 - Convolutional Networks Visualization
  - 9.2.3 - Recurrent Networks for Sequences
- 9.3 - Training Dynamics
  - 9.3.1 - Loss Landscape Visualization
  - 9.3.2 - Gradient Descent Animation
  - 9.3.3 - Overfitting Underfitting Demos
- 9.4 - Feature Visualization
  - 9.4.1 - DeepDream Implementation
  - 9.4.2 - Feature Map Art
  - 9.4.3 - Network Attention Visualization

### Module 10: TouchDesigner Fundamentals
Real-time visual programming: TD environment, NumPy integration, and interactive controls.
- 10.1 - TD Environment & Workflow
  - 10.1.2 - Python Integration Basics
  - 10.1.3 - Performance Monitoring
- 10.2 - Recreating Static Exercises
  - 10.2.1 - Core Exercises Realtime
  - 10.2.2 - Boids Flocking in TouchDesigner
  - 10.2.3 - Planet Simulation TD
  - 10.2.4 - Fractals Realtime
- 10.3 - NumPy to TD Pipeline
  - 10.3.1 - Script Operators
  - 10.3.2 - Array Processing
  - 10.3.3 - Custom Components
- 10.4 - Interactive Controls
  - 10.4.1 - UI Building
  - 10.4.2 - Parameter Mapping
  - 10.4.3 - Preset Systems

### Module 11: Interactive Systems
Sensors and real-time response: input devices, computer vision, and physical computing.
- 11.1 - Input Devices
  - 11.1.1 - Webcam Processing
  - 11.1.2 - Audio Reactivity
  - 11.1.3 - MIDI OSC Control
  - 11.1.4 - Kinect Leap Motion
- 11.2 - Computer Vision in TD
  - 11.2.1 - Motion Detection
  - 11.2.2 - Blob Tracking
  - 11.2.4 - Optical Flow
- 11.3 - Physical Computing
  - 11.3.1 - Arduino Integration
  - 11.3.2 - DMX Lighting Control
  - 11.3.3 - Projection Mapping Basics
- 11.4 - Network Communication
  - 11.4.1 - Multi Machine Setups
  - 11.4.2 - WebSocket WebRTC
  - 11.4.3 - Remote Control Interfaces

### Module 12: Generative AI Models
Modern generative techniques: GANs, VAEs, diffusion models, and language models for art.
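One technique shared by GANs, VAEs, and diffusion models alike is walking a generator's latent space smoothly. This sketch shows spherical linear interpolation (slerp) between two Gaussian latent vectors; the latent dimensionality of 512 (as in StyleGAN) is an illustrative choice:

```python
import numpy as np

def slerp(t, z0, z1):
    """Spherical interpolation between latent vectors z0 and z1 at t in [0, 1]."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:  # near-parallel vectors: fall back to linear interpolation
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) / so) * z0 + (np.sin(t * omega) / so) * z1

rng = np.random.default_rng(42)
z0, z1 = rng.normal(size=512), rng.normal(size=512)
path = np.stack([slerp(t, z0, z1) for t in np.linspace(0, 1, 8)])

print(path.shape)  # (8, 512): eight latents to feed a generator
```

Slerp follows the great circle between the two points, which keeps intermediate latents at norms typical of Gaussian samples; plain linear interpolation passes through lower-norm regions the generator rarely saw during training.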
- 12.1 - Generative Adversarial Networks
- 12.2 - Variational Autoencoders
- 12.3 - Diffusion Models
- 12.4 - Bridging Paradigms
  - 12.4.1 - Neural Style Transfer
  - 12.4.2 - VQ-VAE and VQ-GAN
- 12.5 - Personalization & Efficiency
- 12.6 - Transformer Generation
  - 12.6.1 - Taming Transformers
  - 12.6.2 - Diffusion Transformer (DiT)
- 12.7 - Modern Frontiers
### Module 13: AI + TouchDesigner Integration
Combining AI with real-time systems: ML models in TD, real-time effects, and hybrid pipelines.
- 13.1 - ML Models in TD
  - 13.1.1 - MediaPipe Integration
  - 13.1.2 - RunwayML Bridge
  - 13.1.3 - ONNX Runtime
- 13.2 - Real-time AI Effects
  - 13.2.1 - Style Transfer Live
  - 13.2.2 - Realtime Segmentation
  - 13.2.3 - Pose Driven Effects
- 13.3 - Generative Models Live
  - 13.3.1 - GAN Inference Optimization
  - 13.3.2 - Latent Space Navigation UI
  - 13.3.3 - Model Switching Systems
- 13.4 - Hybrid Pipelines
  - 13.4.1 - Preprocessing TD
  - 13.4.2 - Python ML Processing
  - 13.4.3 - Post Processing Chains

### Module 14: Data as Material
Information visualization and sonification: data sources, visualization techniques, and physical sculptures.
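A minimal sonification sketch (sample rate, tone length, and the 220-880 Hz range are arbitrary; writing the result to a WAV file is left out): normalize a data series and render each value as a short sine tone:

```python
import numpy as np

sample_rate, tone_seconds = 44100, 0.2
data = np.array([3.0, 7.5, 1.2, 9.9, 5.0])  # any one-dimensional data series

# Normalize data to [0, 1], then map onto 220-880 Hz (two octaves).
norm = (data - data.min()) / (data.max() - data.min())
freqs = 220 + norm * (880 - 220)

t = np.linspace(0, tone_seconds, int(sample_rate * tone_seconds), endpoint=False)
audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

print(audio.shape, float(freqs.min()), float(freqs.max()))
```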
- 14.1 - Data Sources
  - 14.1.1 - APIs and Data Scraping
  - 14.1.2 - Sensor Networks
  - 14.1.3 - Social Media Streams
  - 14.1.4 - Environmental Data
- 14.2 - Visualization Techniques
  - 14.2.1 - Network Graphs
  - 14.2.2 - Flow Visualization
  - 14.2.3 - Multidimensional Scaling
  - 14.2.4 - Time Series Art
- 14.3 - Sonification
  - 14.3.1 - Data Sound Mapping
  - 14.3.2 - Granular Synthesis
  - 14.3.3 - Rhythmic Patterns
- 14.4 - Physical Data Sculptures
  - 14.4.1 - 3D Printing Preparation
  - 14.4.2 - Laser Cutting Patterns
  - 14.4.3 - CNC Toolpaths

### Module 15: Capstone Project - Eternal Flow
A synthesis of all learned concepts: a StyleGAN-based, continuously evolving Ebru marbling artwork for projection display.