# Functional Programming & Language Theory

The theory behind functional programming and how it shapes language design.

## Overview

Functional programming treats computation as the evaluation of mathematical functions, avoiding mutable state and side effects. Understanding its theoretical underpinnings — lambda calculus, higher-order functions, lazy evaluation, and the connection to logic — illuminates why entire families of languages are designed the way they are, and why FP ideas keep infiltrating mainstream languages.

This topic sits at the intersection of practical software design (why FP reduces complexity) and deep theory (Curry-Howard, type systems, formal semantics).

## Why It Matters

- **Complexity management** — Mutable state is the root of most accidental complexity; FP limits it by default
- **Modularity** — Higher-order functions and lazy evaluation act as "glue" for composing programs, enabling forms of modularity that are awkward to express in conventional object-oriented designs
- **Foundation for formal methods** — Pure functions are far easier to reason about and verify formally
- **Language literacy** — Haskell, OCaml, Clojure, Elm, and features in Rust/Go/Python/Java all draw from this tradition
- **Paradigm fluency** — Understanding declarative, concurrent, relational, and constraint paradigms in a unified framework

## Key Concepts

- **Lambda calculus** — The mathematical foundation; most FP languages are elaborations of a (typed) lambda calculus
- **Higher-order functions** — Functions as first-class values; map, filter, fold as modularity primitives
- **Lazy evaluation** — Separating generation from consumption; infinite structures
- **Referential transparency** — Replacing an expression with its value never changes a program's meaning
- **Algebraic data types** — Sums and products as the basis for data modeling
- **Currying and partial application** — Transforming multi-argument functions
- **Monads and effects** — Managing side effects without abandoning purity
- **Curry-Howard correspondence** — Programs are proofs; types are propositions
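
A few of these concepts are concrete enough to sketch in code. First, higher-order functions and currying, shown here in Python (chosen only for familiarity; `power` and `square` are illustrative names, not from any source in this list):

```python
from functools import reduce, partial

# Higher-order functions: map, filter, and fold (reduce) as modularity primitives.
nums = [1, 2, 3, 4, 5]
squares = list(map(lambda x: x * x, nums))        # each element squared
evens = list(filter(lambda x: x % 2 == 0, nums))  # keep only even elements
total = reduce(lambda acc, x: acc + x, nums, 0)   # fold the list down to a sum

# Currying / partial application: fix some arguments to obtain a new function.
def power(base, exponent):
    return base ** exponent

square = partial(power, exponent=2)
print(squares, evens, total, square(7))
```

Python only approximates currying via `functools.partial`; in ML-family languages every function is curried by construction.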
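
Lazy evaluation's "separate generation from consumption" idea can be approximated with generators, which are lazy sequences even in an otherwise strict language (a sketch, not the mechanism Haskell actually uses):

```python
from itertools import islice

# An infinite structure: the producer never terminates on its own.
def fibs():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# The consumer decides how much of the stream to force.
first_ten = list(islice(fibs(), 10))
print(first_ten)
```

The generator describes *all* Fibonacci numbers; only the consumer's demand (`islice`) determines how much work happens.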
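
Algebraic data types as sums of products can also be sketched: `Shape` below is a sum type with two product-type constructors, consumed by exhaustive pattern matching (Python 3.10+; the `Circle`/`Rect` names are invented for illustration):

```python
import math
from dataclasses import dataclass
from typing import Union

# Product types: each constructor bundles a fixed set of fields.
@dataclass(frozen=True)
class Circle:
    radius: float

@dataclass(frozen=True)
class Rect:
    width: float
    height: float

# Sum type: a Shape is a Circle OR a Rect, nothing else.
Shape = Union[Circle, Rect]

def area(shape: Shape) -> float:
    # Structural pattern matching destructures each case.
    match shape:
        case Circle(radius=r):
            return math.pi * r * r
        case Rect(width=w, height=h):
            return w * h
    raise TypeError("not a Shape")

print(area(Rect(3.0, 4.0)))
```

In Haskell or OCaml the compiler checks exhaustiveness of the match; here that guarantee is only by convention.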
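
Finally, the monad idea of sequencing effectful steps without abandoning purity can be sketched with a minimal Maybe monad — a toy written for this note, not a standard library API:

```python
# A minimal Maybe monad: chain computations that may fail,
# without scattering None checks through the caller.
class Maybe:
    def __init__(self, value, is_just):
        self.value = value
        self.is_just = is_just

    @staticmethod
    def just(value):
        return Maybe(value, True)

    @staticmethod
    def nothing():
        return Maybe(None, False)

    def bind(self, f):
        # The monadic "and then": Nothing short-circuits the whole chain.
        return f(self.value) if self.is_just else self

def safe_div(x, y):
    return Maybe.nothing() if y == 0 else Maybe.just(x / y)

ok = Maybe.just(10).bind(lambda x: safe_div(x, 2))      # Just 5.0
failed = ok.bind(lambda x: safe_div(x, 0))              # Nothing
print(ok.is_just, failed.is_just)
```

The failure check lives in `bind` once, rather than at every call site — the same factoring that `Maybe`/`Option` types provide in Haskell, OCaml, and Rust.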

## Subtopics

- Lambda calculus and the foundations of computation
- Hindley-Milner type inference
- Lazy evaluation and codata
- Monads, applicatives, and the algebra of effects
- The Curry-Howard isomorphism

## Questions

- What does "Out of the Tar Pit" prescribe vs. what does industry actually adopt?
- How do monads relate to algebraic effects — which model is better?
- Why has lazy-by-default evaluation stayed out of mainstream languages despite its theoretical elegance?
- How far can you push the Curry-Howard correspondence in practice?
- What paradigms from CTM haven't yet made it into mainstream languages?

## Connections

- [Type Theory](../type-theory/index.md) — FP and type theory are deeply intertwined; Curry-Howard connects them to logic
- [Formal Methods](../formal-methods/index.md) — Pure functions are tractable for formal verification; Dijkstra's derivation style
- [Systems Programming](../systems-programming/index.md) — Rust adopts affine types and functional patterns for safety

## Recommended Reading Path

*From the learning plan — roughly ordered from accessible to theoretical:*

1. [Out of the Tar Pit](out-of-the-tar-pit.md) — Motivates why state is the core complexity problem
2. [Why Functional Programming Matters](why-fp-matters.md) — HOFs and lazy eval as modularity tools
3. [Propositions as Types](../type-theory/propositions-as-types.md) — The Curry-Howard bridge to logic
4. [TAPL](../type-theory/tapl.md) — The formal type-system foundation
5. Then: [SICP](sicp.md) and [CTM](ctm.md) for broader paradigm understanding

## Notes

*Working understanding to be added after engagement with material*

---

**Status**: `#not-started`
**See also**: [Resources](RESOURCES.md)
