Purpose
I like to imagine a single-threaded UI application running on a single CPU core.
Anytime we need to do a long-running computation, the UI becomes completely unresponsive until that work finishes and frees up the single thread.
The whole point of concurrency is to define a set of rules for safely managing overlapping streams of work, so long-running tasks can make progress off the main thread while the UI stays responsive.
Concurrency vs. Parallelism
Concurrency is like a single chef switching between cooking multiple dishes at once.
Parallelism is like having two chefs cooking two separate dishes at the same time.
Each CPU core can run one thread at a time, so true parallelism requires multiple cores. However, systems like GCD and Swift Concurrency can also interleave many units of work on fewer threads than there are tasks, giving the illusion of simultaneous progress even on a single core.
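The chef analogy can be sketched in Swift itself. In this small example (the `chop` function and `Kitchen` type are names invented here), `async let` starts two child tasks; each suspends at its `await` point instead of blocking, so the runtime is free to interleave them on however many cores it has, possibly just one.

```swift
// Cooperative concurrency: suspension points let one thread interleave work.
func chop(_ dish: String) async -> String {
    // Task.sleep suspends the task without blocking the underlying thread.
    try? await Task.sleep(nanoseconds: 100_000_000)
    return "\(dish) chopped"
}

@main
struct Kitchen {
    static func main() async {
        // async let starts both child tasks; the "chef" switches between them.
        async let pasta = chop("pasta")
        async let salad = chop("salad")
        // Awaiting collects the results in declaration order.
        print(await [pasta, salad])
    }
}
```

Whether this runs on one core or two is up to the runtime; concurrency only promises that both dishes make progress, not that they cook in parallel.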
What's wrong with GCD?
GCD (Grand Central Dispatch) is a low-level concurrency framework that uses dispatch queues to schedule work onto a shared pool of system-managed threads.
Here’s my (possibly naive) understanding:
- Each dispatch queue doesn't "own" threads; instead, GCD schedules tasks from multiple queues onto a global thread pool.
- If many tasks are dispatched simultaneously (especially blocking tasks), GCD may spin up more threads than there are CPU cores, leading to thread explosion.
- GCD doesn't optimize for task dependencies, which means:
  - Engineers must carefully design queue usage to avoid blocking and contention.
  - It's easy to accidentally oversubscribe the system with too much concurrent work.
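The thread-explosion hazard above can be demonstrated with a few lines of GCD. This is a sketch, not a benchmark: each closure blocks its thread with a real sleep, so GCD has to keep growing the pool to keep the CPU busy, far past the core count.

```swift
import Dispatch
import Foundation

let group = DispatchGroup()

// Dispatch 64 *blocking* tasks onto the global concurrent queue.
for i in 0..<64 {
    DispatchQueue.global().async(group: group) {
        // Thread.sleep blocks a real pool thread for the full duration,
        // unlike Task.sleep in Swift Concurrency, which merely suspends.
        Thread.sleep(forTimeInterval: 0.25)
        print("task \(i) finished")
    }
}

// 64 blocked threads for ~0.25s each: GCD may create dozens of threads
// on a machine with only a handful of cores.
group.wait()
```

Replacing the blocking sleep with non-blocking work (or using a serial queue) avoids the explosion, which is exactly the kind of careful design GCD pushes onto the engineer.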
The Swift Concurrency Model
Swift Concurrency flips the whole GCD model from a procedural paradigm to a declarative one.
Instead of manually managing isolation and hand-tuning how tasks depend on one another, engineers focus on annotating their code correctly and let Swift Concurrency handle the optimization and safety.
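A minimal sketch of what "annotating instead of managing" looks like (the `ViewModel` and `loadGreeting` names are invented here): `@MainActor` declares *where* the mutable state lives, `async`/`await` declares *where* suspension can happen, and the runtime schedules everything without any queues in sight.

```swift
@MainActor
final class ViewModel {
    // Isolated to the main actor; the compiler enforces safe access.
    private(set) var greeting = "idle"

    func refresh() async {
        // Suspends without blocking the main thread, so the UI stays responsive.
        greeting = await loadGreeting()
    }
}

// Not actor-isolated: runs on the cooperative thread pool, off the main actor.
func loadGreeting() async -> String {
    try? await Task.sleep(nanoseconds: 50_000_000)
    return "hello"
}

@main
struct Demo {
    static func main() async {
        let vm = await ViewModel()   // hop to the main actor to construct it
        await vm.refresh()
        print(await vm.greeting)
    }
}
```

Compare this with the GCD version, where the same behavior would require choosing queues, calling `DispatchQueue.main.async` for the UI hop, and trusting convention rather than the compiler to keep state access safe.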