Not every differential equation gives up its secrets to pencil-and-paper methods. Some are too complicated to solve exactly; others have solutions so messy they’re practically useless. That’s where numerical methods come in—techniques for building approximate solutions when analytic ones aren’t possible.
It seems relatively tame, but the nonlinear term \(y^2\) makes it substantially harder to solve. An exact solution to (24) does exist, but it isn't pretty, as you can see below.
In this chapter, we’ll explore what it means to approximate a solution and why doing so is often the only practical choice. We’ll start with Euler’s method, a simple but powerful idea that moves step by step along the solution curve. From there, we’ll peek ahead to more advanced algorithms, like the Runge–Kutta methods, which computers use to tackle even tougher problems.
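To preview the idea, here is a minimal sketch of Euler's method in Python. The function name `euler` and the sample equation \(y' = y\) are illustrative choices, not taken from the text; the method itself simply follows the tangent line of the solution curve over many small steps.

```python
def euler(f, t0, y0, t_end, n):
    """Approximate y(t_end) for the initial value problem
    y' = f(t, y), y(t0) = y0, using n Euler steps of equal size."""
    h = (t_end - t0) / n       # step size
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)       # move along the tangent line at (t, y)
        t += h
    return y

# Illustrative example: y' = y with y(0) = 1, whose exact solution is e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(approx)  # close to e ≈ 2.71828; the gap shrinks as n grows
```

Taking more (smaller) steps generally improves the approximation, at the cost of more computation; the Runge–Kutta methods mentioned above refine this basic step-by-step idea.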
Along the way, we’ll confront an important truth: many differential equations simply cannot be expressed in neat “closed form.” But we’re not helpless—we can “trade” the exact solution for an approximate one that’s good enough for analysis, prediction, and real-world applications. By the end of this chapter, you’ll understand the thinking behind numerical methods and be ready to start building these approximations yourself.