Boring work of staggering effectiveness

Right along the lines of super unexciting infrastructure fixes to crucial bridges, railways, pipes and water mains is the capping of methane-spewing oil wells, of which we have a leaky and abundant surplus:

Curtis Shuck calls the well a “super emitter,” one of many in a wheat field not far from the Canadian border, a part of Montana known as the “golden triangle” for its bountiful crops. Aside from the scattered rusty pipes and junked oil tanks, the field is splendid and vast, its horizon interrupted intermittently by power lines and grain bins. On these plains, Shuck says, you can watch your dog run away for a week.

He is a former oil and gas executive who nowadays leads a small nonprofit — the result of a personal epiphany — and is tackling global warming one well at a time. That is the approach of his Well Done Foundation, plugging this and then other orphaned sites and trapping the methane underground. The effort started in Montana in 2019 but will expand to other states before the fall.

“When we’re done, it will be like this well was never here,” Shuck said, standing upwind as cement was pumped hundreds of feet down, through a series of pipes stuck in the 7½-inch-wide hole like a straw in a juice box.

30K to cap a well. Well done, Well Done. Plant trees, install solar farms, wind farms, stop dumping sewage, limit runoff, cut back on steaks (sorry! but do), refurbish the train lines, live close to work. Listen to ‘Trane while you walk. Live a little.

What’s it going to take? All of it, every last all of it. Everything.

Image: Abandoned oil storage tanks left behind in Montana. (Adrián Sanchez-Gonzalez for The Washington Post)

Tanker blinkers

It is very difficult to report on climate change. It is even difficult to write about reporting on climate change. For example:

The NYT Climate and Environment page right now has these as its stories:

Fossil Fuels Are to Blame for Soaring Methane Levels, Study Shows

Bezos Commits $10 Billion to Address Climate Change

Both are serious stories and neither can be taken as straight news, as they scream out for flame and snark – not even looking at you, Twitter. But it points up the challenge of treating climate developments as new when they have existed for more than a decade and are only now being admitted into polite, gray lady discourse. The very idea that plutocratic climate funds are any kind of answer to anything is almost as ludicrous as the story a little farther down the page about damming the North Sea to combat sea level rise. I’m sure they meant the other ‘damning,’ and perhaps could have used them interchangeably.

This is not [only] a complaint. That these stories are being reported out, written, and published is something – it’s just an incomplete something. We probably need to cross-reference these stories to get a more accurate picture. True multi-media. Bezos’ billions could go to greenlight feature films of stories about what’s happening. You can’t turn the tanker without starting to turn. The. Tanker.


(Bringing) Order to Disorder

The 2010 Fields Medals were carelessly handed out yesterday, in an utterly random fashion – I think they drew the names out of a hat. The only requirements for the controversial prize are that winners be under forty years old and demonstrate some unquestionably innovative mathematical calculation that fundamentally alters our understanding of the world.

Take this winner, for instance, Cedric Villani of France, who calculated the rate at which entropy is increasing – there seems to be some sort of throttle on the rate at which the world is falling apart.

Cedric Villani works in several areas of mathematical physics, and particularly in the rigorous theory of continuum mechanics equations such as the Boltzmann equation.

Imagine a gas consisting of a large number of particles traveling at various velocities. To begin with, let us take a ridiculously oversimplified discrete model and suppose that there are only four distinct velocities that the particles can be in, namely $v_1, v_2, v_3, v_4$. Let us also make the homogeneity assumption that the distribution of velocities of the gas is independent of the position; the distribution of the gas at any given time $t$ can then be described by four densities $f(t,v_1), f(t,v_2), f(t,v_3), f(t,v_4)$ adding up to $1$, which describe the proportion of the gas that is currently traveling at velocity $v_1$, etc.

If there were no collisions between the particles that could transfer velocity from one particle to another, then all the quantities $f(t,v_i)$ would be constant in time: $\frac{\partial}{\partial t} f(t,v_i) = 0$. But suppose that there is a collision reaction that can take two particles traveling at velocities $v_1, v_2$ and change their velocities to $v_3, v_4$, or vice versa, and that no other collision reactions are possible. Making the heuristic assumption that different particles are distributed more or less independently in space for the purposes of computing the rate of collision, the rate at which the former type of collision occurs will be proportional to $f(t,v_1) f(t,v_2)$, while the rate at which the latter type of collision occurs is proportional to $f(t,v_3) f(t,v_4)$. This leads to equations of motion such as

$$ \frac{\partial}{\partial t} f(t,v_1) = \kappa \left( f(t,v_3) f(t,v_4) - f(t,v_1) f(t,v_2) \right) $$

for some rate constant $\kappa > 0$, and similarly for $f(t,v_2)$, $f(t,v_3)$, and $f(t,v_4)$. It is interesting to note that even in this simplified model, we see the emergence of an “arrow of time”: the rate of a collision is determined by the density of the initial velocities rather than the final ones, and so the system is not time reversible, despite being a statistical limit of a time-reversible collision from the velocities $v_1, v_2$ to $v_3, v_4$ and vice versa.
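The four-velocity model is simple enough to simulate directly. Here is a minimal sketch (the rate constant, step size, and initial densities below are arbitrary choices, not anything from the text) that integrates the equations of motion with forward Euler and checks that total mass is conserved while the collision products equilibrate:

```python
# Toy four-velocity collision model: the only reaction exchanges
# (v1, v2) <-> (v3, v4).  kappa and dt are hypothetical choices.
def step(f, kappa=1.0, dt=1e-3):
    f1, f2, f3, f4 = f
    r = kappa * (f3 * f4 - f1 * f2)   # net rate of (v3,v4) -> (v1,v2)
    return [f1 + dt * r, f2 + dt * r, f3 - dt * r, f4 - dt * r]

f = [0.7, 0.2, 0.05, 0.05]            # initial velocity distribution
for _ in range(200_000):
    f = step(f)

print(sum(f))                          # total mass stays 1
print(abs(f[0] * f[1] - f[2] * f[3]))  # equilibrium: f1 f2 = f3 f4
```

At equilibrium the forward and reverse collision rates balance, which is exactly the condition $f(t,v_1) f(t,v_2) = f(t,v_3) f(t,v_4)$.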

To take a less ridiculously oversimplified model, now suppose that particles can take a continuum of velocities, but we still make the homogeneity assumption that the velocity distribution is independent of position, so that the state is now described by a density function $f(t,v)$, with $v$ now ranging continuously over ${\bf R}^3$. There are now a continuum of possible collisions, in which two particles of initial velocity $v', v'_*$ (say) collide and emerge with velocities $v, v_*$. If we assume purely elastic collisions between particles of identical mass $m$, then we have the law of conservation of momentum

$$ m v' + m v'_* = m v + m v_* $$

and conservation of energy

$$ \frac{1}{2} m |v'|^2 + \frac{1}{2} m |v'_*|^2 = \frac{1}{2} m |v|^2 + \frac{1}{2} m |v_*|^2 $$

Some simple Euclidean geometry shows that the pre-collision velocities $v', v'_*$ must be related to the post-collision velocities $v, v_*$ by the formulae

$$ v' = \frac{v+v_*}{2} + \frac{|v-v_*|}{2} \sigma, \qquad v'_* = \frac{v+v_*}{2} - \frac{|v-v_*|}{2} \sigma \qquad (1) $$

for some unit vector $\sigma \in S^2$. Thus a collision can be completely described by the post-collision velocities $v, v_* \in {\bf R}^3$ and the pre-collision direction vector $\sigma \in S^2$; assuming Galilean invariance, the physical features of this collision can in fact be described just using the relative post-collision velocity $v - v_*$ and the pre-collision direction vector $\sigma$. Using the same independence heuristics used in the four-velocity model, we are then led to the equation of motion
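Formula (1) is easy to sanity-check numerically. The following sketch (the particular vectors and random seed are arbitrary choices for illustration) reconstructs the pre-collision velocities from $v$, $v_*$, and a unit vector $\sigma$, then verifies that momentum and kinetic energy are conserved, as the derivation above requires:

```python
import math
import random

# Elastic-collision parametrization (1): given post-collision velocities
# v, v_star and a unit vector sigma, recover the pre-collision pair.
# The common mass m cancels out of both conservation checks.
def pre_collision(v, v_star, sigma):
    center = [(a + b) / 2 for a, b in zip(v, v_star)]   # (v + v_*)/2
    radius = math.dist(v, v_star) / 2                    # |v - v_*|/2
    v_prime = [c + radius * s for c, s in zip(center, sigma)]
    v_star_prime = [c - radius * s for c, s in zip(center, sigma)]
    return v_prime, v_star_prime

random.seed(0)
v, v_star = [1.0, -2.0, 0.5], [0.0, 3.0, -1.0]
s = [random.gauss(0, 1) for _ in range(3)]
norm = math.sqrt(sum(x * x for x in s))
sigma = [x / norm for x in s]            # random point on S^2

vp, vsp = pre_collision(v, v_star, sigma)
# momentum: v' + v'_* - v - v_* should be ~0 componentwise
print([a + b - c - d for a, b, c, d in zip(vp, vsp, v, v_star)])
# energy: |v'|^2 + |v'_*|^2 - |v|^2 - |v_*|^2 should be ~0
print(sum(a * a for a in vp) + sum(a * a for a in vsp)
      - sum(a * a for a in v) - sum(a * a for a in v_star))
```

Both residuals vanish up to floating-point rounding, for any choice of $\sigma$, which is the point: (1) sweeps out the full two-parameter family of collisions compatible with the conservation laws.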

$$ \frac{\partial}{\partial t} f(t,v) = Q(f,f)(t,v) $$

where $Q(f,f)$ is the quadratic expression

$$ Q(f,f)(t,v) := \int_{{\bf R}^3} \int_{S^2} \left( f(t,v') f(t,v'_*) - f(t,v) f(t,v_*) \right) B(v - v_*, \sigma) \, dv_* \, d\sigma $$

for some Boltzmann collision kernel $B(v - v_*, \sigma) > 0$, which depends on the physical nature of the hard spheres and needs to be specified as part of the dynamics. Here of course $v', v'_*$ are given by (1).

If one now allows the velocity distribution to depend on position $x \in \Omega$ in a domain $\Omega \subset {\bf R}^3$, so that the density function is now $f(t,x,v)$, then one has to combine the above equation with a transport equation, leading to the Boltzmann equation

$$ \frac{\partial}{\partial t} f + v \cdot \nabla_x f = Q(f,f), $$

together with some boundary conditions on the spatial boundary $\partial \Omega$ that will not be discussed here.

One of the most fundamental facts about this equation is the Boltzmann H theorem, which asserts that (given sufficient regularity and integrability hypotheses on $f$, and reasonable boundary conditions) the $H$-functional

$$ H(f)(t) := \int_{{\bf R}^3} \int_\Omega f(t,x,v) \log f(t,x,v) \, dx \, dv $$

is non-increasing in time, with equality if and only if the density function $f$ is Gaussian in $v$ at each position $x$ (with the mass, mean, and variance of the Gaussian distribution allowed to vary in $x$). Such distributions are known as Maxwellian distributions.

From a physical perspective, $H$ is the negative of the entropy of the system, so the H theorem is a manifestation of the second law of thermodynamics, which asserts that the entropy of a system is non-decreasing in time, thus clearly demonstrating the “arrow of time” mentioned earlier.
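A discrete analogue of the H theorem can be watched in action in the toy four-velocity model from earlier: take $H(f) = \sum_i f_i \log f_i$ and track it along the dynamics. A minimal sketch (rate constant, step size, and initial data are again arbitrary choices):

```python
import math

# Discrete analogue of the H functional: H(f) = sum_i f_i log f_i.
# The H theorem predicts this is non-increasing along the dynamics.
def H(f):
    return sum(x * math.log(x) for x in f if x > 0)

# Same toy four-velocity dynamics as before; kappa and dt are arbitrary.
def step(f, kappa=1.0, dt=1e-3):
    f1, f2, f3, f4 = f
    r = kappa * (f3 * f4 - f1 * f2)
    return [f1 + dt * r, f2 + dt * r, f3 - dt * r, f4 - dt * r]

f = [0.4, 0.4, 0.1, 0.1]
values = [H(f)]
for _ in range(10_000):
    f = step(f)
    values.append(H(f))

# H should fall monotonically toward its equilibrium (Maxwellian) value
print(all(b <= a + 1e-12 for a, b in zip(values, values[1:])))
print(values[0], values[-1])
```

The monotonicity can also be checked by hand here: $\frac{d}{dt} H = r \log \frac{f_1 f_2}{f_3 f_4}$ with $r = \kappa (f_3 f_4 - f_1 f_2)$, and the two factors always have opposite signs, so $\frac{d}{dt} H \le 0$, with equality exactly at equilibrium.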

There are considerable technical issues in ensuring that the derivation of the H theorem is actually rigorous for reasonable regularity hypotheses on $f$ (and on $B$), in large part due to the delicate and somewhat singular nature of “grazing collisions”, when the pre-collision and post-collision velocities are very close to each other. Important work was done by Villani and his co-authors on resolving this issue, but this is not the result I want to focus on here. Instead, I want to discuss the long-time behaviour of the Boltzmann equation.

As the $H$ functional always decreases until a Maxwellian distribution is attained, it is reasonable to conjecture that the density function $f$ must converge (in some suitable topology) to a Maxwellian distribution. Furthermore, even though the $H$ theorem allows the Maxwellian distribution to be non-homogeneous in space, the transport aspects of the Boltzmann equation should serve to homogenise the spatial behaviour, so that the limiting distribution should in fact be a homogeneous Maxwellian. In a remarkable 72-page tour de force, Desvillettes and Villani showed that (under some strong regularity assumptions) this was indeed the case, and furthermore the convergence to the Maxwellian distribution was quite fast, in fact faster than any polynomial rate of decay. Remarkably, this was a large data result, requiring no perturbative hypotheses on the initial distribution (although a fair amount of regularity was needed). As is usual in PDE, large data results are considerably more difficult due to the lack of applicable perturbative techniques; instead, one has to rely primarily on such tools as conservation laws and monotonicity formulae. One of the main tools used here is a quantitative version of the H theorem (also obtained by Villani), but this is not enough; the quantitative bounds on entropy production given by the H theorem involve quantities other than the entropy, for which further equations of motion (or more precisely, differential inequalities on their rate of change) must be found, by means of various inequalities from harmonic analysis and information theory. This ultimately leads to a finite-dimensional system of ordinary differential inequalities that control all the key quantities of interest, which must then be solved to obtain the required convergence.

Gee… talk about your run-of-the-mill finite-dimensional systems of ordinary differential inequalities. I mean, tell us something we don’t know, Monsieur medal winner.