A Financial Choice, Act II

In early September 2008, I drove down to Charleston to visit a cousin who had recently suffered a terrible accident. Throughout the drive I listened to extended public radio reports on an evolving calamity: the collapse of Lehman Brothers financial services firm. The horror that the government was going to allow such a large firm to go under was decorated with the baroque gadgetry of terms that would become more familiar in the coming years: credit default swaps, subprime mortgage lending, tranches, CDOs. The gore and detail of the cover that had been constructed around scams and fraud at the broadest level was audible in the voices of interviewers and guests. There was a tinge of disbelief within their attempts to explain what these terms meant and how they had gotten us all (!) into so much peril. It was as close to 1929 as we had come and potentially far worse – so extensively had the giant vampire squid of financial engineering welded its tentacles to every sector. Housing, banking, investing, construction, debt, bonds… this is business America now, and every other activity is vulnerable to its caprice. It was the stretch run of a presidential election as well; one candidate tried to suspend the campaign, the other fortunately tried to hold things together.

And he did manage to hold things together, despite the (at the time) rather obvious challenges he personally faced. But the Lehman moment got everyone’s attention, everyone who mattered. $700 billion for the Troubled Asset Relief Program (TARP), $250 billion for the Capital Purchase Program (CPP), in addition to billions more in government-backed guarantees to individual banks. And eventually, in July 2010, the Dodd-Frank Wall Street Reform and Consumer Protection Act was enacted. It seemed the public assistance required to save the vampire from itself had sealed the argument in favor of financial reform.

Yesterday, the Republican House of Representatives passed the Financial Choice Act and can you guess what it does? Right! It overturns Dodd-Frank. And not only is it a bad idea to weaken a law that requires stronger banks:

The bill also offers the wrong kind of relief. During the last crisis, all kinds of financial activity — including insurance, money-market funds and speculative trading at banks — depended on government support. That’s why Dodd-Frank placed limits on banks’ trading operations and provided added oversight for all systemically important institutions, and why regulators require them to have enough cash on hand to survive a panic.
Those provisions aren’t perfect — simpler and more effective options exist — but the Choice Act just scraps them. What’s more, it would eliminate the Office of Financial Research, created to give regulators the data they need to see what’s going on in markets and institutions. The law would leave regulators in the dark, and taxpayers implicitly or explicitly backing much of the financial sector.

If you didn’t click, that’s coming from fcking Bloomberg. The financial industry doesn’t even think it’s a good idea. In trying to undo more Obama-era legislation, the know-nothings in Congress are re-setting the table for our financial catastrophe guests. Sure, certain things could make Dodd-Frank unnecessary. But unfortunately, none of the thousands of people, firms, funds and frauds who populate this sector care about a stronger financial system or its being more competitive. It’s the logic of business – the democracy, whiskey, sexy of fools.
Image: Detail from The Garden of Earthly Delights, by Hieronymus Bosch, ca. 1500

Holy Chapter 11, Batman!

“That’s right, Robin.”
The Affordable Care Act has driven down personal bankruptcies by 50% since 2010:

As legislators and the executive branch renew their efforts to repeal and replace the Affordable Care Act this week, they might want to keep in mind a little-known financial consequence of the ACA: Since its adoption, far fewer Americans have taken the extreme step of filing for personal bankruptcy.

Filings have dropped about 50 percent, from 1,536,799 in 2010 to 770,846 in 2016 (see chart, below). Those years also represent the time frame when the ACA took effect. Although courts never ask people to declare why they’re filing, many bankruptcy and legal experts agree that medical bills had been a leading cause of personal bankruptcy before public healthcare coverage expanded under the ACA. Unlike other causes of debt, medical bills are often unexpected, involuntary, and large.

Emphasis added. There’s no way to be flippant about this. President Obama took a beating over this for years. Congresspeople lost their jobs, and knew they would, for voting for it. Still, they did it. This is what the whole thing is about – all the politicking, all the voting, all the clever name-calling and ratfcking. It mobilized the entire right-wing firmament because they are expressly against this. This!
This is socialism?

(Bringing) Order to Disorder

The 2010 Fields Medals were carelessly handed out yesterday, in an utterly random fashion – I think they drew the names out of a hat. The only requirements for the controversial prize are that winners be under forty years old and demonstrate some unquestionably innovative mathematical calculation that fundamentally alters our understanding of the world.

Take this winner, for instance, Cedric Villani of France, who calculated the rate at which entropy is increasing – there seems to be some sort of throttle on the rate at which the world is falling apart.

Cedric Villani works in several areas of mathematical physics, and particularly in the rigorous theory of continuum mechanics equations such as the Boltzmann equation.

Imagine a gas consisting of a large number of particles traveling at various velocities. To begin with, let us take a ridiculously oversimplified discrete model and suppose that there are only four distinct velocities that the particles can take, namely $v_1, v_2, v_3, v_4$. Let us also make the homogeneity assumption that the distribution of velocities of the gas is independent of position; then the distribution of the gas at any given time $t$ can be described by four densities $f(t,v_1), f(t,v_2), f(t,v_3), f(t,v_4)$ adding up to $1$, which describe the proportion of the gas that is currently traveling at velocity $v_1$, etc.

If there were no collisions between the particles that could transfer velocity from one particle to another, then all the quantities $f(t,v_i)$ would be constant in time: $\frac{\partial}{\partial t} f(t,v_i) = 0$. But suppose that there is a collision reaction that can take two particles traveling at velocities $v_1, v_2$ and change their velocities to $v_3, v_4$, or vice versa, and that no other collision reactions are possible. Making the heuristic assumption that different particles are distributed more or less independently in space for the purposes of computing the rate of collision, the rate at which the former type of collision occurs will be proportional to $f(t,v_1) f(t,v_2)$, while the rate at which the latter type occurs is proportional to $f(t,v_3) f(t,v_4)$. This leads to equations of motion such as

$$\frac{\partial}{\partial t} f(t,v_1) = \kappa \left( f(t,v_3) f(t,v_4) - f(t,v_1) f(t,v_2) \right)$$

for some rate constant $\kappa > 0$, and similarly for $f(t,v_2)$, $f(t,v_3)$, and $f(t,v_4)$. It is interesting to note that even in this simplified model, we see the emergence of an “arrow of time”: the rate of a collision is determined by the density of the initial velocities rather than the final ones, and so the system is not time reversible, despite being a statistical limit of a time-reversible collision from the velocities $v_1, v_2$ to $v_3, v_4$ and vice versa.
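The four-velocity model is simple enough to simulate directly. Here is a minimal sketch in Python (the rate constant, time step, and initial densities are illustrative choices, not values from the text) that integrates the equations of motion with forward Euler and checks two things: total mass is conserved, and the densities relax toward the equilibrium where $f(t,v_1) f(t,v_2) = f(t,v_3) f(t,v_4)$.

```python
# Forward-Euler integration of the four-velocity toy model.
# kappa, dt, and the initial densities are illustrative choices.
kappa = 1.0
f = [0.4, 0.3, 0.2, 0.1]   # densities f(t, v_1), ..., f(t, v_4), summing to 1
dt, steps = 0.01, 5000

for _ in range(steps):
    # net rate at which (v_3, v_4) collisions convert into (v_1, v_2) pairs
    rate = kappa * (f[2] * f[3] - f[0] * f[1])
    f[0] += dt * rate
    f[1] += dt * rate
    f[2] -= dt * rate
    f[3] -= dt * rate

print(sum(f))                          # total mass is conserved
print(abs(f[0] * f[1] - f[2] * f[3]))  # gap shrinks toward 0 at equilibrium
```

In fact the gap $f(t,v_3) f(t,v_4) - f(t,v_1) f(t,v_2)$ decays exponentially at rate $\kappa$ in this model, which is exactly the kind of "throttle" on disorder alluded to above.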

To take a less ridiculously oversimplified model, now suppose that particles can take a continuum of velocities, but let us still make the homogeneity assumption that the velocity distribution is independent of position, so that the state is now described by a density function $f(t,v)$, with $v$ now ranging continuously over $\mathbf{R}^3$. There is now a continuum of possible collisions, in which two particles with initial velocities $v', v'_*$ (say) collide and emerge with velocities $v, v_*$. If we assume purely elastic collisions between particles of identical mass $m$, then we have the law of conservation of momentum

$$mv' + mv'_* = mv + mv_*$$

and conservation of energy

$$\frac{1}{2} m |v'|^2 + \frac{1}{2} m |v'_*|^2 = \frac{1}{2} m |v|^2 + \frac{1}{2} m |v_*|^2.$$

Some simple Euclidean geometry shows that the pre-collision velocities $v', v'_*$ must be related to the post-collision velocities $v, v_*$ by the formulae

$$v' = \frac{v+v_*}{2} + \frac{|v-v_*|}{2} \sigma, \quad v'_* = \frac{v+v_*}{2} - \frac{|v-v_*|}{2} \sigma \qquad (1)$$

for some unit vector $\sigma \in S^2$. Thus a collision can be completely described by the post-collision velocities $v, v_* \in \mathbf{R}^3$ and the pre-collision direction vector $\sigma \in S^2$; assuming Galilean invariance, the physical features of this collision can in fact be described just using the relative post-collision velocity $v - v_*$ and the pre-collision direction vector $\sigma$. Using the same independence heuristics as in the four-velocity model, we are then led to the equation of motion

$$\frac{\partial}{\partial t} f(t,v) = Q(f,f)(t,v)$$

where $Q(f,f)$ is the quadratic expression

$$Q(f,f)(t,v) := \int_{\mathbf{R}^3} \int_{S^2} \left( f(t,v') f(t,v'_*) - f(t,v) f(t,v_*) \right) B(v-v_*,\sigma)\, d\sigma\, dv_*$$

for some Boltzmann collision kernel $B(v-v_*,\sigma) > 0$, which depends on the physical nature of the hard spheres, and needs to be specified as part of the dynamics. Here of course $v', v'_*$ are given by (1).
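Formula (1) is easy to sanity-check numerically: for any unit vector $\sigma$, the pre-collision velocities it produces carry exactly the same total momentum and kinetic energy as the post-collision pair. A quick sketch in Python (the particular velocities and sphere angles below are arbitrary illustrative choices):

```python
import math

def pre_collision(v, v_star, sigma):
    """Pre-collision velocities (v', v'_*) from post-collision v, v_*
    and a unit direction vector sigma, via formula (1)."""
    center = [(a + b) / 2 for a, b in zip(v, v_star)]
    half_gap = math.dist(v, v_star) / 2            # |v - v_*| / 2
    v_p = [c + half_gap * s for c, s in zip(center, sigma)]
    v_star_p = [c - half_gap * s for c, s in zip(center, sigma)]
    return v_p, v_star_p

v, v_star = [1.0, 0.0, 2.0], [-1.0, 3.0, 0.5]      # arbitrary post-collision pair
theta, phi = 1.1, 0.7                              # any point on the unit sphere
sigma = [math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta)]

v_p, v_star_p = pre_collision(v, v_star, sigma)
momentum_pre = [a + b for a, b in zip(v_p, v_star_p)]
momentum_post = [a + b for a, b in zip(v, v_star)]
energy_pre = sum(a * a for a in v_p) + sum(a * a for a in v_star_p)
energy_post = sum(a * a for a in v) + sum(a * a for a in v_star)
print(momentum_pre, momentum_post)   # equal componentwise
print(energy_pre, energy_post)       # equal (the mass m cancels from both sides)
```

The conservation laws hold for every choice of $\sigma$, which is why a single unit vector suffices to parametrize all elastic collisions with a given post-collision pair.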

If one now allows the velocity distribution to depend on the position $x \in \Omega$ in a domain $\Omega \subset \mathbf{R}^3$, so that the density function is now $f(t,x,v)$, then one has to combine the above equation with a transport equation, leading to the Boltzmann equation

$$\frac{\partial}{\partial t} f + v \cdot \nabla_x f = Q(f,f),$$

together with some boundary conditions on the spatial boundary $\partial \Omega$ that will not be discussed here.

One of the most fundamental facts about this equation is the Boltzmann H theorem, which asserts that (given sufficient regularity and integrability hypotheses on $f$, and reasonable boundary conditions) the $H$-functional

$$H(f)(t) := \int_{\mathbf{R}^3} \int_\Omega f(t,x,v) \log f(t,x,v)\, dx\, dv$$

is non-increasing in time, with equality if and only if the density function $f$ is Gaussian in $v$ at each position $x$ (with the mass, mean, and variance of the Gaussian distribution allowed to vary in $x$). Such distributions are known as Maxwellian distributions.

From a physical perspective, $H$ is the negative of the entropy of the system, so the H theorem is a manifestation of the second law of thermodynamics, which asserts that the entropy of a system is non-decreasing in time, thus clearly demonstrating the “arrow of time” mentioned earlier.
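The H theorem has an exact discrete analogue in the four-velocity toy model from earlier, where it can be checked numerically. The sketch below (parameters are illustrative choices, not from the text) tracks $H(t) = \sum_i f(t,v_i) \log f(t,v_i)$ along a forward-Euler trajectory and verifies that it never increases:

```python
import math

# Discrete analogue of the H theorem: H(t) = sum_i f_i log f_i should be
# non-increasing along the four-velocity toy dynamics.
kappa, dt, steps = 1.0, 0.01, 2000   # illustrative parameters
f = [0.4, 0.3, 0.2, 0.1]             # initial densities, summing to 1

H_values = []
for _ in range(steps):
    H_values.append(sum(x * math.log(x) for x in f))
    rate = kappa * (f[2] * f[3] - f[0] * f[1])
    f[0] += dt * rate
    f[1] += dt * rate
    f[2] -= dt * rate
    f[3] -= dt * rate

# H never increases, so the entropy -H never decreases: the arrow of time
print(all(b <= a + 1e-12 for a, b in zip(H_values, H_values[1:])))
print(H_values[0], H_values[-1])     # strictly lower at the end
```

A short computation confirms this in the continuous-time limit: along the flow, $\frac{dH}{dt} = \kappa \, (f_3 f_4 - f_1 f_2) \log \frac{f_1 f_2}{f_3 f_4}$, and the two factors always have opposite signs, so $\frac{dH}{dt} \le 0$ with equality exactly at equilibrium.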

There are considerable technical issues in ensuring that the derivation of the H theorem is actually rigorous for reasonable regularity hypotheses on $f$ (and on $B$), in large part due to the delicate and somewhat singular nature of “grazing collisions” when the pre-collision and post-collision velocities are very close to each other. Important work was done by Villani and his co-authors on resolving this issue, but this is not the result I want to focus on here. Instead, I want to discuss the long-time behaviour of the Boltzmann equation.

As the $H$ functional always decreases until a Maxwellian distribution is attained, it is then reasonable to conjecture that the density function $f$ must converge (in some suitable topology) to a Maxwellian distribution. Furthermore, even though the $H$ theorem allows the Maxwellian distribution to be non-homogeneous in space, the transportation aspects of the Boltzmann equation should serve to homogenise the spatial behaviour, so that the limiting distribution should in fact be a homogeneous Maxwellian. In a remarkable 72-page tour de force, Desvillettes and Villani showed that (under some strong regularity assumptions) this was indeed the case, and furthermore the convergence to the Maxwellian distribution was quite fast, faster than any polynomial rate of decay in fact. Remarkably, this was a large data result, requiring no perturbative hypotheses on the initial distribution (although a fair amount of regularity was needed). As is usual in PDE, large data results are considerably more difficult due to the lack of perturbative techniques that are initially available; instead, one has to rely primarily on such tools as conservation laws and monotonicity formulae. One of the main tools used here is a quantitative version of the H theorem (also obtained by Villani), but this is not enough; the quantitative bounds on entropy production given by the H theorem involve quantities other than the entropy, for which further equations of motion (or, more precisely, differential inequalities on their rate of change) must be found, by means of various inequalities from harmonic analysis and information theory. This ultimately leads to a finite-dimensional system of ordinary differential inequalities that control all the key quantities of interest, which must then be solved to obtain the required convergence.

Gee… talk about your run-of-the-mill finite-dimensional systems of ordinary differential inequalities. I mean, tell us something we don’t know, Monsieur medal winner.