Sunday, 28 November 2021

Exact and closed differential forms

This post explains exact and closed differential forms. The most general definitions work with differential forms on m-dimensional manifolds, but let us begin by looking at 1-forms (line integrals) and a pair of definitions.
Definition: a line integral \(\int\omega\) over a domain \(D\) is exact if its value depends only on the path's initial and end points (i.e. it is independent of the path taken).
Definition: a line integral \(\int\omega\) over a domain \(D\) is closed if it integrates to zero over every closed loop.
Theorem: Assume that the vector field (1-form) has continuous partial derivatives and that the domain is simply connected (the hypotheses can be relaxed for some of the implications, but we assume them for simplicity). The following are equivalent:
  1. The line integral is exact.
  2. The line integral is closed.
  3. The 1-form (vector field) \(\omega\) is a gradient (i.e. has an anti-derivative).

The two non-trivial directions:
(1)=>(3): This follows because we can construct the anti-derivative (with some arbitrary starting point \(\vec{x}_0\)) $$ F(\vec{x}) = \int_{\vec{x}_0}^{\vec{x}} \omega, $$ which is well-defined precisely because the integral is path-independent. The proof is similar to that of the traditional fundamental theorem of calculus.
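Concretely, writing \(\omega = P\,dx + Q\,dy\) in two dimensions, path independence lets us reach \(\vec{x}\) by a path whose last leg is a short horizontal segment, so
$$
\frac{\partial F}{\partial x}(x,y) = \lim_{h \to 0} \frac{1}{h}\int_x^{x+h} P(t,y)\,dt = P(x,y)
$$
by continuity of \(P\), and similarly for the \(y\) partial.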
(2)=>(1): Concatenating one path with the reversal of another path sharing the same endpoints produces a closed loop, so if every loop integral vanishes, the two path integrals must agree. (Green's theorem enters below, when we relate the loop condition to the mixed-partials condition.)

More:
One dimension
In one dimension, the fundamental theorem of calculus tells us that every continuous function \(f\) has an antiderivative, so the 1-form \(f\,dx\) is always exact.
Two dimensions
In two dimensions, if a vector field is exact (i.e. is a gradient), then its mixed derivatives must be equal: if $$ \vec{F}(x,y) = \nabla U(x,y), $$ then, denoting the \(x\) and \(y\) components of \(\vec{F}\) by \(P\) and \(Q\), $$ \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}. $$ This condition is called being closed; it is just the equality of the mixed second partials of \(U\).
If Green's theorem applies, then closedness implies exactness. Green's theorem in two dimensions requires simple connectedness and a C1 vector field. Any domain, if made small enough, is simply connected, so closed really means locally exact. The intuition behind the simple-connectedness requirement is exactly this: simple connectedness allows us to break the domain down into very small boxes on which the vector field is locally exact.
When integrating a closed vector field around a very small circle, Green's theorem applies and the line integral over the closed path gives zero. We call such a vector field irrotational, because the quantity \(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\), which vanishes for a closed field, measures the infinitesimal tendency of a fluid to rotate when the vector field describes the fluid's velocity.
Of course there are vector fields defined on regions that are not simply connected that are still exact. The standard \(\vec{F}(\vec{r}) = \frac{1}{|r|}\hat{r}\) fits this: its potential in two dimensions is \(\ln(|r|)\).
There are also vector fields that meet the closed condition but are not exact. Consider $$ \vec{F}(x,y) = \left(\frac{-y}{x^2 + y^2}, \frac{x}{x^2 + y^2} \right), $$ whose integral around the unit circle is \(2\pi\) rather than zero.
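As a quick sanity check, here is a small Python sketch (my own, using numpy and sympy; not from the original post) verifying that this field is closed yet integrates to \(2\pi\) around the unit circle:

import numpy as np
import sympy as sp

# The "angle form": closed on the punctured plane, but not exact.
x, y = sp.symbols('x y', real=True)
P = -y / (x**2 + y**2)
Q = x / (x**2 + y**2)

# Closed: the mixed partials agree wherever the field is defined.
print(sp.simplify(sp.diff(P, y) - sp.diff(Q, x)))  # 0

# Not exact: integrate the field around the unit circle (Riemann sum).
t = np.linspace(0.0, 2.0 * np.pi, 200001)
cx, cy = np.cos(t), np.sin(t)        # the unit circle
dxdt, dydt = -np.sin(t), np.cos(t)   # its velocity
integrand = (-cy / (cx**2 + cy**2)) * dxdt + (cx / (cx**2 + cy**2)) * dydt
print(np.sum(integrand[:-1]) * (t[1] - t[0]))  # ~6.28319 = 2*pi, not 0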
Three dimensions
The analogue of Green's theorem in 3D is Stokes' theorem. Here, Stokes' theorem applies to a closed curve as long as the curve bounds some surface on which the C1 vector field has no discontinuities. So in 3D there is much more leeway in applying the "closed => exact" logic.
Of course this all makes sense, because simple connectedness is an easier condition to meet in 3D than in 2D. One would need a whole line of singularities to destroy it; for example, we could simply extend the above vector field to $$ \vec{F}(x,y,z) = \left(\frac{-y}{x^2 + y^2}, \frac{x}{x^2 + y^2}, 0 \right), $$ which is singular along the entire z-axis.
Complex Analysis and Cauchy's Theorem
Applying the two-dimensional Green's theorem to a function normally requires that its partial derivatives be continuous. Cauchy's integral theorem says that loop integrals of holomorphic functions vanish (i.e. holomorphic 1-forms are exact) without a priori requiring continuous partials; this is the content of the Goursat version of the theorem.
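To connect this to the language of this post: writing \(f = u + iv\) and \(dz = dx + i\,dy\),
$$
f(z)\,dz = (u\,dx - v\,dy) + i\,(v\,dx + u\,dy),
$$
and the closed conditions for these two real 1-forms, \(\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}\) and \(\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}\), are exactly the Cauchy-Riemann equations.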

Sunday, 14 July 2019

Mobius strip parametrization

Recently I thought about a way to parametrize the Mobius strip embedded in 3D.

The intuition was the following:

If someone were to look "down" along the center circle of the strip, they would see the cross-section, a line segment, rotate by 180 degrees over one trip around.  A segment of this kind in 2D is very simply parameterized by:

$$
x_c(r,\psi) = r \cos(\psi) \\
y_c(r,\psi) = r \sin(\psi)
$$

Now we need to wrap this around a circle.  This means that the plane the segment sits in should be normal to the tangent vector of the circle in 3D.

I choose to wrap my Mobius strip around the z axis with overall radius \(R\).  So I have the parameterization

$$
x = R \cos (\phi) \\
y = R \sin (\phi) \\
z = 0
$$

It's not hard to see that at every point on this circle, two basis vectors for the plane normal to the tangent vector are given by:

$$
\hat e_1 = (\cos \phi, \sin \phi, 0) \\
\hat e_2 = (0, 0, 1)
$$

With offset
$$
(R \cos \phi, R \sin \phi, 0)
$$

Together, using the equations for \((x_c, y_c)\) with the segment angle set to \(\phi\) and the angle around the \(z\) axis set to \(2\phi\) (so the segment makes half a turn per full revolution), I get the overall parametrization:

$$
x_c(r,\phi)\,\hat e_1(2\phi) + y_c(r,\phi)\,\hat e_2(2\phi) + (R \cos 2\phi, R \sin 2\phi, 0)
$$

Using matplotlib, I have something like:



import math

import matplotlib
matplotlib.use('Qt5Agg')  # pick an interactive backend before pyplot is imported
import matplotlib.pyplot as plt
import numpy as np

R = 4  # overall radius of the strip's center circle

def mobius(r, phi):
    # Segment coordinates in the normal plane: the segment angle is phi,
    # while the angle around the z axis is 2*phi, so the segment makes
    # half a turn per full revolution.
    x_ = r * math.cos(phi)
    y_ = r * math.sin(phi)
    e1 = np.array([math.cos(2*phi), math.sin(2*phi), 0])  # radial direction
    e2 = np.array([0, 0, 1])                              # vertical direction
    center = np.array([R*math.cos(2*phi), R*math.sin(2*phi), 0])
    return x_*e1 + y_*e2 + center


# Generate the Mobius mesh
phi = np.linspace(-0.5*np.pi, 0.5*np.pi, 50)
r = np.linspace(-1, 1, 50)
r, phi = np.meshgrid(r, phi)
X = np.empty(r.shape)
Y = np.empty(r.shape)
Z = np.empty(r.shape)
for (i, j), r_scalar in np.ndenumerate(r):
    X[i, j], Y[i, j], Z[i, j] = mobius(r_scalar, phi[i, j])

# Display the mesh
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.set_xlim3d(-5, 5)
ax.set_ylim3d(-5, 5)
ax.set_zlim3d(-1, 1)
ax.plot_surface(X, Y, Z, color='w', rstride=1, cstride=1)
plt.show()
Output looks like: [3D surface plot of the Mobius strip]

Tuesday, 16 October 2018

An intuitive explanation for the chain rule

The chain rule allows us to take the derivative of a composition of two functions.  It has the form
$$(f\circ g)'(x)=f'(g(x))g'(x)$$

Intuitively, why should it be a product of the two functions' individual derivatives?

We can answer that by looking at Taylor expansions.
If we wiggle \(x\) in \(g(x)\) we get
$$
g(x+\delta)\approx g(x)+g'(x)\delta
$$

That means if we wiggle \(x\) by \(\delta\), \(g\) gets wiggled by \(g'(x)\delta\).

A function \(f(y)\) would change in a very similar way if we wiggle \(y\):
$$
f(y+\delta') \approx f(y) + f'(y)\delta'
$$

We replace \(\delta'\) by \(g'(x)\delta\) because that is how much the "y" slot is perturbed when we perturb \(x\) by \(\delta\).

Overall, we would get
$$
f(g(x+\delta)) \approx f(g(x))+f'(g(x))g'(x)\delta
$$
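This approximation is easy to test numerically.  Here is a small Python sketch (my own illustration, with arbitrarily chosen \(f\) and \(g\)) comparing the finite-difference slope of \(f \circ g\) with \(f'(g(x))\,g'(x)\):

import math

# Pick a concrete f and g whose derivatives we know.
f, fprime = math.sin, math.cos
g = lambda x: x**2
gprime = lambda x: 2*x

x, delta = 1.3, 1e-6
finite_difference = (f(g(x + delta)) - f(g(x))) / delta
chain_rule = fprime(g(x)) * gprime(x)
print(finite_difference, chain_rule)  # agree to about 6 decimal places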

Thursday, 28 December 2017

Permutations and n-dimensional rotations

The question that motivates today's post is, how do we define rotations in n-dimensions?

Let's write the standard Euclidean basis as \((e_1, e_2, \ldots)\).

In two dimensions, a sequence of 90-degree counter-clockwise rotations takes the standard basis through the following sequence:

$$(e_1, e_2), (e_2, -e_1), (-e_1, -e_2), (-e_2, e_1)$$

So each quarter-turn rotation swaps the two basis vectors and negates the new second one.

A similar situation happens in 3 dimensions for rotations about each of the three coordinate axes.

This means it makes sense to think of (counter-clockwise) rotations as operations that preserve an alternating tensor product.  WLOG, we can take that alternating product to be the determinant of the matrix whose columns are the basis vectors.

But a rotation about the origin also has the property that it preserves distances; that is, it is an isometry:

$$\| Rx \| = \|x\|$$

We can quickly derive the following properties:
If \(x \cdot y = 0\), then \(Rx \cdot Ry = 0\) (by polarization), so the off-diagonal entries \([R^tR]_{(i,j)} = Re_i \cdot Re_j\) vanish.
\(\| Re_i\| = 1\), so \([R^tR]_{(i,i)} = \|Re_i\|^2 = 1\).

This means that \(R^tR\) must be the identity matrix.  Hence \(\det(R)^2 = \det(R^tR) = 1\), so \(\det(R) = \pm1\).  Restricting it to +1 would make the isometry a proper rotation.  If we allow -1, it is not hard to see that it would introduce reflections into \(R\).

This all makes sense.  If we "locally rotate" two axes in n dimensions, meaning we fix the other n-2 axes and rotate the remaining 2 in the ordinary sense, we preserve both lengths and the sign of the determinant, as the sketch below confirms.  This means that our intuitive sense of 2-dimensional rotation extends successfully to n dimensions.
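Here is a small numpy sketch (my own illustration) of such a local rotation of two axes in n dimensions, often called a Givens rotation, confirming that it is an isometry with determinant +1:

import numpy as np

def givens(n, i, j, theta):
    # Rotate axes i and j of R^n by theta, fixing the other n-2 axes.
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

G = givens(5, 1, 3, 0.7)
print(np.allclose(G.T @ G, np.eye(5)))  # True: an isometry, R^tR = I
print(np.linalg.det(G))                 # 1.0: a proper rotation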

Monday, 13 November 2017

Two failed modifications of Paxos: towards a better understanding of the protocol

This post walks through two failed attempts at modifying Paxos; seeing why they fail builds a better understanding of why the protocol works.

The Paxos algorithm solves the following problem:
There are n processes.  Each process starts with a random value of 0 or 1.
Processes communicate via messages.  Messages might be lost or arbitrarily delayed without bound.
At some time in the future, all processes need to decide on one of the two values.  There must be a way to find out when the decision has been made and what value it is.  Furthermore, to prevent trivial solutions, the decision outcome must be one of the starting values (i.e., if all processes start with 0, they cannot ever decide on 1).  Once a decision is reached, it must never change.

The following algorithm solves the problem assuming that a majority of servers can communicate with each other.  If they cannot, it just means that a decision will not be reached (a liveness failure); there will never be a case where a decision is reached but later changed (a safety violation).

Each machine keeps a local counter and is also assigned a unique server_id offline before everything starts.
Each server tries to make proposals:
1. Send a "prepare-to-propose" message to all machines (or a majority) containing its ballot number, which is the pair (counter, server_id).  Then increase the counter.
2. When it receives replies from a majority, do the following:
Each reply should contain a (max_ballot_accepted, accepted_value) pair but might be null.  If every reply from the majority is null, then send a propose message containing the ballot (counter, server_id) with any value.  However, if not every reply is null, then take the reply with the largest max_ballot_accepted and propose that reply's accepted_value.
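A minimal Python sketch (my own illustration; the names are hypothetical) of this choice rule:

def choose_value(replies, my_value):
    # replies: one (max_ballot_accepted, accepted_value) pair per responder,
    # or None for a responder that has not accepted anything yet.
    accepted = [r for r in replies if r is not None]
    if not accepted:
        return my_value  # everyone replied null: free to propose any value
    # Otherwise, propose the value attached to the largest accepted ballot.
    return max(accepted, key=lambda r: r[0])[1]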

Each server should also listen for prepare-to-propose messages and proposals.  It should keep track of max_ballot_accepted, max_ballot_promised, and accepted_value.
1. Upon receiving a prepare-to-propose, if the ballot is smaller than max_ballot_promised, ignore it; if it is greater, then update max_ballot_promised and reply with (max_ballot_accepted, accepted_value).
2. Upon receiving a proposal, check whether its ballot is greater than or equal to max_ballot_promised; if so, accept the proposal and update our state.
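And a matching sketch of the acceptor's state and handlers (again my own illustration; ballots are (counter, server_id) pairs, which Python compares lexicographically):

class Acceptor:
    def __init__(self):
        self.max_ballot_promised = None
        self.max_ballot_accepted = None
        self.accepted_value = None

    def on_prepare(self, ballot):
        # Ignore ballots no greater than what we have already promised.
        if self.max_ballot_promised is not None and ballot <= self.max_ballot_promised:
            return None
        self.max_ballot_promised = ballot
        return (self.max_ballot_accepted, self.accepted_value)

    def on_propose(self, ballot, value):
        # Accept any proposal at least as large as our promise.
        if self.max_ballot_promised is None or ballot >= self.max_ballot_promised:
            self.max_ballot_promised = ballot
            self.max_ballot_accepted = ballot
            self.accepted_value = value
            return True
        return False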

When is consensus reached?  Consensus is reached when a majority of acceptors have accepted the same ballot.  We can find out when that happens if, for example, all the processes send out a status update whenever a proposal is accepted.  Of course, the status update might be lost, and there is no way of knowing whether the message was lost.  But we are guaranteed that the consensus value will never change once it is reached (safety).

Some intuition on why it works:
1. The prepare-to-propose step serves as a barrier ensuring that any accepted ballots will be "known" by later ballots.
2. By proposing only the value of the largest accepted ballot from a majority, we ensure that newly proposed values will preserve the consensus value if consensus has been reached.

Once some majority M has accepted a ballot, any proposer proposing a higher ballot number must not have sent its "prepare-to-propose" message to this majority until after the acceptance.  Otherwise, the prepare-to-propose message would have served as an obstacle to getting this particular ballot accepted at at least one common member.
Lower ballot numbers suffer the same issue.
Now, after receiving replies to its prepare-to-propose messages from a majority M', the proposer knows M' has at least one member in common with M.  We can do strong induction on the ballot number.  If the current ballot is n, we may assume that all proposed ballots between the originally accepted ballot and n-1 (inclusive) propose the same consensus value.  The common member of M' and M would have replied with a ballot number at least as great as the originally accepted ballot number.  Since the proposer must propose the value corresponding to the largest ballot number it hears, it must also propose the consensus value.

To help us understand the protocol better, let's ask whether we can modify it in two ways.
1. Instead of sending out a "prepare-to-propose" message, which extracts a promise not to accept lower numbers, can we simply query for proposed values without asking for a promise?  We would still block out lower ballots at the acceptance step.
The issue is that concurrently prepared ballots can override the consensus value.
We could reach a state like the following (each row represents one process, showing its accepted ballot and value):
1, v1
1, v1
1, v1
2, v2
null
Consider the following sequence of messages (p1 stands for process one).
p1.PrepareToPropose(ballot 1).SendTo(1, 2, 3)
p2.PrepareToPropose(ballot 2).SendTo(1, 2, 3)
p1.Received.PrepareResponse(null, null, null).Propose(1, v1).SendTo(1, 2, 3)
(p1, p2, p3).Accept(ballot 1, v1)
p2.Received.PrepareResponse(null, null, null).Propose(2, v2).SendTo(4) // messages to other members of the majority are lost
p4.Accept(ballot 2, v2)

This is happening because p2 does not know that ballot 1 is already in progress.

2. Is it possible to declare that consensus is reached as soon as a majority has picked the same value?  (Right now, we require a majority to accept the same ballot.)
No.  After a majority has picked the same value, the consensus value might still change.
We again show each process's accepted ballot and value, but now also show the internal "promise" counter in parentheses.
To simplify, we only show global state transitions; the reader can fill in the messages and lost messages.

1: v1 (1)
1: v1 (1)
2: v2 (2)
2: v2 (2)
null (2) (proposal is lost, but prepare was received)

1: v1 (3) // proposal three only reaches the fifth process
1: v1 (3)
2: v2 (2)
2: v2 (2)
3: v1 (3) // here, we are wrong to think that consensus has been reached

A proposer now prepares ballot 4 and gets responses from p1, p2, p3.  The largest accepted ballot it sees is (2, v2) from p3, so it must propose v2.

4: v2 (4)
4: v2 (4)
4: v2 (4)
2: v2 (2)
3: v1 (3) // consensus value changed

Sunday, 18 September 2016

Reducible state space Markov chains

While reviewing a paper last night, I started thinking about reducible state space Markov chains.  Most undergraduate probability students are fed theorems about irreducible Markov chains.  To summarize these theorems:


  • If two states communicate, then they are either both recurrent or both transient.
  • If a MC is irreducible and all states are positive recurrent, then there exists a unique stationary distribution.
  • If a MC is irreducible, aperiodic, and has a stationary distribution, then it converges to that stationary distribution from any initial distribution.

We can ask a set of questions about reducible Markov chains to test our intuition for the subject.  Set up a trivial 4-state Markov chain where the first two and the last two states form separate communicating classes, and ask:
  • How many stationary distributions exist?
  • Do limiting distributions exist?
  • Does the limiting distribution depend on the initial distribution?  How many possibilities are there?

Let's try to answer them.
  • There are an infinite number of stationary distributions.  If you take the stationary distribution of the first class and the stationary distribution of the second class (each extended by zeros to all four states), then any convex combination (with weights summing to 1) of these two is a stationary distribution of the full 4-state Markov chain.
  • Limiting distributions do exist (assuming each class is aperiodic).  Let's think about the intuition here.  If the total probability mass on the first two states is m_0 and the rest is 1 - m_0, then starting from this initial distribution, each iteration of the chain would "stir" this vector, but only within the respective classes.  So the first two entries of the vector converge to the first class's limiting distribution (scaled by m_0), and likewise for the last two.  This means the overall limiting distribution is a mixture of the individual limiting distributions weighted by the (m_0, 1 - m_0) vector; in particular, it depends on the initial distribution precisely through m_0.
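Here is a small numpy sketch (my own example) of such a chain, with each class aperiodic, showing that the limit depends on the initial split m_0:

import numpy as np

# Block-diagonal transition matrix: states {0,1} and {2,3} never communicate.
P = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.3, 0.7],
])

mu0 = np.array([0.6, 0.0, 0.4, 0.0])  # m_0 = 0.6 on the first class
limit = mu0 @ np.linalg.matrix_power(P, 1000)
print(limit)  # ~[0.4, 0.2, 0.15, 0.25]
# The first two entries sum to m_0 = 0.6 and are proportional to the first
# class's stationary distribution (2/3, 1/3); similarly for the second class.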

Wednesday, 19 December 2012

Course Catalog: Classical Composition

Description: This is a course on classical music composition that will teach the principles of melody, harmony, and orchestration.  However, this course does not teach you to form your musical ideas.  That comes from your talent.  If you don't have it, you can become an orchestrator.  The distinguished alumni of this course usually break with the principles we teach them and write great music nevertheless.  (Shostakovich, Debussy)

Course Credit: None

Pre-Requisites: No technical pre-requisite.  However, one should try to live a life of calamity, solitude, and mental instability through the achievement of at least one of the following:


  1. Unfulfilled or unrequited love for a string of women outside of your social class, for example members of the aristocracy or royalty.  See Beethoven.
  2. If you are homosexual, the unfortunate need to suppress your orientation due to the social pressures of your society.  See Tchaikovsky.
  3. High infant mortality among your children.  See Bach.
  4. Loss of either vision or hearing.  (See Beethoven or Rodrigo.)
  5. Life under a repressive Communist dictatorship.  (Shostakovich)
  6. Lastly, your wife having an affair with one of your best friends.  (Schumann)