The question that motivates today's post is: how do we define rotations in n dimensions?
Let's write the standard Euclidean basis as \((e_1, e_2, \dots)\).
In two dimensions, repeated counter-clockwise rotation by 90° takes the standard basis through the following cycle:
$$(e_1, e_2) \to (e_2, -e_1) \to (-e_1, -e_2) \to (-e_2, e_1) \to (e_1, e_2)$$
So each rotation swaps the two basis vectors and negates the new second one.
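A quick numerical sketch of this cycle, using NumPy (the matrix \(R_{90}\) below is just the standard 2D rotation by 90°, not something from the derivation above):

```python
import numpy as np

# Counter-clockwise rotation by 90 degrees: e1 -> e2, e2 -> -e1.
R90 = np.array([[0, -1],
                [1,  0]])

basis = np.eye(2)  # columns are e1, e2
for k in range(4):
    print(f"after {k} quarter-turns:\n{basis}")
    basis = R90 @ basis

# After four quarter-turns the basis returns to (e1, e2).
assert np.array_equal(basis, np.eye(2))
```

The columns of `basis` walk through exactly the four configurations listed above before returning to the identity.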
The same pattern appears in three dimensions for rotations about each of the three coordinate axes.
This suggests thinking of (counter-clockwise) rotations as operations that preserve an alternating tensor product. Without loss of generality, we can take that alternating product to be the determinant of the matrix whose columns are the basis vectors.
But a rotation about the origin also preserves distances; that is, it is an isometry:
$$\| Rx \| = \|x\|$$
We can quickly derive the following properties:
Since \(R\) is linear and preserves norms, polarization gives \(Rx \cdot Ry = x \cdot y\) for all \(x, y\); in particular, \(x \cdot y = 0 \Rightarrow Rx \cdot Ry = 0\).
\( \| Re_i\| = 1 \Rightarrow [R^tR]_{(i,i)} = 1 \)
Together these say that \(R^tR\) is diagonal with 1s on the diagonal, i.e., \(R^tR = I\). Taking determinants, \(\det(R)^2 = 1\), so \(\det(R) = \pm 1\). Restricting to \(+1\) makes the isometry a proper rotation. If we allow \(-1\), it is not hard to see that it would introduce reflections into \(R\).
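A minimal sketch of this distinction in NumPy (the angle and the particular reflection matrix are arbitrary illustrative choices): both matrices below satisfy \(R^tR = I\), but only the rotation has determinant \(+1\).

```python
import numpy as np

theta = 0.7  # an arbitrary angle
c, s = np.cos(theta), np.sin(theta)

rotation = np.array([[c, -s],
                     [s,  c]])
reflection = np.array([[c,  s],
                       [s, -c]])  # reflection across the line at angle theta/2

for R in (rotation, reflection):
    # Both are isometries: R^t R = I.
    assert np.allclose(R.T @ R, np.eye(2))
    # The determinant separates them: +1 for the rotation, -1 for the reflection.
    print(round(float(np.linalg.det(R))))
```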
This all makes sense. If we "locally rotate" two axes in n dimensions, that is, fix the other \(n-2\) axes and rotate the remaining two in the ordinary 2D sense, the determinant is preserved. So our intuitive sense of 2-dimensional rotation extends successfully to n dimensions.
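This "local rotation" of two axes is what numerical linear algebra calls a Givens rotation. A sketch in \(n = 4\) dimensions (the choice of \(n\), the axis pair, and the angle are all arbitrary illustrations, not anything fixed by the argument above):

```python
import numpy as np

def givens(n, i, j, theta):
    """Identity on all axes except i and j, which it rotates by theta."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

G = givens(4, 1, 3, 0.3)

# It is an isometry that preserves orientation: G^t G = I and det(G) = +1.
assert np.allclose(G.T @ G, np.eye(4))
assert np.isclose(np.linalg.det(G), 1.0)
```

The determinant of the rotated 2×2 block is \(\cos^2\theta + \sin^2\theta = 1\), and the fixed axes contribute a factor of 1 each, so the overall determinant stays \(+1\), as claimed.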