- `apt-cache policy `*package-name* (shows installed & candidate versions)
- `dlocate -S `*file-name* (searches installed packages)
- `apt-file search `*file-name* (searches candidate packages)
- `dpkg -L `*package-name* (lists the files of an installed package)

---

My solution is to co-opt the IJKL key-group, using Caps-Lock as the modifier. Put the following in `~/.Xmodmap`:

```
keycode 31 = i I Up
keycode 44 = j J Left
keycode 45 = k K Down
keycode 46 = l L Right
keysym Caps_Lock = Mode_switch
```

This should set things correctly every time your X session starts. To see the effects immediately, run `xmodmap ~/.Xmodmap`. If you mess something up, `setxkbmap -layout us` (or other codes for other layouts) will set it right… until `~/.Xmodmap` is reloaded, anyway.

$$\begin{align}
Z(J)&=\int D\phi\,e^{i\int d^4x\,\{\frac{1}{2}[(\partial \phi)^2-m^2\phi^2]+J\phi\}}\\
&=\int D\phi\,e^{i\int d^4x\,\{-\frac{1}{2}\phi(\partial^2+m^2)\phi+J\phi\}}.
\end{align}$$

(Integration by parts allows us to replace \((\partial\phi)^2\) with \(-\phi(\partial^2\phi)\) in the exponent’s integrand.)

The exponent here is really \(\frac{i}{2}\phi\cdot A\cdot\phi+iJ\cdot\phi\), where we’re thinking of \(\phi\) and \(J\) as vectors in the space of field configurations (something like “\(\mathbb{R}^{1,3}\rightarrow\mathbb{R}\)”) and \(A=-(\partial^2+m^2)\) is an operator on this space. Our “path integral” is really just an integral over all field configurations (or at least those which vanish at infinity), so we are in the same situation as Appendix 1 of I.2 — computing a Gaussian integral over a vector space.

And we have a solution to this integral:

$$Z(J)=\left(\frac{(2\pi i)^N}{\det[A]}\right)^{1/2}e^{-(i/2)J\cdot A^{-1}\cdot J}.$$

The coefficient on the left is kind of mysterious: \(N\) is the “dimension” of the vector space, which should be infinity. Perhaps some clever limit arguments could make sense of it (make a lattice out of space), but we don’t really care; we’ll just call it \(Z(J=0)\). Then we have:

$$Z(J)=Z(J=0)e^{i W(J)}$$

where

$$W(J)=-\frac{1}{2}\int d^4x\,d^4y\,J(x)A^{-1}(x,y)J(y).$$

By translation invariance, \(A^{-1}(x,y)\) must be some function \(D(x-y)\). Since it’s the inverse of \(A=-(\partial^2+m^2)\), we must have $$ -(\partial^2+m^2)D(x-y)=\delta^{(4)}(x-y)$$ (the Dirac delta is the identity operator on continuous space). So \(D(x)\), called the free propagator, is the Green’s function for the Klein-Gordon equation.

That was all prelude. The real topic of this blog post is: how do we determine \(D(x)\)? Zee doesn’t really motivate his work here, so I thought I would give it a try.

The Klein-Gordon equation \(-(\partial^2+m^2)\phi=0\) has a solution \(\phi_k(x)=e^{ikx}\) for every 4-vector \(k\) with \(k^2=m^2\). (You are probably more familiar with this written as \(\phi_{\omega,\vec{k}}=e^{i(\omega t - \vec{k}\cdot\vec{x})}\) for \(\omega^2 = \vec{k}^2+m^2\).) That’s great, but we have a source term (that Dirac delta) on the right-hand side, so we want \(\phi\)s with \(-(\partial^2+m^2)\phi\neq0\). One way to do this is to take wave solutions with \(k^2\neq m^2\):

$$\begin{align}
-(\partial^2+m^2)e^{ikx} &= (k^2-m^2)e^{ikx} \\
-(\partial^2+m^2)\frac{e^{ikx}}{k^2-m^2} &= e^{ikx}.
\end{align}$$

Huh; so we have the power to create any wave on the right-hand side, as long as it has \(k^2\neq m^2\). Using linearity & Fourier analysis, we can combine these to form (almost) any source term. In particular, since

$$\delta^{(4)}(x) = \int\frac{d^4k}{(2\pi)^4}e^{ikx},$$

we will find:

$$-(\partial^2+m^2)\int\frac{d^4k}{(2\pi)^4}\,\frac{e^{ikx}}{k^2-m^2} = \delta^{(4)}(x).$$

So we have a solution for the free propagator:

$$D(x)=\int\frac{d^4k}{(2\pi)^4}\,\frac{e^{ikx}}{k^2-m^2}.$$

But there’s a flaw with this line of thought: When \(k^2=m^2\) (a “Minkowski hyperboloid” of vectors), the integrand is undefined. Is the integral as a whole still defined? I would like to think that it is, that this is simply a measure-0 set we can integrate around. For their part, QFT people like to compute the integral by pushing the denominator away from zero *in the complex plane* (important, since the denominator can be either positive or negative!) and then taking this push to zero. That is:

$$D(x)=\lim_{\epsilon\rightarrow0}\int\frac{d^4k}{(2\pi)^4}\,\frac{e^{ikx}}{k^2-m^2+i\epsilon}.$$

Taking the \(\lim\) as implied whenever we have an \(\epsilon\), and separating \(k\) into space and time components, we have:

$$\begin{align}
D(x) &= \int\frac{d^3\vec{k}}{(2\pi)^3}\int\frac{d\omega}{2\pi}\,\frac{e^{i\omega t}e^{-i\vec{k}\cdot\vec{x}}}{\omega^2-\vec{k}^2-m^2+i\epsilon} \\
&= \int\frac{d^3\vec{k}}{(2\pi)^3}e^{-i\vec{k}\cdot\vec{x}}\int\frac{d\omega}{2\pi}\,\frac{e^{i\omega t}}{\omega^2-\omega_k^2+i\epsilon}
\end{align}$$

(where \(\omega_k^2 = \vec{k}^2+m^2\), as before). That inside integral looks like fun; let’s do it.

[I finally figured out how to actually do contour integration, and it's pretty awesome. I'm too lazy to write it up here, though.]
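For reference (quoting the standard result rather than deriving it here): the \(+i\epsilon\) pushes the two poles to \(\omega=\pm(\omega_k-i\epsilon')\), so closing the contour in the upper half-plane for \(t>0\) (the lower for \(t<0\)) encircles exactly one pole, and the residue gives

$$\int\frac{d\omega}{2\pi}\,\frac{e^{i\omega t}}{\omega^2-\omega_k^2+i\epsilon}=-\frac{i}{2\omega_k}\,e^{-i\omega_k|t|}.$$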

And a conceptual note which doesn’t deserve its own post: I keep on getting confused while playing with path integrals, thinking I’m accidentally doing quantum *mechanics* because I’m connecting classical field configurations to classical field configurations. There are some pretty easy mnemonic tricks to convince myself that this is wrong, such as remembering that these classical field configurations could be made of real numbers for all I care, or some far more exotic species. Meanwhile, a QM wave function really has to be made of complex numbers.

But there’s something deeper going on here, which is that the path integral formulation really is fundamentally quasi-classical. It operates on a single, privileged basis of the quantum field space — the classical field space. And to clarify, this has nothing to do with “position”. Even when we go to momentum space we are simply reparameterizing the basis of classical fields. In table form:

| | Physical space | Classical field space | Quantum field space |
|---|---|---|---|
| An \(X\)-valued field on Minkowski space | \(\mathbb{R}^{3}\) | \(\mathbb{R}^{3}\rightarrow X\) | \((\mathbb{R}^{3}\rightarrow X)\rightarrow \mathbb{C}\) |
| QM in 3 dimensions | \(\mathbb{R}^{0}\) | \(\mathbb{R}^{0}\rightarrow \mathbb{R}^3\) | \((\mathbb{R}^{0}\rightarrow \mathbb{R}^3)\rightarrow \mathbb{C}\) |

I’m a little unclear as to the best way to work time into this picture… Especially for relativistic fields, it makes sense to think of the field as being defined over all of spacetime, instead of just on a single space-slice. But this breaks the analogy with quantum mechanics, and it makes the concept of a “path” more amorphous.

---

The solution I found (finally!) was:

- Install NumPy. (I can’t quite remember how to do this. Just do what the instructions say.)
- Install OpenCV. Per the instructions, the steps are as follows:
```
cd ~/<my_working_directory>
svn co https://code.ros.org/svn/opencv/trunk
cd ~/<my_working_directory>/trunk/opencv   # directory should contain INSTALL, CMakeLists.txt etc.
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D PYTHON_EXECUTABLE=/usr/bin/python2.6 ..
sudo make install
```

Note the `BUILD_PYTHON_SUPPORT` and `PYTHON_EXECUTABLE` flags to `cmake`. In the output of `cmake`, under “Interfaces:”, you should see:

```
Python: ON
  Python interpreter: /usr/bin/python2.6
  Python numpy: YES
```

- Test the install. Launch Python, and run:

```
import numpy
import cv
```

Nothing bad should happen!

(Make sure the “release” directory is nice and clean before you run cmake & make. At one point, my release directory was in some weird state so that after I ran make, “import cv” gave me a segfault. Clearing out the directory and rerunning the install process made everything work!)

---

Let X be an N×N matrix of integers. Write a program that will print the number of the first all-zero row of X, if any.

The reason why this is (supposedly) hard without GOTO is that it involves breaking/continuing out of a nested loop; most structured languages only support breaking/continuing the smallest loop a statement is contained in. So, all Rubin does with his GOTO is simulate a labelled continue statement. Personally, I think labelled break/continue statements are a very natural feature for a structured language to have*, and I am thoroughly unconvinced by Rubin’s claim that this is somehow a general defense of GOTO.
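As an aside, Python already has a structured idiom that covers Rubin’s exact pattern without labels: the `for`/`else` clause, where the `else` block runs only if the loop finished without hitting `break`. A quick sketch (mine, not from either paper):

```python
def first_zero_row(arr):
    """Return the index of the first all-zero row of arr, or None."""
    for i, row in enumerate(arr):
        for cell in row:
            if cell != 0:
                break  # nonzero cell: this row is out; move on
        else:
            # Inner loop ran to completion, so every cell was zero.
            return i
    return None
```

The inner `for`/`else` here is exactly a bounded linear search: it either breaks on a witness or falls through to the `else`.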

But Dijkstra’s response takes a different, more conceptual approach. Dijkstra claims that the problem’s statement immediately calls for a “bounded linear search” inside of a “bounded linear search”. Since it is well-understood how to construct a loop to perform a bounded linear search, a GOTO-less solution becomes apparent.

Well, at least in the abstract. I realized I didn’t really know how to implement this in Python. One important ingredient was a for-loop which only keeps running as long as some condition is satisfied (and its iterator hasn’t run out). I didn’t want to do this by putting a conditional “break” at the end of the loop: that would replace loop semantics with tangled procedural flow. And I certainly didn’t want to use a while loop and iterate the iterator by hand: that’s just ugly. But I realized that I should be able to make an iterator which “filters” another iterator by a condition. Turns out it already exists, as `itertools.takewhile`. With that, we have:

```
from itertools import takewhile

def first_zero_row(arr):
    row_found = False
    for i, row in enumerate(takewhile(lambda x: not row_found, arr)):
        cell_found = False
        for cell in takewhile(lambda x: not cell_found, row):
            if cell != 0:
                cell_found = True
        if not cell_found:
            row_found = True
    return i if row_found else None
```

Honestly, though, any time you see something like `lambda x: not row_found` you know you’re being a bit too clever for your own good. Also, although this is a straightforward translation of the solution Dijkstra presents, it obfuscates the underlying concept of the “nested bounded linear search”. We have the luxury of functional programming in Python; we can encode things very, very cleanly:

```
def BLS(iter, cond):
    return next((x for x in iter if cond(x)), None)

def first_zero_row(arr):
    found = BLS(enumerate(arr),
                lambda (i, row): not BLS(row, lambda cell: cell != 0))
    return found[0] if found else None
```

`first_zero_row`’s practically a one-liner! An inscrutable, nested-lambda’d one-liner, but these are the sacrifices we make for conceptual purity.

Of course, if I were actually solving this problem for non-academic reasons, I’d just say:

```
def first_zero_row(arr):
    return next((i for i, row in enumerate(arr)
                 if all(cell == 0 for cell in row)), None)
```

But where’s the fun in that?

* – Although, regrettably, Python doesn’t support them; see Guido’s rejection of the relevant PEP.

---

I was trying to get parameter feedback to work between Traktor and my BCR2000, and couldn’t really find an explanation online. I eventually figured it out, though, so here is the fruit of my labor, for our Googling posterity.

__On the BCR2000:__

Set each controller which you want to feedback-control to “Relative 2” mode. This is explained on pp. 14-15 of the manual, but essentially, it goes like this:

- Hold down EDIT while twiddling your control, until the LED display registers, then release.
- Turn Push Encoder 6 (Mode) until the LED display says “rEL2”, for Relative 2.
- Press EXIT.

__In Traktor:__

Well, this is the obnoxious part. For each controller, you must add two entries to the Assignment Table in the Controller Manager — one for input and one for output:

- Input:
- Add using “Add In…”.
- Train using “Learn”.
- **CRUX:** Set Type of Controller to “Encoder”, Interaction Mode to “Relative”, and Enc.-Mode to “3Fh/41h”.

- Output:
- Add using “Add Out…”.
- Set the device mapping to whatever Traktor detected when you did your learning above.

And I think that’s it. Note that you can now do such things as adjust the sensitivity of the knob from within Traktor (since Traktor is handling the internal state of the controller).

---

Let’s read a bit about it from Wikipedia, the Free Encyclopedia:

> Every comic contains at least three hidden comments (easter eggs)… The third is found in the RSS feed of the comic and the archive page, being, essentially, the comic’s title.

So I noticed a while ago that the way this “archive title” is used has changed substantially over time. Back in the old days, it was short and sweet, e.g. “*lesbians!*”. But at some point, it started getting larger and larger until, by the end of 2006, you had such monstrosities as “*alternate ending: god says ‘T-REX SOMEONE HAS ALREADY OPENED A STORE CALLED THE RELATIONSHOP’ and t-rex yells to utahraptor that God says the idea is already taken, but that they’re going back in time to prevent the idea from being preemptively stolen, and god’s all ‘I NEVER SAID THAT BUT DAAAMN LET’S GO BACK IN FRIGGIN’ TIME’ and then they all meet marty mcfly*”. Hope I didn’t give away any spoilers there?

Anyway, I finally got around to graphing it! Binned by month, with error bars on the means:

Woaaaah that’s actually more interesting than I was expecting. What kind of transition is going on here? Starting towards the end of 2005, and continuing into 2006, the archive-title length undergoes a precipitous rise. It saturates by the arrival of 2007… Why the sudden rise? And more importantly, *why the saturation*? With continued exponential rise, we could have had 10^{23}-character archive titles. Some mysterious force is at play here.

We may never fully understand the cause for this curve. Or maybe I’ll email Ryan North and ask what’s up.

*A completely tangential note on procedure:* There’s something interesting going on here philosophically, in that I’m assuming there is some True Perfect Archive-Title Length for any given month — the error bars represent how certain I am of what this Platonic character-count is. As weird as this sounds, assuming the existence of a hidden average is exactly what you do when you imagine that you can describe the growth of the title-text as a nice smooth curve. That’s not what the raw data looks like *at all*. It looks like a huge mess of points. But we can imagine that these points are just noisy reflections of some more meaningful underlying moving average. That’s what I’m talking about here.

You can use this to estimate the speed of a plane. In particular, if you know

- \(v_s\) — the speed of sound, and
- \(\theta\) — the angle between the visual and acoustic images of the plane,

you can find the speed of the plane: it’s just \(v_s\theta\)! (Assuming you measure \(\theta\) in radians.) I should really make a sketch, but here’s the idea:

Suppose it takes time \(T\) for the sound from the plane to reach the ground. Then, calling the height of the plane \(h\), we have:

- \(v_s T = h\).

In this time, the plane moves an angle \(\theta\), as seen from the ground. This angle corresponds to an actual distance of \(h \theta\) (if we measure \(\theta\) in radians). So:

- \(v_p T = h\theta\).

If we divide these two formulas, woah, \(T\) and \(h\) cancel out! All you’re left with is

- \(v_p/v_s = \theta\),

or

- \(v_p = v_s\theta\).

Nifty. There’s probably a neat practical “rule-of-thumb” version of this people could use (with their thumbs?) to quickly time planes without having to use radians or anything like that. I am, as of yet, too lazy to come up with it.

[Note: Speed of sound varies with altitude, going down to about 295 m/s at cruising altitude.]
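Here’s the formula as a quick code sketch (the function name, default speed, and sample angle are mine, purely for illustration):

```python
import math

def plane_speed(theta_degrees, v_sound=295.0):
    """Estimate a plane's speed from the angle theta between its visual
    and acoustic images: v_p = v_s * theta (theta in radians)."""
    return v_sound * math.radians(theta_degrees)

# A cruising plane whose sound lags its image by 30 degrees:
print(plane_speed(30.0))  # about 154 m/s
```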

---

Orbital angular momentum should come naturally: Our standard way of understanding electron states in atoms is with *orbitals*, which are chosen to have well-defined angular momentum quantum number *l* and magnetic quantum number *m*. (Essentially, *l* is the magnitude of the angular momentum vector and *m* is the *z*-component of the angular momentum. Of course this is kind of a lie but whatever.)

But every electron has not only orbital angular momentum from its motion around the nucleus, but also spin angular momentum, which is intrinsic. This spin always has *l*_{s} = 1/2, so along any given axis its component is *m*_{s} = ±1/2.

The question is: what happens when we add **l**+**s**? That is, what is the *total* angular momentum of the electron? We’ll call this **j**. By looking at vector magnitudes, we see that the angular momentum quantum number *j* must lie between \(l+\frac{1}{2}\) and \(|l-\frac{1}{2}|\) (note the careful absolute value…). Since *j* has to be an integer or half-integer, we see that we must have *j* = *l* ± 1/2, unless *l* = 0, in which case *j* = 1/2. For each of these *j* values, we have the standard set of *m*_{j} values from −*j* to +*j*.

What if we have an atom with a bunch of electrons in it? What does its angular momentum look like? Well, if a subshell (fixed *n* & *l*) is full, we know that it contributes neither orbital nor spin angular momentum — everything cancels out. So we need only look at partially-filled subshells. We fill such subshells according to Hund’s rules. Rules 1 & 2 tell you how to place your spins, rule 3 tells you how to get a total angular momentum *J* from this.

__Ex: a P subshell__

| Term | m_{l} = +1 | m_{l} = 0 | m_{l} = −1 | L | S | J | Notes |
|---|---|---|---|---|---|---|---|
| ^{2}P_{1/2} | ↑ | | | 1 | 1/2 | 1/2 | less than half filled; lowest J value is lower-energy |
| ^{3}P_{0} | ↑ | ↑ | | 1 | 1 | 0 | (“) |
| ^{4}S_{3/2} | ↑ | ↑ | ↑ | 0 | 3/2 | 3/2 | half filled; L=0, only one possibility |
| ^{3}P_{2} | ↑↓ | ↑ | ↑ | 1 | 1 | 2 | more than half filled; highest J value is lower-energy |
| ^{2}P_{3/2} | ↑↓ | ↑↓ | ↑ | 1 | 1/2 | 3/2 | (“) |
| ^{1}S_{0} | ↑↓ | ↑↓ | ↑↓ | 0 | 0 | 0 | (“) |
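The Hund’s-rules bookkeeping above is mechanical enough to code up. Here’s a sketch (the helper `hund_term` is hypothetical, not anything standard): fill the m_{l} slots from +l downward with up spins first, then compute S, L, and J per rules 1-3.

```python
from fractions import Fraction

def hund_term(l, n):
    """Ground-state (S, L, J) for n electrons in a subshell with orbital
    quantum number l, per Hund's rules (assumes 1 <= n <= 2*(2l+1))."""
    orbitals = 2 * l + 1
    n_up = min(n, orbitals)        # rules 1 & 2: maximize S, then L
    n_down = n - n_up              # the rest pair off as down spins
    S = Fraction(n_up - n_down, 2)
    m_slots = list(range(l, -l - 1, -1))  # m_l values from +l down to -l
    L = abs(sum(m_slots[:n_up]) + sum(m_slots[:n_down]))
    # Rule 3: lowest J if the subshell is half-filled or less, else highest.
    J = abs(L - S) if n <= orbitals else L + S
    return S, L, J

# Reproduce the six rows of the p-subshell table (l = 1):
for n in range(1, 7):
    print(n, hund_term(1, n))
```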

We denote electron configurations with term symbols. They look like ^{2S+1}*L _{J}*.

Example: The ground state of neutral helium is 1s^{2}, with spins like ↑↓ in the 1s subshell. This has zero total spin and zero total orbital angular momentum, so it’s ^{1}S_{0}, a singlet. The first excited state of helium is 1s^{1}2s^{1}, with two up spins. This has total spin 1 and zero total orbital angular momentum, so it’s ^{3}S_{1}, a triplet. The next excited state is 1s^{↑}2s^{↓}, with term symbol ^{1}S_{0}, a singlet. Hund’s rules are what tell us this is a higher-energy excited state.

Completely unrelatedly, here’s *Joel on Software* on ‘People who are Smart but don’t Get Things Done’:

> These kind of people can be identified because they love to point out the theoretical similarity between two widely divergent concepts. For example, they will say, “Spreadsheets are really just a special case of programming language,” and then go off for a week and write a thrilling, brilliant whitepaper about the theoretical computational linguistic attributes of a spreadsheet as a programming language. Smart, but not useful.