OpenCV + Python 2.6 (on OS X)

So, I was trying to get OpenCV and NumPy to work together on my Mac (10.6). Problem was, NumPy wanted to live on Python 2.6, and OpenCV wanted to live on Python 2.7. Much misery was had.

Solution I found (finally!) was:

1. Install NumPy. (I can’t quite remember how to do this. Just do what the instructions say.)
2. Install OpenCV. Per the instructions, the steps are as follows:
cd ~/<my_working_directory>
svn co https://code.ros.org/svn/opencv/trunk
cd ~/<my_working_directory>/trunk/opencv # directory should contain INSTALL, CMakeLists.txt etc.
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D PYTHON_EXECUTABLE=/usr/bin/python2.6 ..
make
sudo make install

Note the BUILD_PYTHON_SUPPORT and PYTHON_EXECUTABLE flags to cmake. In the output to cmake, you should see:

Python:             ON
Python interpreter: /usr/bin/python2.6
Python numpy:       YES

under “Interfaces:”.

3. Test the install. Launch Python, and run:
import numpy
import cv

(Make sure the “release” directory is nice and clean before you run cmake & make. At one point, my release directory was in some weird state so that after I ran make, “import cv” gave me a segfault. Clearing out the directory and rerunning the install process made everything work!)

solving the “Rubin problem”

In “GOTO Considered Harmful” Considered Harmful (a response to Dijkstra’s famous letter), Frank Rubin gives an example of a very simple problem which he claims is best solved with the judicious use of a GOTO statement (*gasp*):

Let X be an N×N matrix of integers. Write a program that will print the number of the first all-zero row of X, if any.

The reason why this is (supposedly) hard without GOTO is that it involves breaking/continuing out of a nested loop; most structured languages only support breaking/continuing the smallest loop a statement is contained in. So, all Rubin does with his GOTO is simulate a labelled continue statement. Personally, I think labelled break/continue statements are a very natural feature for a structured language to have*, and I am thoroughly unconvinced by Rubin’s claim that this is somehow a general defense of GOTO.
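For what it’s worth, Python’s for…else clause gets you the effect of Rubin’s labelled continue without labels or GOTO: the else branch runs only when the inner loop finishes without hitting break. A sketch of the Rubin problem with it:

```python
def first_zero_row(arr):
    """Index of the first all-zero row of arr, or -1 if there is none."""
    for i, row in enumerate(arr):
        for cell in row:
            if cell != 0:
                break        # this row has a non-zero cell; move on
        else:
            return i         # inner loop never broke: all-zero row found
    return -1
```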

But Dijkstra’s response takes a different, more conceptual approach. Dijkstra claims that the problem’s statement immediately calls for a “bounded linear search” inside of a “bounded linear search”. Since it is well-understood how to construct a loop to perform a bounded linear search, a GOTO-less solution becomes apparent.

Well, at least in the abstract. I realized I didn’t really know how to implement this in Python. One important ingredient was a for-loop which only keeps running as long as some condition is satisfied (and its iterator hasn’t run out). I didn’t want to do this by putting a conditional “break” at the end of the loop: that would replace loop semantics with tangled procedural flow. And I certainly didn’t want to use a while loop and iterate the iterator by hand: that’s just ugly. But I realized that I should be able to make an iterator which “filters” another iterator by a condition. Turns out it already exists, as itertools.takewhile. With that, we have:

from itertools import takewhile

def first_zero_row(arr):
    i = -1  # guard against an empty matrix
    row_found = False
    for i, row in enumerate(takewhile(lambda x: not row_found, arr)):
        cell_found = False
        for cell in takewhile(lambda x: not cell_found, row):
            if cell != 0: cell_found = True
        if not cell_found: row_found = True
    return i if row_found else -1


Honestly, though, any time you see something like lambda x: not row_found you know you’re being a bit too clever for your own good. Also, although this is a straightforward translation of the solution Dijkstra presents, it obfuscates the underlying concept of the “nested bounded linear search”. We have the luxury of functional programming in Python; we can encode things very, very cleanly:

def BLS(iterable, cond):
    # bounded linear search: index of the first item satisfying cond, or -1
    for i, item in enumerate(iterable):
        if cond(item):
            return i
    return -1

def first_zero_row(arr):
    return BLS(arr,
               lambda row: -1 == BLS(row,
                                     lambda cell: cell != 0))


first_zero_row‘s a one-liner! An inscrutable, nested-lambda’d one-liner, but these are the sacrifices we make for conceptual purity.

Of course, if I were actually solving this problem for non-academic reasons, I’d just say:

def first_zero_row(arr):
    zero_rows = [i for i, row in enumerate(arr)
                 if all(cell == 0 for cell in row)]
    return zero_rows[0] if zero_rows else -1


But where’s the fun in that?
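(A small variant, for the record: the comprehension above scans every row just to take the first hit. Feeding a generator to next() short-circuits at the first all-zero row instead:

```python
def first_zero_row(arr):
    # next() with a generator stops scanning at the first all-zero row;
    # the second argument is the default if no such row exists.
    return next((i for i, row in enumerate(arr)
                 if all(cell == 0 for cell in row)), -1)
```

Same answer, one pass, no intermediate list.)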

* – Although, regrettably, Python doesn’t support them; see Guido’s rejection of the relevant PEP.

parameter feedback with Traktor & BCR2000

I was trying to get parameter feedback to work between Traktor and my BCR2000, and couldn’t really find an explanation online. I eventually figured it out, though, so here is the fruit of my labor, for our Googling posterity.

On the BCR2000:
Set each controller which you want to feedback-control to “Relative 2” mode. This is explained on pp. 14–15 of the manual, but essentially, it goes like this:

• Hold down EDIT while twiddling your control, until the LED display registers, then release.
• Turn Push Encoder 6 (Mode) until the LED display says “rEL2”, for Relative 2.
• Press EXIT.

In Traktor:
Well, this is the obnoxious part. For each controller, you must add two entries to the Assignment Table in the Controller Manager — one for input and one for output:

• Input:
• Train using “Learn”.
• CRUX: Set Type of Controller to “Encoder”, Interaction Mode to “Relative”, and Enc.-Mode. to “3Fh/41h”.
• Output:
• Set the device mapping to whatever Traktor detected when you did your learning above.

And I think that’s it. Note that you can now do such things as adjust the sensitivity of the knob from within Traktor (since Traktor is handling the internal state of the controller).

quantitative Qwantzology

You guys know Dinosaur Comics, right? Cool.

Every comic contains at least three hidden comments (easter eggs)… The third is found in the RSS feed of the comic and the archive page, being, essentially, the comic’s title.

So I noticed a while ago that the way this “archive title” is used has changed substantially over time. Back in the old days, it was short and sweet, e.g. “lesbians!”. But at some point, it started getting larger and larger until, by the end of 2006, you had such monstrosities as “alternate ending: god says ‘T-REX SOMEONE HAS ALREADY OPENED A STORE CALLED THE RELATIONSHOP’ and t-rex yells to utahraptor that God says the idea is already taken, but that they’re going back in time to prevent the idea from being preemptively stolen, and god’s all ‘I NEVER SAID THAT BUT DAAAMN LET’S GO BACK IN FRIGGIN’ TIME’ and then they all meet marty mcfly”. Hope I didn’t give away any spoilers there?

Anyway, I finally got around to graphing it! Binned by month, with error bars on the means:

Woaaaah that’s actually more interesting than I was expecting. What kind of transition is going on here? Starting towards the end of 2005, and continuing into 2006, the archive-title length undergoes a precipitous rise. It saturates by the arrival of 2007… Why the sudden rise? And more importantly, why the saturation? With continued exponential rise, we could have had 1023-character archive titles. Some mysterious force is at play here.

We may never fully understand the cause for this curve. Or maybe I’ll email Ryan North and ask what’s up.

A completely tangential note on procedure: There’s something interesting going on here philosophically, in that I’m assuming there is some True Perfect Archive-Title Length for any given month — the error bars represent how certain I am of what this Platonic character-count is. As weird as this sounds, assuming the existence of a hidden average is exactly what you do when you imagine that you can describe the growth of the title-text as a nice smooth curve. That’s not what the raw data looks like at all. It looks like a huge mess of points. But we can imagine that these points are just noisy reflections of some more meaningful underlying moving average. That’s what I’m talking about here.
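For the curious, the binning itself is nothing fancy. A sketch of the computation (the titles list here is a hypothetical input, not the real scraped data); the error bars are the standard error of the mean:

```python
from collections import defaultdict
import math

def monthly_title_stats(titles):
    """titles: iterable of (datetime.date, archive_title) pairs.
    Returns {(year, month): (mean_length, standard_error)}."""
    bins = defaultdict(list)
    for date, title in titles:
        bins[(date.year, date.month)].append(len(title))
    stats = {}
    for month, lengths in bins.items():
        n = len(lengths)
        mean = sum(lengths) / float(n)
        # sample variance; zero error bar when a month has only one comic
        var = (sum((x - mean) ** 2 for x in lengths) / (n - 1)) if n > 1 else 0.0
        stats[month] = (mean, math.sqrt(var / n))
    return stats
```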

timing planes with sound

If you’re standing outside and a plane passes overhead, it sounds like it’s coming from a bit further back than it looks like it’s coming from. This is because light travels practically instantaneously (c = 299,792,458 m/s = 670,616,629 mph, so you’re 0.00004 light-seconds from a plane 12000m in the air) while sound takes a while (vs = 340 m/s = 761 mph, so you’re a whopping 35 sound-seconds from the same plane).

You can use this to estimate the speed of a plane. In particular if you know

• $$v_s$$ — the speed of sound, and
• $$\theta$$ — the angle between the visual and acoustic images of the plane,

you can find the speed of the plane: it’s just $$v_s\theta$$! (Assuming you measure $$\theta$$ in radians.) I should really make a sketch, but here’s the idea:

Suppose it takes time $$T$$ for the sound from the plane to reach the ground. Then, calling the height of the plane $$h$$, we have:

$$v_s T = h$$.

In this time, the plane moves an angle $$\theta$$, as seen from the ground. This angle corresponds to an actual distance of $$h \theta$$ (if we measure $$\theta$$ in radians). So:

$$v_p T = h\theta$$.

If we divide these two formulas, woah, $$T$$ and $$h$$ cancel out! All you’re left with is

$$v_p/v_s = \theta$$,

or

$$v_p = v_s\theta$$.

Nifty. There’s probably a neat practical “rule-of-thumb” version of this people could use (with their thumbs?) to quickly time planes without having to use radians or anything like that. I am, as of yet, too lazy to come up with it.
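In the meantime, the formula itself is one line of code. A sketch, taking the angle in degrees for convenience (340 m/s is the ground-level speed of sound from above):

```python
import math

def plane_speed(theta_degrees, v_sound=340.0):
    """v_p = v_s * theta (theta in radians): estimated speed of the plane,
    in m/s, from the angle between its visual and acoustic positions."""
    return v_sound * math.radians(theta_degrees)
```

A 40-degree lag, say, comes out to about 237 m/s, which is at least the right ballpark for a jet.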

[Note: Speed of sound varies with altitude, going down to about 295 m/s at cruising altitude.]

all about: angular momentum in atoms

On my last test, I had trouble with a couple questions because I wasn’t totally clueful about how angular momentum works in atoms. I should fix this.

Orbital angular momentum should come naturally: Our standard way of understanding electron states in atoms is with orbitals, which are chosen to have well-defined angular momentum quantum number l and magnetic quantum number m. (Essentially, l is the magnitude of the angular momentum vector and m is the z-component of the angular momentum. Of course this is kind of a lie but whatever.)

But every electron has not only orbital angular momentum from its motion around the nucleus, but also spin angular momentum, which is intrinsic. This spin always has s=1/2, so along any given axis ms=±1/2.

The question is: what happens when we add l+s? That is, what is the total angular momentum of the electron? We’ll call this j. By looking at vector magnitudes, we see that the angular momentum quantum number j must lie between $$l+\frac{1}{2}$$ and $$|l-\frac{1}{2}|$$ (note the careful absolute value…). Since j has to be an integer or half-integer, we see that we must have j=l±1/2, unless l=0 in which case j=1/2. For each of these j values, we have the standard set of mj values from +j to -j.

What if we have an atom with a bunch of electrons in it? What does its angular momentum look like? Well, if a subshell (fixed n & l) is full, we know that it contributes neither orbital nor spin angular momentum — everything cancels out. So we need only look at partially-filled subshells. We fill such subshells according to Hund’s rules. Rules 1 & 2 tell you how to place your spins, rule 3 tells you how to get a total angular momentum J from this.

Ex: a P subshell

Term     ml=+1  ml=0  ml=-1   L    S    J    Notes
2P1/2     ↑                   1   1/2  1/2   less than half filled; lowest J value is lower-energy
3P0       ↑     ↑             1    1    0    (“)
4S3/2     ↑     ↑      ↑      0   3/2  3/2   half filled; L=0, only one possibility
3P2       ↑↓    ↑      ↑      1    1    2    more than half filled; highest J value is lower-energy
2P3/2     ↑↓    ↑↓     ↑      1   1/2  3/2   (“)
1S0       ↑↓    ↑↓     ↑↓     0    0    0    (“)
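Hund’s rules for a single subshell are mechanical enough to code up. A sketch of my own (not from any textbook) that reproduces the p-subshell table: fill spin-up electrons from high ml downward, then spin-down, and read off S, L, and J:

```python
from fractions import Fraction

def term_symbol(n_electrons, l=1):
    """Ground-state term symbol, as '2S+1 L J' text, for n electrons in one subshell."""
    ml = list(range(l, -l - 1, -1))          # ml values: +l, ..., -l
    n_orb = len(ml)
    up = min(n_electrons, n_orb)             # Rule 1: maximize S (up-spins first)
    down = n_electrons - up
    S = Fraction(up - down, 2)
    L = abs(sum(ml[:up]) + sum(ml[:down]))   # Rule 2: maximize L
    J = abs(L - S) if n_electrons <= n_orb else L + S   # Rule 3
    return "%s%s%s" % (2 * S + 1, "SPDFGH"[L], J)
```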

We denote electron configurations with term symbols. They look like 2S+1LJ. L is given in “spectroscopic notation” (S/P/D/F/etc.). The “2S+1” part is called spin multiplicity. A singlet state is one with total spin 0, a doublet state is one with total spin 1/2, a triplet state is one with total spin 1, etc. We’re counting the number of possible mS values here, which, if L=0, is the number of split energy levels in the presence of a magnetic field.

Example: The base state of neutral helium is 1s2, with spins like ↑↓ in the 1s subshell. This has zero total spin and zero total orbital angular momentum, so it’s 1S0, a singlet. The first excited state of helium is 1s12s1, with two up spins. This has total spin 1 and zero total orbital angular momentum, so it’s 3S1, a triplet. The next excited state is also 1s12s1, but with antiparallel spins, giving term symbol 1S0, a singlet. Hund’s rules are what tell us this is a higher-energy excited state.

epiphany of the day

A tree is just a graph of a stack over time.

Completely unrelatedly, here’s Joel on Software on ‘People who are Smart but don’t Get Things Done’:

These kind of people can be identified because they love to point out the theoretical similarity between two widely divergent concepts. For example, they will say, “Spreadsheets are really just a special case of programming language,” and then go off for a week and write a thrilling, brilliant whitepaper about the theoretical computational linguistic attributes of a spreadsheet as a programming language. Smart, but not useful.

Or: “How photons and electrons say hello”

Low energy — Photoelectric effect
This is the first one you learn: a photon knocks an electron out of its atomic orbit. It is most likely to occur at low energies… as you move up in energy it becomes more likely that the photon will be scattered rather than absorbed.

Medium energy — Compton scattering
While in the photoelectric effect the energy of the incoming photons is absorbed completely by the electrons, at higher energies the photon will instead bounce off the electron, leaving some of its energy/momentum behind in the recoil.

Using relativistic energy/momentum formulas, you can derive the wavelength shift $$\lambda’-\lambda = \frac{h}{m_e c}\left(1-\cos\theta\right)$$ (higher wavelength ⇒ lower energy).
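Plugging in numbers: the prefactor h/(m_e c) is the Compton wavelength of the electron, about 2.43 pm. A sketch:

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s

def compton_shift(theta):
    """Wavelength shift (meters) of a photon scattered through angle theta (radians)."""
    return (H / (M_E * C)) * (1 - math.cos(theta))
```

At θ = 90° the shift is exactly one Compton wavelength; at 180° (backscatter) it’s twice that, the maximum possible.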

High energy — Pair production
γ → e- + e+ looks pretty reasonable, right? If the photon had enough energy, it could account for the mass of the created electron-positron pair, and charge is certainly conserved, so why not? Well, consider this: Given the conservation of momentum, output energy will be minimized by having the electron and positron each take half the photon’s original momentum. But this gives $$E_\mathrm{out}=2\sqrt{\left(\frac{pc}{2}\right)^2+(m_e c^2)^2}>pc=E_\mathrm{in}$$, so even in this best case we don’t have enough energy to support the electron’s/positron’s momentum.

All that means is that we need some other ingredient in the mix. One good option is an atomic nucleus… When the photon gets near, it can allow the nucleus to absorb some of its momentum, to make electron-positron pair production possible. This is why pair-production is a form of light-matter interaction, rather than just something light does on its own.

(And no matter what, this is definitely going to be a high-energy interaction: the incoming photon must have energy $$E = pc > 2m_e c^2 \approx 1.022\ \mathrm{MeV}$$.)

Some people have numbers as their Twitter names. Which ones?

Click above to see full-size

The full-size image is 1000 pixels high by 1000 pixels wide. Each pixel represents a number from 0 to 999,999. The row number gives the first three digits (0-999), the column number gives the last three digits (0-999)… so the pixels are ordered like letters on a page of English text. The pixel is black if the number is taken as a Twitter name, white if the number is still available.
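The mapping between number and pixel is just integer division, for the record:

```python
def number_to_pixel(n):
    """Number 0..999999 -> (row, col) in the 1000x1000 image described above."""
    row, col = divmod(n, 1000)   # row: first three digits; col: last three
    return row, col
```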

Let me know if you’ve got any ingenious hypotheses, or if you want the data. (Although, in a very real sense, the image above is the data…)

[Secret bonus: Here's the image in the far more hierarchically-revealing Z-order (which I was delighted to discover has a name and Wikipedia page, since otherwise I would have had a hell of a time trying to explain it).]
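(The Z-order index is just bit interleaving; a sketch, in case the Wikipedia explanation is more than you need:

```python
def z_order(x, y, bits=10):
    """Morton (Z-order) index: interleave the bits of x and y.
    bits=10 covers coordinates 0..1023, enough for a 1000x1000 grid."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x bits land in even positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y bits land in odd positions
    return z
```

Sorting pixels by this index is what groups hierarchically-similar numbers together.)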

So the electron in the hydrogen atom is just a particle in a spherically-symmetric 1/r potential… you’ve got a ladder of energy eigenvalues indexed by a quantum number n. The nth eigenvalue has degeneracy n2, but that’s cool; picking an axis z, the squared orbital angular momentum operator L² and the z-axis angular momentum operator Lz give a complete set of commuting observables (together with H), so you get yer n, l, ml eigenstates.

And you think everything’s cool, everything’s ok. Wrong, because physics gets in the way of all this math fun. A number of physical effects (in various environments & regimes) break spherical symmetry and perturb our energy levels from their sweetly degenerate state. Here are some of them.

Normal Zeeman effect
An orbital with $$L_z\neq 0$$ has a non-zero magnetic moment about the z-axis (spinning electron ⇒ little loop of current ⇒ magnetic dipole). This means that it interacts with the z-component of a magnetic field. The potential of this interaction is $$V=-\mu B_z$$ where μ is the dipole moment. How to calculate this? $$L_z = m_e v r$$ and $$\mu = I A = \frac{-e v}{2\pi r} \pi r^2 = -\frac{e}{2} v r$$: comparing these gives $$\mu = -\frac{e}{2 m_e} L_z$$. (The coefficient in front is the “Bohr magneton”, within a factor of $$\hbar$$.)

Since the spacing between Lz-values is $$\hbar$$, the spacing between the split energy levels will be $$(e\hbar B_z)/(2 m_e)$$, which is the Larmor frequency times $$\hbar$$.
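In numbers: the spacing $$(e\hbar B_z)/(2 m_e)$$ is the Bohr magneton times $$B_z$$. A sketch:

```python
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def zeeman_spacing(B_z):
    """Energy spacing (J) between adjacent normal-Zeeman levels in a field B_z (tesla)."""
    return E_CHARGE * HBAR * B_z / (2 * M_E)
```

At one tesla this works out to about 9.27e-24 J, roughly 6e-5 eV: tiny compared to the eV-scale spacing between the unperturbed levels, which is why the effect shows up as a fine splitting of spectral lines rather than as whole new lines.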

Also interesting: Due to some symmetry magic, the eigenstates remain the same; only the eigenvalues change. Huh.

Anomalous Zeeman effect
Thing is, we’ve been talking about orbital angular momentum this whole time. There’s also the electron’s intrinsic “spin” angular momentum, whose component along the z direction can be $$\pm\hbar/2$$. WEIRDLY enough, however, when computing a magnetic moment from this, you have to multiply by a “g-factor” of two. This shouldn’t surprise you, though, because spin was really weird in the first place.

The end result of all of this: Each state you get from normal Zeeman splitting splits by $$\pm (e\hbar B_z)/(2 m_e)$$, which was the spacing between these states in the first place. We get two extra energy levels.

Spin–orbit interaction / Fine structure
An orbiting electron sees a magnetic field produced by the movement of the nucleus. This magnetic field interacts with the spin dipole: when the field is parallel to the dipole (that is, anti-parallel to the spin), energy is minimized. Two possible values of spin ⇒ a new splitting. This produces a feature of atomic spectra known as fine structure doublets.

Stark effect
Just like an external magnetic field breaks symmetry and causes splitting in the Zeeman effect, an external electric field will cause a form of splitting known as the Stark effect. The reason I put this in a second, subsidiary position is that it is way more complicated: our Hamiltonian now has a 1/r+z potential, which is going to completely screw up the old energy eigenstates. So you do perturbation theory, and get first/second-order corrections in the small-field limit. That’s quantum mechanics for you, I guess.

Hyperfine structure
This is real small. It includes things like electron-dipole/nuclear-dipole and electric-field/nuclear-quadrupole interactions.

Oh, and when it comes to computing spectra from these: have I mentioned selection rules? No? Well, those are pretty important too; better look them up.