I’m recently back from my vacation in Bilbao. Aside from the usual “getting away from it all”, the first highlight was the amazing pintxos, Basque tapas like squid-and-ink croquettes and piles of jamón ibérico. With a full tummy, I could handle Frank Gehry’s spectacular Guggenheim Bilbao, one of the most beautiful buildings in the world. Sheathed in somehow-billowing titanium, the museum floats next to the river, and once you get used to its undoubted weirdness, you see it as part of the cityscape, with people seated at the cafes that surround it, others jogging past, and kids playing in the water jets that serve as a fountain, flush with the plaza around it. Inside, the glass, limestone and steel give it the beauty of a cathedral, but without the hushed tones.
Also inside, permanently installed in a football-field-sized gallery (amusingly sponsored by the steelmaker Arcelor), is Richard Serra’s “The Matter of Time”, a collection of the sculptor’s Torqued Ellipses, Spirals, and Snakes. The pieces are curving sections of reddened Cor-Ten steel, and the biggest problem with the display is that you’re not allowed to touch them, when what you really want to do is rub your entire body against the plates (apologies if that’s more about me than you wanted to know).
In news about another kind of big iron, the US National Energy Research Scientific Computing Center (NERSC) has chosen Cray to supply its next major machine. Cray has been making supercomputers since the early 70s; it was absorbed into Silicon Graphics in 1996, at the start of the last tech boom/bubble, and sold off again in 2000, and it produced the fondly remembered T3E in the 90s, back when supercomputing was largely in support of science and engineering (as opposed to serving web pages). The new system will have almost 20,000 dual-core processors, but what matters if you’re doing science (or at least the kind that I’m most interested in) is the way that those processors are wired together: we don’t want to do 20,000 individual calculations; we need to do one calculation that’s 20,000 times too big to fit on one machine. To date, NERSC has probably supported more CMB-related supercomputing than anywhere else, and we all hope that the new machine will enable us to do even more, in particular to analyze data from coming experiments like the Planck Surveyor.
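To make that distinction concrete, here’s a minimal sketch of the style of computation involved, written against MPI, the standard message-passing library on machines like this. The vector sizes and data are made up for illustration: each process holds only its own slice of a problem too big for any single node, and the final answer requires every processor to communicate, which is exactly why the interconnect matters.

    /* A minimal, hypothetical sketch of one calculation spread over
       many processors (not the actual NERSC workload).  Compile with
       mpicc, run with mpirun -np <N> ./sketch.                       */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process owns a local chunk of a vector; taken
           together the chunks form one array far too big to fit
           in a single node's memory.  The data here is a stand-in. */
        const long n_local = 1000000;
        double local_sum = 0.0;
        for (long i = 0; i < n_local; i++) {
            double x = (double)(rank * n_local + i);
            local_sum += x * x;
        }

        /* One global result from all the pieces: every process
           contributes, and how fast this step runs depends on how
           well the processors are wired together.                   */
        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %g (from %d processes)\n",
                   global_sum, nprocs);

        MPI_Finalize();
        return 0;
    }

The point of the sketch is the single MPI_Allreduce at the end: 20,000 independent jobs would never need it, whereas one 20,000-way calculation lives or dies by it.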