Monday, July 25, 2011

News from the solar neighborhood

The past week was a remarkable one for planetary science. Not only did we have a probe arriving at an asteroid, but the solar family also welcomed a new member.

On one side, the probe Dawn arrived at asteroid Vesta and is now in orbit around it. The story behind the mission is quite interesting: it was actually cancelled several times and only went ahead after the contractor (Orbital Sciences Corporation) offered to build the probe at cost. I certainly don't know of any other case where the contractor ends up offering a whole probe at cost.

Vesta as seen from Dawn after it entered orbit.

The scientific program of this probe is quite ambitious. It plans to visit two of the biggest members of the asteroid belt, Vesta and Ceres, which were chosen as representatives of "young" and "evolved" asteroids. Additionally, Vesta is (allegedly) the source of many of the meteorites that produce shooting stars in the sky, which made it a particularly attractive target of study. This will also be the first time a single probe orbits two different objects; in previous "tours", like the Voyager missions, the observations were made during flybys. A similar option was considered a few years ago for Cassini, with a potential transfer to Uranus, which would have required a 20+ year cruise time. Eventually that option was discarded and it was decided to send Cassini on a collision course with Saturn.

In a different development, it turns out that the Pluto system (the good old planet, now a dwarf planet) has yet another moon. It was spotted with the Hubble Space Telescope, and the new satellite will most likely be studied by the New Horizons probe during its expected flyby of the system.

The Pluto system as shown by the latest Hubble observations.

If you add to all this the results being announced from EPS-HEP, there is no doubt that the past week has been one of the most exciting in a long time.

Friday, July 15, 2011

What is the relevance of the JWST?

The James Webb Space Telescope, the successor of the Hubble Space Telescope, is now under very serious threat of termination due to budget cuts. The story goes like this: last week, the House Commerce, Justice, and Science Appropriations Subcommittee recommended the cancellation of the JWST project. This was followed by the full Appropriations Committee approving the subcommittee's plan.

From a scientific standpoint this is a total disaster. We need the JWST if we are going to push the frontiers of current research back to the epoch when the first stars and galaxies formed. The scientific case has already been well explained by Julianne Dalcanton at Cosmic Variance, so I won't repeat it here.

But, from the point of view of the average Joe in the street, what is the justification for pouring truckloads of money into the JWST? Maybe the most poetic justification reads something like: it will enhance our comprehension of the origin of the cosmos and of the existence of life in it. Nonetheless I find that, while completely true, this kind of answer is easy to dodge for any opportunistic politician who wants to present himself as the "hero of the community" by "saving the hard-earned money of the community from falling into this resources swamp".

Well, this is utter nonsense. Let me tell you why. The principal outcome of massive scientific collaborations is not a stream of papers with the latest and greatest results about fundamental questions, but a stream of highly trained people who go on to contribute to society in countless ways. I cannot really estimate how many people earned a PhD based on Hubble's data or on research started by Hubble's observations; while a few of them remain in academia, most have gone into all other corners of life. Many of them have joined big corporations, or started their own, and contributed to the development of technologies that are now commercialized and create jobs for the people producing and manufacturing them. That is not to mention the secondary sources of income created when these people go and spend their earnings. So please keep in mind that by shooting down the JWST you are not only shooting down the few professional astronomers in your district, but also the training of some of the most skilled people in society, the kind of people who will later produce some of the best sources of income available to society.

If you are in the USA, please take five minutes of your time and contact your representative to keep the JWST alive; a vote of the full Congress is still required to terminate the project.

If history can give us some clue as to what can happen, let me mention the Superconducting Super Collider, the project that was supposed to lead experimental high energy physics in the late 20th and early 21st centuries. After its cancellation, not only were we as scientists left with open questions about fundamental physics such as the Higgs boson or superpartners, but the region of Texas where it was going to be built entered a local recession. Furthermore, the field is now led by CERN in Europe, ending decades of US dominance. Please don't allow this to happen again.

Wednesday, July 13, 2011

Blog v3

As any of you reading this has undoubtedly realized, the site has been completely overhauled. This is an attempt to bring it up to date with web standards. We now have an Atom feed for updates, LaTeX to MathML translation for maths, and a new theme. You won't see it, but the backend editor is also new and is supposed to produce cleaner code. Let me know if you find any glitches with the new design.

On an unrelated note: Neptune was discovered almost 165 years ago by Johann Galle, who was looking for it after Adams and Le Verrier had predicted its existence based on its effect on the orbit of Uranus. Since the orbital period of Neptune is 164.79 years, we can say that today we are celebrating Neptune's first birthday!
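
Just to check the arithmetic with a quick sketch of my own (dates approximate; assuming the commonly quoted discovery date of September 23, 1846 and 164.79 Julian years for the orbital period):

from datetime import date, timedelta

discovery = date(1846, 9, 23)            # Galle's observation in Berlin
period_days = 164.79 * 365.25            # one Neptunian year, in days (roughly)
first_birthday = discovery + timedelta(days=period_days)
print(first_birthday)                    # a date in early-to-mid July 2011

The exact anniversary depends on how you define it (heliocentric longitude versus calendar years), but it lands in July 2011 either way.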

Happy "birthday" Neptune!


Interestingly enough, from his observational notebooks we now know that Galileo had observed Neptune with his telescope, which was too rudimentary to actually show its disc and reveal it as a planet. Additionally, in a stroke of bad luck, he just happened to observe Neptune when its proper motion was least noticeable. Nonetheless, there is some evidence that he was at least aware that it moved with respect to the background stars. Unfortunately, bad weather prevented him from pursuing the issue further. It is interesting how a small set of circumstances shapes history, and not only the history of science.

Tuesday, July 12, 2011

Linux kernel 2.6.38 and Ubuntu Natty: The devourer of laptops

Some of you have undoubtedly installed a Linux distribution featuring kernel 2.6.38 (or a later version). This kernel ships with Ubuntu 11.04 Natty Narwhal, but also with Fedora 15 and some rolling-release distributions like Arch or LMDE.

While installing a fancy new distro carries the advantage of bringing you the latest packages, including the new fool-proof desktop environments that have recently been in the spotlight (Unity and GNOME 3), it will also expose you to one of the most annoying bugs I have ever dealt with.

Linux kernel versions starting with 2.6.38 changed the handling of Active-State Power Management (ASPM), which has resulted in a dramatic increase in power consumption. This might not be so evident on a desktop computer, but it can easily cut a laptop's battery life by a third, not to mention that it turns the machine into a portable frying pan.

To fix this we need to enable the pcie_aspm=force kernel option. The downside is that this might make some systems unstable or even prevent them from booting. Use this at your own risk.

A simple way to see if this fix will work for your recently-turned-frying-pan laptop is to enable the option for a single boot. In Ubuntu, select your Ubuntu entry in the menu that lets you choose an operating system just after turning on your computer and press e. This allows you to edit the boot options for this session. Locate something that looks like quiet splash and add pcie_aspm=force immediately next to it, separated by a space. If your computer boots and remains stable, you will notice that it heats up considerably less and that battery life is extended.
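
If you prefer a number over a feeling, you can compare the battery's instantaneous power draw before and after the change. Here is a rough Python sketch; the battery name (BAT0) and the files it exposes (power_now versus current_now/voltage_now) vary from laptop to laptop, so adjust the path to whatever you find under /sys/class/power_supply:

# Rough battery power-draw reading; run it while on battery power.
base = "/sys/class/power_supply/BAT0/"

def read_int(name):
    with open(base + name) as f:
        return int(f.read().strip())

try:
    watts = read_int("power_now") / 1e6        # reported in microwatts
except IOError:
    # some batteries expose current (uA) and voltage (uV) instead of power
    watts = read_int("current_now") * read_int("voltage_now") / 1e12
print("Discharging at about %.1f W" % watts)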

Now, to make this change permanent we need to edit the bootloader configuration. Open a terminal and enter:

gksu gedit /etc/default/grub

which will ask for privilege escalation (your administrator password). Then, look for this line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

and change it to

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=force"

Make sure that this is indeed how the line looks; we don't want to break the boot process. After that we need to update the bootloader:

sudo update-grub

And that's it, you have rescued your laptop from becoming your next broiler. Just reboot and the change will persist.

Tuesday, November 23, 2010

From the quantum world to the guy next door (part 1)

This post will be a little more technical than the usual ones. Nonetheless, I believe there is a "market" for it: students who are just learning quantum mechanics and might need to dispel much of the baloney that is told about quantum weirdness. Recently I was discussing with one of the co-bloggers of astronomers in the wild a limerick from David Morin's mechanics book:

When walking, I know that my aim
Is caused by the ghosts with my name.
And although I don't see
Where they walk next to me,
I know they're all there, just the same.

This is mentioned just as a side note when discussing the stationary action principle, specifically in the context of asking whether there is a deep reason behind it. The stationary action principle says that the quantity S, called the action and given by
\[ S= \int_{t_a}^{t_b} dt\ L(x,\dot{x};t) \]
should take a stationary value. That means that, of all the possible paths going from a to b, a classical particle will take the one for which the action is stationary (usually a minimum). This is the principle behind classical mechanics: the motion of everything we can see around us, from the stars in the sky to matter around a black hole, follows from it.
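
To make "stationary" a bit more concrete: demanding that S does not change to first order under small variations of the path gives the Euler-Lagrange equation, and for the textbook Lagrangian of a particle in a potential this is nothing but Newton's second law:

\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0, \qquad L = \frac{1}{2}m\dot{x}^2 - V(x) \quad\Rightarrow\quad m\ddot{x} = -\frac{dV}{dx} \]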

Hence, it would be great to know if there is a reason behind this principle. Well, yes, there is. It is deeply rooted in quantum mechanics, and its essence is captured in the limerick above.

So, the purpose of the next few posts will be to see how we can pass from the quantum description of the world to the kind of phenomena we observe every day when dealing with baseballs, pulleys and all that.

This is not a trivial question: every experiment has confirmed the validity of quantum mechanics as the correct description of our world, yet it is in stark contrast with the intuition we have all developed from observing macroscopic objects all our lives. At first glance, the two descriptions are radically different. To see how weird quantum behavior is for us macroscopic beings, let's consider one situation that exhibits most of the quantum subtleties: the double-slit experiment.

Consider a particle source S placed at a point A. In front of it, we place a screen C with two slits in it. Hence, we expect that any particle arriving at a second screen at point B, where arrivals are measured, must pass through one of the slits. This configuration is shown below.

Well, if we do this experiment with marbles, baseballs, grains of sand or any other classical object coming out of the source S, then the outcome is just what we expect. Namely, if we shut one of the slits, the distribution of arriving marbles at B will peak just in front of the open slit. The resulting distribution at B when both slits are open is just the sum of the peaks produced by the particles entering through each slit. This is shown below.


Here, the outcome of the double slit experiment with classical particles is shown. If we shut one slit, we get one sharp distribution (shown in blue). The outcome with both slits open is just the sum of the two peaks and it is shown in red.

There is a second kind of classical object that we can throw against our screen at C: waves. So, let's imagine that we place the whole setup in a pool and that the source at A stirs the water a little bit so it creates waves. In a realistic experiment we should place S very far from C, so that nice plane waves arrive at C, but let's ignore such nuances for our purposes. When the incoming wave arrives at C, each slit acts as a new source of waves and produces an outgoing wave. Since we have two slits, the outgoing waves will be out of phase in some places and in phase in others. This is better seen in the picture below.


The darker zones correspond to places where destructive interference happens, namely where the waves are completely out of phase and cancel out. The clear zones, on the other hand, are where the waves are in phase and constructive interference happens. The outcome of all this is that the distribution at B will be different from the one we got using marbles, as we now have an interference pattern, as shown below.

As you can see, we now have many peaks, and the biggest one is actually in front of a closed part of the screen C! This is just the result of the way waves behave: they can add to or subtract from each other, in stark contrast with marbles, since we do not expect one marble to cancel another!
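
If you like to play with this, here is a minimal numerical sketch of my own that contrasts the two cases: for marbles we add the probabilities coming from each slit, while for waves we add the amplitudes first and only then square. All numbers (slit separation, wavelength, distances) are made up for illustration.

import numpy as np

wavelength = 1.0                          # arbitrary units
k = 2 * np.pi / wavelength
d = 5.0                                   # slit separation
L = 100.0                                 # distance from the slits (C) to the screen (B)
x = np.linspace(-40, 40, 1000)            # position along the screen B

# distance from each slit to the point x on B
r1 = np.sqrt(L**2 + (x - d / 2)**2)
r2 = np.sqrt(L**2 + (x + d / 2)**2)

# complex amplitude contributed by each slit (spherical-wave toy model)
a1 = np.exp(1j * k * r1) / r1
a2 = np.exp(1j * k * r2) / r2

p_marbles = np.abs(a1)**2 + np.abs(a2)**2   # probabilities add: no fringes
p_waves = np.abs(a1 + a2)**2                # amplitudes add: interference fringes

Plotting p_marbles gives a smooth, fringe-free bump, while p_waves shows the alternating maxima and minima described above.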

So, if an electron behaves exactly like a marble, we expect that the chance of arrival at some point x on the target B will obey two simple rules:

  1. Each electron that passes from the source S to some point x on B goes through either hole 1 or hole 2.
  2. The chance of arrival at x is the sum of two parts: P1, the chance of arriving through hole 1, plus P2, the chance of arriving through hole 2.

I repeat, this is the case for classical particles; even in classical mechanics we can get a different behavior by using waves, as illustrated above. Now, we tend to imagine electrons as lil' charged things, and with a big degree of naivety we would expect these two rules to extend to them.

Well, what happens when we use electrons? It turns out that we get a distribution identical to the one we get from waves! This is a good place to stop today, but nonetheless I should tease you a little.

The questions that arise immediately are: is the electron a wave? If so, how can we reconcile that with our intuition that it should pass through one hole? How can we measure that? As of now, we could actually say that the electron is indeed a wave. Nonetheless, that contradicts our picture of electric current as a flow of electrons, which has proved extremely successful. Additionally, when electrons are used to strip electrons from a metal the behavior is consistent with particles (the analogous process with photons is the well-known photoelectric effect). Actually, all the subsequent discussion for the quantum case can be carried out with photons as well. This kind of behavior, with electrons acting like confused teens unable to make up their minds about being particles or waves, is usually stated as the wave/particle duality. Hopefully, by the end of this series of posts you will agree with me that such a duality is a rather childish way to describe things: electrons are particles, which get their peculiar behavior from the way probabilities/amplitudes add. Furthermore, the mechanism for getting the amplitude will lead us directly to the classical world with which we are all so familiar.

Besides, now that we cannot use the two rules stated above, what rules should we use to calculate the distribution at B? This last question is essential: the big difference between the classical and quantum worlds lies in the way we calculate probabilities. That is the topic for the next post. Meanwhile, you can read what happened when this experiment was performed in 1998 by E. Buks et al.

For all of you able to read Spanish, I decided to continue this "story" on my Spanish website: sinédoque. This was due to the capabilities of Blogger, which were too meager at the time, especially regarding mathematical expressions. I have just implemented a new system on this blog, so maybe some more "math heavy" posts will appear here in the future. Stay tuned.

Monday, February 15, 2010

The Towers of Hanoi

I don't know why but I recently came back to this old, interesting little problem. For those not familiar with the puzzle, here's an excerpt from Wikipedia:
[The Towers of Hanoi puzzle] consist of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.

The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:

  • Only one disk may be moved at a time.
  • Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
  • No disk may be placed on top of a smaller disk.
Here's an image of how the initial configuration looks with 8 disks (also from Wikipedia):



Also, here's an applet of the puzzle so you can try to solve it yourself: MazeWorks - Tower of Hanoi. Try with 3, 4 and maybe 5 disks, and see if you can solve it with the fewest number of moves.

You'll notice that, according to that applet, the minimum number of moves for the puzzle with N disks seems to be 2^N-1. The other day it occurred to me that this can be proven by mathematical induction easily, and that the procedure for doing so is actually the recipe for a recursive algorithm to solve the problem. I'll talk about these two items separately.
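
Before the proof, here is a quick sanity check of my own that counts the moves made by the obvious recursive strategy and compares them with 2^N-1:

def min_moves(n):
    # moves(n) = moves(n-1) + 1 + moves(n-1), which is the structure of the proof below
    return 1 if n == 1 else 2 * min_moves(n - 1) + 1

print([(n, min_moves(n), 2**n - 1) for n in range(1, 6)])
# [(1, 1, 1), (2, 3, 3), (3, 7, 7), (4, 15, 15), (5, 31, 31)]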

Proof that it takes 2^N-1 moves to solve the puzzle.

We want to prove that a solution to the Towers of Hanoi problem requires 2^N-1 moves, where N>=1 is the number of disks to move. Let's proceed by mathematical induction.

1) Base case: we prove the claim is valid for N=1.

If there is only one disk, moving it to the correct peg trivially requires a single move. And 2^1 - 1 = 1, which shows that what we want to prove is true for the base case.

2) Induction Hypothesis: we assume the claim is valid for N=k, where k >= 1 is some number.

This assumes that the solution to the k-disk problem takes 2^k - 1 moves. That is, we assume it takes 2^k - 1 moves to transfer k disks from one peg to an empty peg, using a third empty peg as auxiliary. Note that because of the rules, an "empty peg" is equivalent to "a peg containing one or more disks that are all larger than the disks we're moving".

3) Prove the claim holds for (k+1) disks: we must show that if the induction hypothesis is true, then we can prove that the claim is true for the (k+1)-disk problem as well.

In other words, we want to show that if it's true that the k-disk problem requires 2^k-1 moves to solve, then the (k+1)-disk problem requires 2^(k+1)-1 moves to solve.

So we want to solve the (k+1)-disk problem. Let's call the peg on which the disks initially reside the S (source) peg, the peg to which we want to transfer all the disks the D (destination) peg, and the third peg the I (intermediary) peg, which we'll use as auxiliary peg during the process. Then, we proceed as follows:
  1. Move the upper k disks from the S peg to the I peg. This is possible since we assumed it in the induction hypothesis, which also establishes doing so takes 2^k - 1 moves. This leaves the largest disk alone on the S peg.
  2. Now that it's possible, move the largest disk from the S peg to the D peg. The disk is now in its final position, so we don't have to move it again. This takes 1 move.
  3. Move the k disks that were on the I peg to the D peg. Again, by the induction hypothesis, this takes 2^k - 1 moves. It doesn't matter that we're moving the disks to a non-empty peg, since the disk on the D peg is larger than any disk we're moving and hence doesn't interfere with the task. The S peg is used as the auxiliary peg in this process. After this, the problem is solved, for all disks lie on the D peg.
So what was the total move count? In step 1 we used 2^k-1 moves, in step 2 a single move, and in step 3 2^k-1 moves again. The total:

Moves = (2^k-1) + (1) + (2^k-1) = 2*(2^k) + (1-1-1) = 2^(k+1) - 1

And so, we have proved that if it takes 2^k-1 moves to solve the k-disk problem, then it takes 2^(k+1)-1 moves to solve the (k+1)-disk problem. This, together with the base case whose validity we verified in the first step, makes the claim valid for all N. Q.E.D.

Optimality

Note, however, that this proof does not by itself guarantee that this is the optimal number of moves needed to solve the problem, only that the N-disk problem can be solved in 2^N-1 moves. Luckily, the optimality of the solution can be justified formally as well.

Consider the following modified proposition, which we'll prove by induction too: "It takes 2^N-1 moves to solve the N-disk problem, and this is the minimum number of moves of any solution". This is trivially true for the base case (N=1).

To prove the induction step, consider the following argument. We want to show that solving the (k+1)-disk problem requires a minimum of 2^(k+1)-1 moves.

But consider that, no matter how you solve the problem, at some point you'll need to move the largest disk to the D peg. Since the largest disk cannot be placed on top of any other disk, this means that the remaining k disks must all be on the I peg when this happens. There is no other way this could be. So any solution strategy, no matter what it is, must pass through this common state.

Now, by the (modified) induction hypothesis, moving k disks from one peg to another is done optimally in 2^k-1 moves. So putting the puzzle in the configuration required to move the largest disk to the D peg requires a minimum of 2^k-1 steps. Similarly, after the largest disk is moved to the D peg, transferring the remaining k disks to the D peg also requires a minimum of 2^k-1 steps.

Finally, the middle step, moving the largest disk to the D peg, is trivially optimal, since we're moving a single disk.

With this, we prove that if the solution of the k-disk problem requires a minimum of 2^k-1 moves, then the solution of the (k+1)-disk problem requires 2^(k+1)-1 moves, and that it is also the minimum number of moves.

The uniqueness of the optimal solution is also proved with the same argument, since moving a single disk (both in the base case and in moving the largest disk to the D peg) is both optimal and the unique way to do it.

Solution Algorithm

A nice thing about this proof by induction is that it naturally produces a recursive solution algorithm. Basically, we define a function, let's call it Solve(n, A, B, C), that moves n disks from peg A to peg C, using peg B as intermediary. The function is defined recursively following the logic of the proof above (Python pseudocode):

def Solve(n, A, B, C):
    if n == 1:
        A.transferTo(C)
    else:
        Solve(n-1, A, C, B)
        A.transferTo(C)
        Solve(n-1, B, A, C)


This is almost full Python code; it's only missing the data structure definitions needed to handle the disks and pegs. But it shows the heart of the algorithm. Note how the function calls itself recursively twice: first to move the top (n-1) disks to the intermediate peg, then to move those (n-1) disks onto the final peg. In each call, the peg order is changed to reflect which pegs act as source, destination and intermediary.
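
For the impatient, here is one self-contained way to fill in those missing pieces, using plain Python lists as pegs (my own minimal sketch; the full code linked below may organize things differently):

def solve(n, source, via, target):
    # Move n disks from 'source' to 'target', using 'via' as the spare peg.
    if n == 1:
        target.append(source.pop())
    else:
        solve(n - 1, source, target, via)
        target.append(source.pop())
        solve(n - 1, via, source, target)

A, B, C = [4, 3, 2, 1], [], []   # disk 4 is the largest, at the bottom of peg A
solve(4, A, B, C)
print(C)                          # [4, 3, 2, 1] -- all disks transferred, largest at the bottom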

You can get the full Python code here.

Monday, January 11, 2010

LHC alarmism debunked

I just attended a seminar by Michelangelo Mangano titled "Black holes at the LHC: safety and society". While, of course, the bottom line was that there is no real risk of the LHC destroying the universe, it is still worthwhile to discuss the issue a little bit.

So, here is a little compendium of reasons why you shouldn't be scared of LHC-produced black holes:

  • The LHC is not likely to produce black holes, simple as that. Most of the models predicting such scenarios involve additional dimensions and require a higher center-of-mass energy than is available at the LHC. Still, there are indeed some scenarios where black holes could be produced at the LHC; they are not very plausible, but a case can surely be made for them.
  • Black holes are unstable: black holes decay via Hawking radiation. Particles can be created out of "nowhere" in the vacuum; a rather sloppy argument essentially says that the uncertainty principle allows particles to pop out of nowhere as long as the energy loan required to create them is paid back really fast. Now, consider one of these virtual particles being created just outside the black hole and the other one just inside. The net effect for an observer outside is that the black hole is radiating, and in the process it is losing mass, which escapes in this radiation. This radiation is not only electromagnetic; in principle a black hole can radiate anything.
    This makes the black hole "evaporate" as it loses mass via radiation. It turns out that the lifetime of a black hole is proportional to the cube of its mass, so for black holes with really small masses, like the ones speculated to be produced at the LHC, this yields such a short lifetime that there is no reason to worry at all (a back-of-envelope estimate is sketched after this list).
    We can now be cynical and say that QFT in curved spacetime (the fancy name for the framework used to calculate these kinds of things) is not really experimentally established; maybe there is something wrong with it and black holes could be stable. Then you should consider that...
  • Black holes produced in particle collisions are charged: collisions at hadron colliders happen between the charged constituents of protons, and conservation of charge requires that the final product (in this case a black hole) remains charged. Any charged particle, even a microscopic black hole, interacts with matter in a way that has been thoroughly studied and is summarized in the so-called Bethe-Bloch equation for the range of energies likely involved at the LHC. The result is that the black hole will just radiate all its energy away within a short distance. But hey, what if we are somehow missing something in the creation of microscopic black holes and it is possible to produce stable, neutral ones? This looks extremely unlikely, but let's go ahead and see what can happen if they are indeed produced.
  • Accretion rates for microscopic black holes are negligible: let's say we have a slow-moving, stable, neutral black hole. Can it eat the Earth and cause all other kinds of havoc? Well, if we model mass infall into the black hole via Bondi accretion (a fancy name for spherical accretion), it turns out that gaining any considerable mass, say 1 ton, requires a huge amount of time. I don't remember the exact number right now, but it was of the order of a billion years. If a predator capable of running at most 1 mm per year were chasing you with bad intentions, you wouldn't be scared, would you?
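
To put a (very rough) number on the "short lifetime" mentioned above, here is a back-of-envelope estimate using the standard four-dimensional Hawking evaporation formula, t ~ 5120*pi*G^2*M^3/(hbar*c^4). The 10 TeV mass is just an illustrative guess for an LHC-produced black hole; extra-dimensional scenarios change the numerical factors, but not the conclusion:

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34       # reduced Planck constant, J s
c = 2.998e8            # speed of light, m/s
M = 1e4 * 1.783e-27    # ~10 TeV/c^2 expressed in kg

lifetime = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
print(lifetime)        # about 5e-85 seconds -- gone essentially instantly
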
Well, despite these arguments there are still people trying to halt the LHC, and the issue has even been brought to court, where it has always been dismissed on technicalities, mostly because the LHC is outside the courts' jurisdiction.

So, why am I telling you all this? Well, wait for tomorrow...