Wednesday, 12 November 2014

Hey grrrl... the reasons why I'm furious about ESA's #shirtgate

The dust hasn't even settled yet on the amazing, incredible feat of human achievement - we have landed on Comet 67P.

HOORAH! Let me take the moment (ok many, many moments) to reiterate how wonderful this is.

And yet it was marred a little bit for me by the ESA #shirtgate incident (note that the hashtag has been reused from a previous incident on the internet, sorry about that folks).

When I opened up my social media this morning to get ready for the pre-announcements and hype (because these moments are what I live for as a scientist) I was shocked by something I saw in a colleague's post. She mentioned that the Rosetta Project Scientist Matt Taylor (@mggtTaylor) was on multiple media sources (an official BBC video, his own, and ESA's twitter feed etc.) wearing a crazy shirt. And sure enough, when I looked it up, this is what I saw:


Ok, wooaah.

There were also articles about the fact that Matt wants to challenge stereotypes of scientists and openly wear his tattoos - and this is something I whole-heartedly support. This is something I've blogged about before. It is extremely important to me that we concentrate on the science that someone has to offer rather on their appearance, because scientists come in all sizes and shapes and we should let them be just like everyone else.

So isn't this a double standard? I spend time writing about how I should be able to wear what I want as a scientist, and yet here I am, really upset by his shirt?

Here is the really important reason why it is different, in case it wasn't immediately obvious: it objectifies women.

Matt's shirt portrayed several images of a naked woman, allegedly as a tribute to a sci-fi character.
He also allegedly said  on air (and this is something I'll admit I didn't hear myself - it was relayed to me):  "She's sexy, but I didn't say she was easy." [Edit: I've since been shown the link where Matt gives the "sexy" quote. He's talking about Rosetta, not the woman on his shirt! Thanks to Dave for reminding me to get the facts straight.]

Now - we have a huge problem getting women and girls into STEM fields. And we spend lots of energy talking about why women aren't in science and should be (note: a Google search will yield many articles; that is just a recent one!).

And yet, here is a male scientist at a predominantly male science press conference from a male-dominated field - that is going to be broadcast to schools around the world - wearing a shirt objectifying women.

So, obviously the internet exploded. I, and many other people, tweeted about it and were very angry, and later Matt changed his shirt (thank goodness, before the most watched part of the landing).
But this raises the question: why did Matt choose to wear the shirt? Or rather, did he think about the message it would send? Did he care? Did anyone at the press conference even look at the shirt?

I hope that in the coming days we will hear more from Matt and/or ESA, but I feel like now I need to redouble my efforts to remind young women interested in science that yes, your mind is important. That yes, you are capable of being taken seriously in STEM fields. That yes, we do want you here (come and join me). And that no, your body isn't what defines you.

Until then, I'm going to look at pictures of the glorious mission and hope my anger subsides. It is a great day for science. It is not a super day for getting women into science.

[Edit: Thanks to Summer and Emily at @startorialist for some happier space shirt designs to brighten my day - and more here]

Monday, 29 September 2014

"Black Holes don't exist:" giving context to sensational science news

A friend of mine recently pointed me to this article about how "Black Holes don't exist." The article concerns two recent papers

http://arxiv.org/abs/1406.1525
http://arxiv.org/abs/1409.1837

Since it came out the following edit has been added to the article:

"Due to some confusion, we feel it is important to clarify. The notable word in the title is “mathematically.” In science, there are conflicting predictions that come from different theories, assumptions, and equations–different equations result in different outcomes and different proofs. In short, one set of assumptions leads down one path and give us new (potentially important) things to consider. But there are many paths. It seems that many people were not sure how to situate or read these findings. Hopefully, this clarifies things. We’d like to apologize to anyone who took this out of context or who was confused by the implications. In the coming days and weeks, more physicists will weigh in with their findings. Things will update as they develop. Science on."

And this edit is actually exactly why I wanted to discuss the article. What does a claim like this mean, and how can a non-expert interpret it? The following is basically word-for-word what I wrote in reply to my friend who sent me the article.

I skipped reading the article at first [I have since read it, and you don't learn all that much], as lots of articles like it miss the point with physics. People love to say "Einstein was wrong!" or basically "[New sensational thing about physics hopefully with the word "Quantum" somewhere!]" while not appreciating what is really going on. And it annoys me. A lot. Anyway, onwards with constructive things...

I'm no expert, so I'll be arguing from authority, basically. First things first, Mersini-Houghton, the author, appears to be a totally respectable physicist who has published highly cited work on a variety of topics in the past, and works at a respectable institution. I'm in no way trying to slander her or her work. It's completely within the confines of work on these topics (black holes, the information paradox), as far as I can tell. Her recent work on these topics isn't highly cited (so far: and that is only a few months, but physics moves fast these days), but citations aren't a perfect gauge of a work's relevance. The important point to take away is: the author is certainly no two-bit crank posting on viXra (I hope I don't get bombarded by any more cranks than usual, but just look at the kind of stuff that goes on there! If you've never lost a few hours on viXra, I highly recommend it.)

What I want to discuss here is how the paper was reported, and more importantly how non-physicists should read reporting on physics in general. It's all about context, and what we mean by "exist" I think...

The paper referred to in the article was published in a respectable journal, so it went through peer review and someone with more knowledge in the field than me thought it was correct. However, let's put it in context. There has been a whole lot of interest in issues related to black holes recently, thanks to a famous paper on "firewalls." (http://en.wikipedia.org/wiki/Firewall_(physics)) There seems to be something we don't understand about black holes in that they lead to a paradox that requires giving up one of three major pillars of physics (three according to wikipedia, I thought it was just two: unitarity and locality, but there ya go). The name "firewall" comes from the easy way out: there is a wall in the way that prevents anyone seeing the paradox. I'm not sure how seriously the firewall itself is taken. I think of it as a last resort to sweep things under the rug. But like I said, I'm no expert.

This debate over what to do with firewalls has led to a huge number of papers proposing different resolutions to the problem. To give that a number, the original firewalls paper has gotten 249 citations in the 2 years since it was published. Experimental papers, and confirmed theory papers, get more than that. But for a "pure thought" paper, that is astounding.

(Although, the original AdS/CFT paper, a paper on pure thought depending on your take on AdS/CFT applications, just became INSPIRE's most cited paper. It took over the model of leptons, which is manifestly about the real world, but there ya go, that's physics!)

Many people have said we may need to do away with black holes in one way or another. For example, Hawking argued that there is no paradox if black holes aren't "eternal." Another solution is that black holes are really fuzzy quantum or turbulent things, and that makes the calculation that led to the paradox incorrect.

Okay, enough context. Mersini-Houghton's idea is one of these many that say black holes never form. The idea is that when a star collapses on the way to forming a black hole, the "Hawking radiation" kicks in, causing a pressure that stops the collapse before a true black hole forms. If it never forms, there is never a paradox. In her second paper out this month she works numerically with a collaborator to show this happens in realistic situations (for example, breaking the assumption of spherical symmetry).

So, is Mersini-Houghton correct? Her first paper came out in June and hasn't really been picked up in the community. The excitement over firewalls has died down, so maybe it's that. But if she were correct, it would be enough to set people off. It hasn't, so I judge there must be something about it that isn't compelling. Maybe she made some simplifying assumptions that people believe would make her argument wrong in a realistic situation? I'd like to ask an expert. The new numerical work makes me think she is correct, within the parameters she's set herself at least. The question is whether those parameters are the right ones.

But do I think it means anything? I'm inclined to say "no" for the following reasons:

1) It appears that we do see black holes out there in space. I'm not familiar with the observations, but as I understand it, it is an established fact. Maybe Mersini-Houghton can get around it, and her system still forms these things, but they aren't "formally" black holes because they miss the little paradoxy bit. Then, for all practical purposes (i.e. in astrophysics), it is a case of walking and quacking like a duck, but without the particular nuanced existential consequences. Practically, then, "black holes" still exist. This has to be true of all the other firewall solutions. The black holes are still there, because we see them, but some little bit in the middle is subtly different.

[edit: the Event Horizon Telescope hopes to directly image black hole event horizons in the coming years. Things like "Sagittarius A*" are pretty good candidates for them. Any other comments about direct evidence for black holes are very welcome!]

Now you have to go out and find an observation that can confirm that. People are trying, but it ain't easy (see this interesting proposal to tell proposed firewall solutions apart observationally using gravitational lensing!). An example discussing Mersini-Houghton's work is that the "bounce" of the star instead of forming a black hole that she predicts could be the source of some high-energy cosmic-ray type things (fast radio bursts in this case). We see stuff like that, and by looking at them in detail maybe you can see what they came from. But this is messy astrophysics, and the range of possible explanations for these things is often very wide.
I'd like to see whether (i) Mersini-Houghton and co can still make real things that look enough like black holes that they are consistent with what we have seen already, and (ii) if they can make any novel observational prediction to test their theory. If the answer to either of these questions is "no" then they are dead in the water.

2) On a purely philosophical level, Mersini-Houghton's solution doesn't solve the firewall paradox in my opinion. It may solve it "in real life" if there are no collapsing stars that form tricky black holes. But it doesn't solve the problem for theory. In theoretical physics you can still set up a "thought experiment" and if that makes your theory inconsistent then you are in trouble. The whole firewall thing began with a thought experiment. Black holes are solutions to Einstein's theory; as long as they are, you can always imagine one just sitting there. Conjure it out of nowhere. It doesn't have to be made by a collapsing star (because in the thought experiment you also conjured the star from nowhere too).
Mersini-Houghton's solution doesn't alter Einstein's theory or quantum mechanics, and so you can still do that thought experiment, and the firewall problem is still there.

So, Mersini-Houghton hasn't solved that problem in my opinion. And indeed, black holes still "exist" in the theory, so for many theorists they are still just as relevant as thought experiment probes of whether we really understand the universe. And many theorists are platonists anyway, so black holes, even of the thought experiment kind, do "exist."

3) Outside of thought experiment, as long as black holes are still this "formal" solution of Einstein's equations (and they are in Mersini-Houghton's theory as far as I can tell) then they do still *really* exist. This is thanks to quantum mechanics, where things are allowed to "pop in and out of the vacuum" (http://en.wikipedia.org/wiki/Virtual_particle). Black holes do the same thing in quantum gravity, at least in string theory they do, as far as I know (I've heard people argue that they needn't in other theories, but I don't find these arguments compelling: in quantum theory you need a very good reason for things not to happen, or else they do). So, black holes still exist at the quantum level even in her theory.

Does that have any relevance to real life? Well, virtual particles do. We have very strong evidence that virtual particles are important. They have observable effects on particle collisions. They are the reason the electron has the charge and magnetic properties it does, so probably phones and microchips and such wouldn't work without them. (Does anyone have a better example of the reality of virtual particles? I mean, an accessible one, not just "electroweak precision observables, duh")

What about virtual black holes? (i) I don't know if they suffer from the firewall paradox, because of the popping quantum business, so maybe they aren't a problem. (ii) They are much harder to see. They are intrinsically quantum gravity things, and that is, as far as we know, not relevant to anything we have a hope of measuring on earth. But there is hope, and this is exactly why I study cosmology: the early universe is a lab for very high energy things, and we can hope for signals of quantum gravity in the sky (in fact this is a lot of what Mersini-Houghton did in her earlier career, too).

So, in summary:

* There are lots of theories like this. Maybe this one's right, maybe it isn't.

* There are at least "black hole-like" things out there in space that we have seen.

* Philosophically, black holes proper still exist in this theory thanks to quantum mechanics.

* We need to come up with observational and experimental tests and consequences of all this. 

Thursday, 11 September 2014

The "Yes! And?" of science.

I personally believe that the academic "brand" of Impostor Syndrome (IS) is particularly tricky to deal with because underlying it is a certain type of arrogance. It took me quite a lot of time with a coach (thanks to the wonderful SupporTED program I participate in through the TED Fellowship) to realise that I really was arrogant in my Impostor Syndrome: anyone could say what they like about me being talented, but I was holding onto the belief that I was the only person qualified to make judgements about myself. So with a sleight of hand, I could disregard your positive statement. Easy Peasy. My coach had to bring out the Logical Data Big Guns to deal with me, but she did so, wonderfully. She showed me this internally arrogant attitude was seriously flawed. My data analysis software, my ability to process external feedback, was broken. I realised that I was rejecting data points based on my faulty Bayesian prior, and then refusing to quote the prior when making statistical inference. I know! I know! Bayes would be rolling in his grave! I was shocked, and chose to rename the problem Self-Data Malfunction.
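The Bayesian analogy can be made concrete with a toy calculation (entirely my own sketch, with made-up numbers): a sufficiently extreme prior makes even repeated positive feedback barely register.

```python
# Toy Bayes update (my own illustrative numbers, not from any real data).
# H = "I belong here"; each piece of positive feedback is taken to be
# 4x more likely if H is true than if it is false.
def posterior(prior, likelihood_ratio, n_feedback):
    """P(H | n pieces of positive feedback), via the odds form of Bayes' rule."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n_feedback
    return odds / (1 + odds)

open_minded = posterior(0.5, 4, 5)   # fair prior: evidence wins, ~0.999
impostor = posterior(1e-6, 4, 5)     # "I don't belong" prior: same evidence, ~0.001
print(open_minded, impostor)
```

The data are identical in both cases; only the unquoted prior differs, which is exactly the malfunction.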

So, if you know that this part of yourself is faulty and you want to repair it, what can you do? Well this is all happening subconsciously to some degree, so it isn't a case of just hearing and accepting the opinions of others. If it were that easy, I would have done it already!

When talking about the issue with people, I often heard a phrase that I realise was intended to be helpful, but to me expressed exactly the wrong idea: "fake it ‘til you make it."

The idea is that even if you don't feel worthy to be in your job, position of authority or degree program, you just "fake it" and act like you are worthy until some time later you realise, hey - you are in fact the woman who deserves to be there! And there are lots of strategies online and in books to help you build up the skills to "fake it".

But this just hit right to the core of my Self-Data Malfunction. If I was "faking it" at all, surely there must be some truth in my "you don't belong here" Bayesian prior? So then maybe my self-data analysis software was right after all! Cue the spiral of non-productive thinking.

And then I remembered a wonderful thing I've learned from doing improvisational comedy (which, by the way, I highly recommend - it's like an emotional version of walking in traffic: all the excitement, none of the physical harm). The improv technique is the principle of "Yes! And?"

Here is how it goes.

Say you’re doing an improv scene with someone on stage and they suggest something, like they are your long-lost sister, or the floor you are walking on just happens to be made of fire. Rather than rejecting it outright for being crazy (as these improvised suggestions often are), you imagine and accept the universe they've just created. You say "Yes!" to the idea. And? Then you run with it!

The "And?" part means that you build on it and immerse yourself in it. That often involves justifying the suggestion they just made - making it work within the context of the scene and your established characters. And then, ‘hey presto!’, you're doing improv.

When I was thinking about the Self-Data Malfunction, I realised that rather than faking it 'til I make it, I can "Yes! And?" my life in academia. It is incredible what that subtle change in emphasis did for my outlook on academic life.

So, what happens when you find yourself on the shortlist for a job you didn't think you could possibly get? You say yes! And? Go give a great talk/interview! You now live in the reality where you are a viable and attractive candidate for the position. Yes! And?

What about when you think you aren't good enough at writing this code, doing this derivation, finishing this paper? You remember that yes, you already got here, and you have skills that will enable you to tackle the task. And... then you go and smash it!

What happens when you are invited to submit that review paper or chapter and you feel like they may have asked the wrong person by mistake? You remember that yes, you have interesting things to contribute. And you now live in the universe where people want to hear/read them.

And what happens when someone like me wants to write about impostor syndrome, but there have already been great posts by incredibly smart, talented and accomplished men and women (for example John John’s post, Amy Cuddy's post on body language and how it can change your life, Ed Bertschinger's post on his own struggle) on the subject? What if I don’t yet have a faculty post, and the authority that comes with that to be able to write about impostor syndrome without fear of the effect it may have on people’s perception of me?

I remember that yes, I think I have something new to add to the mix, and then I remember that as a graduate student and postdoc I would love to hear from someone who wasn’t so accomplished or high up the academic ladder to tell me about things they’ve learned and are dealing with. And so I write this here blog post!

Does it mean you will always then succeed at things? Definitely not. I imagine your rate of success may be exactly the same as before. But your rate of trying new things, and putting yourself out there and taking risks will definitely improve, and with more opportunities come more chances to do an awesome job and succeed. And as we all know, it's all about statistics really.
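The statistics point can be put in toy numbers (mine, purely for illustration): hold the per-attempt success rate fixed, and expected successes scale with the number of attempts.

```python
# Illustrative numbers only: a fixed per-attempt success rate means
# expected successes grow linearly with the number of things you try.
rate = 0.2             # chance that any one risky opportunity works out
before, after = 5, 20  # opportunities taken per year, before and after "Yes! And?"

print(before * rate)  # 1.0 expected success per year
print(after * rate)   # 4.0 expected successes, at the very same success rate
```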

It isn't easy to do all the time. The "No, but" voices are much more skilled and generally shriek banshee-like in my head, but this feels to me like a much more holistic way of enabling me to live and grow into my career and my life. The change is slow, but what I find happens is that I start to really enjoy new challenges and scary things, not because I’m trying to prove myself, but because I enjoy taking that journey to the “and” part of myself and find that it isn’t so crazy a world in the first place.

So…. Yes! And?


Wednesday, 21 May 2014

BICEP2 and Axions. A few comments.

After our paper on Axions and BICEP2 came out (here) we were contacted quite a bit by various media outlets for comment. One article appeared in Nature News. There is another due to appear in Quebec Science tomorrow. All the answering of interview questions made me think quite hard about explaining this business in a manner understandable by the lay person, and I think I got quite good at it. So I've decided to reproduce for you here the transcript of the interview I gave for Quebec Science. Their article only used a few quotes of mine in the end, but this here is the whole shebang!

(P.S. Sorry for the weird formatting: I'm not really sure what happened)


Monday, 17 March 2014

B-eautiful tensors

That's what BB said.

Yes, I've been waiting my whole life to make a post title like this.
But seriously, if you haven't been hiding under a rock this morning you will have noticed the internet go crazy for the detection by BICEP2 of tensor modes, the 'smoking gun' of inflation. Even the NYTimes got in on the action.

The detection is parameterised by the tensor-to-scalar ratio "r", the ratio of tensor modes to the usual scalar modes whose spectrum we have characterised well with experiments like WMAP, Planck and ground-based experiments like ACT. This detection is r = 0.2 (+0.07/-0.05, where the two numbers give the upper and lower 68% confidence intervals). This means that the detection is significantly non-zero. Why hello, tensor modes.
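As a very rough sanity check on "significantly non-zero" (assuming a Gaussian likelihood, which the real r posterior is not), you can ask how many lower error bars r = 0.2 sits from zero:

```python
# Back-of-envelope significance of the quoted detection: r = 0.2 with a
# lower 68% error of 0.05. This Gaussian treatment is only indicative:
# the true likelihood for r is asymmetric and bounded below by r = 0.
r, sigma_lower = 0.2, 0.05
n_sigma = r / sigma_lower
print(n_sigma)  # 4.0: naively a ~4-sigma preference for non-zero tensors
```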

The B-mode polarisation spectrum is shown here below, where all the other limits are just that, upper limits. This is pretty awesome if you think about how this fits in with all the efforts of so many.
Figure 1. The BB-mode spectrum from BICEP2 with previous data.


This detection is really exciting, and has implications not only for the specific theory of inflation and the kinds of models it supports - it also allows us to place constraints on other physics. For example, my colleagues David Marsh, Dan Grin, Pedro Ferreira and I wrote a paper investigating what the detection would mean for axion-like particle dark matter. Such a large value of r places a constraint on the energy scale of inflation, H_I, which in the axion model places constraints on the initial misalignment angle - leaving a model with a high level of fine tuning (fine tuning in physics is generally considered a bad thing: you don't want to have to tweak your model to give you something reasonable, you want that reasonable thing to emerge organically). If we consider very light axions, then this constraint on r tells you about the fraction of the total dark matter that can be made up of these axion-like particles (as a function of their mass).

We show that this new constraint (indicated by the red curve) limits the fraction of the dark matter that can be made up of axions... which helps us rule out parameter space (which is a good thing!) You can read all about it here.

While the claimed detection of B-modes from BICEP2 is awesome and very exciting, it is also important to remain skeptical about possible systematics and issues with the detection. It is a very tough game, and such an important result that we need to make sure we pass all the possible tests we can throw at it. I for one am a little worried about leakage between temperature and polarisation in the spectrum. If you look at the cross-correlation between this measurement and the BICEP1 data, it seems that there is excess power on small scales (large multipoles).


Now it bears repeating that the BICEP2 result on r is only based on the scales 30 < ell < 150, but these high-ell issues do need to be addressed, as leakage could bias your signal high (making the evidence for tensor modes look stronger).
Another thing to worry about is foregrounds. The team have presented reasons why they think foregrounds are not an issue for a signal so large, and it looks like they've done their homework, but I'll spend the next few days digesting the paper in more detail.

Also, this is such a large signal that we need to think about why other experiments have not seen it. In fact, if you consider the figure below from their paper:

you might be worried about a conflict with the results presented by the Planck team last March. First of all, this plot is made by marginalising over the running of the spectral index, so it is beyond the "vanilla" model + tensor modes (the running is an extra parameter, giving two free parameters relative to the base LCDM model without tensor modes at all).

So, the bottom line: I am excited by this (and so should you BB!) but there is more to understand and this result needs to be battle tested and confirmed. Long live the scientific process!!

To BB or not to BB.

Ok, I'm done. Happy BB-day all.

Monday, 17 February 2014

Belief in Quantum Gravity (and Cosmology)

Recently a few discussions have alerted me to the role of belief when it comes to theories of quantum gravity. This comes about essentially because of the huge energy scales involved in quantum gravity: we have no (direct) experimental access to the Planck scale.

Firstly, what is the Planck scale? We need a short lesson on units to get there. The Planck scale is what we assume to be the natural scale in gravity. As a mass scale it is approximately the square root of 1/G, where G is Newton's constant (I normally prefer to include a factor of 8 pi and call this "reduced Planck scale" simply "the" Planck scale, but that is a matter of preference, although as we are discussing, preference is a driving factor here...). Planck noticed that "natural" units for physics can be established based on a few fundamental constants, that is, we measure things in units of those constants. The first is Planck's constant itself, h (or "h-bar" if you divide it by 2 pi), which measures units of angular momentum (Joule-seconds in SI), and is the fundamental constant associated to quantum mechanics. Next is the speed of light, c, which measures units of speed (duh!) (metres per second in SI), and is the fundamental constant associated to relativity. Finally, then, comes Newton's constant, G, which measures the force of the gravitational field of a body of fixed mass (per unit distance squared from that body, per unit mass of that body, per unit mass of the test particle feeling the force, which all follows from Newton's famous law of gravitation). Newton's constant also appears in Einstein's theory of general relativity, and so is associated to all gravitational physics (it is inserted by hand into general relativity to fix the units and the weak limit, but by consistency carries through the rest, in all that spectacularly verified glory).

Here's where the fun starts: we can measure *all* dimensionful quantities in physics in terms of these three constants. Let's focus on the Planck mass. First of all, notice it involves masses, in particular, two masses, and so mass squared (hence why we took the square root above). All the other things it involves can be expressed as appropriate powers of c and h. We can get acceleration from using the units of c and part of h (the seconds bit), and we can also use c (via E=mc^2) to turn energies, i.e. the Joules part of h, into masses. That leaves G just a measure of 1/mass^2, and the mass it measures is the Planck mass.
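The dimensional analysis above can be checked numerically. Here is a minimal sketch using standard SI values of the constants (the code and the rough conversions are mine):

```python
import math

# Planck mass from hbar, c and G: m_P = sqrt(hbar * c / G).
hbar = 1.054571817e-34  # J s (reduced Planck constant)
c = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 / (kg s^2)

m_planck_kg = math.sqrt(hbar * c / G)
# Convert the rest energy m c^2 to GeV (1 eV = 1.602176634e-19 J).
m_planck_gev = m_planck_kg * c**2 / 1.602176634e-19 / 1e9

print(m_planck_kg)   # ~2.18e-8 kg
print(m_planck_gev)  # ~1.2e19 GeV
```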

Now, gravity is a very weak force. What does that mean? It means that for all the fundamental particles we know, if you consider the force between any two of them, then the gravitational force is far weaker than any of the other forces (yes, even the Weak force). But if there were a particle that weighed a Planck mass (which is about 10^19 times the mass of the proton, or a few hundred-thousandths of a gram, judging roughly from a mole of hydrogen, which contains about 6 x 10^23 protons) then the strength of the force of gravity between those particles would be equal to the strength of all the other forces.
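One way to quantify "gravity is weak" is the dimensionless coupling G m^2 / (hbar c), which equals (m / m_Planck)^2 and so is exactly 1 for a Planck-mass particle. A quick sketch (standard SI constants, code mine):

```python
import math

# Dimensionless gravitational coupling alpha_g = G m^2 / (hbar c).
# It equals (m / m_Planck)^2: 1 at the Planck mass, tiny for a proton.
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s
G = 6.67430e-11            # m^3 / (kg s^2)
m_proton = 1.67262192e-27  # kg
m_planck = math.sqrt(hbar * c / G)

alpha_proton = G * m_proton**2 / (hbar * c)
alpha_planck = G * m_planck**2 / (hbar * c)

print(alpha_proton)  # ~5.9e-39: gravity between two protons is absurdly feeble
print(alpha_planck)  # ~1.0: at the Planck mass, gravity matches the other forces
```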

There is also that sneaky "per unit distance squared from that body", which means if you bring the particles closer together, gravity gets stronger. When you compute that change in force taking account of the appropriate quantum mechanics (the renormalisation group flow) then we find that all the forces not only change in this simple high-school physics way, but also fundamentally, as we go to short distances. The constants of nature "flow" with energy scale (though h and c, and debatably G, do not). This means that in addition gravity becomes of comparable strength to other forces on very short distance scales, in fact at the Planck length (using our units we can change mass into length too). (If you want to read more about all of this, go and read Frank Wilczek's great book "The Lightness of Being")

Normally in computing quantum effects we can ignore gravity because it is so weak, but at the very high energies of the Planck scale, gravity becomes so strong that we cannot ignore it, and this is therefore the scale at which a theory of quantum gravity is needed. At all the energies below the Planck scale gravity was so weak that we could treat it as a "classical background". (It is a common misconception that physicists "cannot treat quantum mechanics and relativity at the same time".  We're actually very good at it: we can do so-called "quantum field theory in curved space-time", but to do this we always treat both halves separately, that is we have "classical space-time")

Okay, so now we are finally there and we can discuss why quantum gravity involves belief. It involves belief because the Planck scale is so very big. It is 10^18 GeV in particle physics units. The rest energy of a proton is about 1 GeV. The LHC runs at about 10^4 GeV. The biggest machine physicists can even think of making in the foreseeable future is about 10^5 GeV, which is still a very long way from the Planck scale. (I read somewhere that a particle accelerator capable of reaching the Planck scale would have to be the size of the solar system and use a large fraction of the sun's total output. I don't know where I read that, or how the maths was done) At these comparatively low energies we don't need to specify our theory of quantum gravity in order to do calculations in normal theories. As long as the quantum theory reduces to general relativity in the right limits, pretty much anything goes (although some things may not, they may "resist embedding", as recently and elegantly discussed in this paper).

The enormity of the Planck scale means we cannot do experiments to test quantum gravity directly. And this means that for the most part whether you think string theory is a better theory than loop quantum gravity, or vice versa, is based on your aesthetic opinion about those theories. The role of aesthetics in physics *is* important, and helps guide us towards new laws (for more on this read/watch Feynman's "Character of Physical Law", or read Weinberg's "Dreams of a Final Theory"). It is precisely that aesthetics that has even got us as far as being able to contemplate quantum gravity, but beliefs about aesthetics diverge at the edges of our knowledge.

I came to think about this recently during a conversation with a colleague. We were discussing what kind of indirect evidence could possibly be considered as for or against a given theory of quantum gravity, where by indirect I mean evidence discovered well below the Planck scale, either in cosmology or in a spectrum of new particles that could be found at a foreseeable collider. I was primarily thinking of whether this evidence could support a complex theory of quantum gravity with many possible solutions, in particular the "string landscape". Certain solutions and low-energy physics scenarios appear "more likely" (in quotes because of the notorious measure problem: there is a *lot* to discuss here) in the landscape, and I argued that seeing such signals could be indirect evidence for the landscape (I do argue this a lot, and was particularly inspired by Paul Langacker's recent colloquium at PI on this subject, which you can see here). My colleague replied:

"In [theory of quantum gravity] which I believe in, the situation is..."

and we went on to try and interpret (unsatisfactorily, in my opinion) all such results in light of said theory. And so it has become abundantly clear to me how important our beliefs are in interpreting indirect evidence. I guess this is obvious, but it does get a little worse. Earlier the same day, at a mini-conference, I had discussed this exact topic of indirect evidence pointing to string theory and the landscape. I asked the audience, "if we discovered ultra-light axions in cosmology, would you consider this a good pointer towards string theory and the landscape?". An audience member replied:

"No, I would try and interpret it in light of [theory of cosmology]"

I found this very honest, but depressing. The role of belief is so strong in the far and esoteric reaches of cosmology and quantum gravity that even when faced with a nominal prediction, and hypothetical evidence for that prediction, someone cannot be convinced away from their beliefs. I'm not trying to be above all of this; I admit to being in a similar situation myself. I *believe* that the landscape is unavoidable, and that this behooves us to interpret the world in light of it. Why? Because, following Gell-Mann, "anything that isn't forbidden is mandatory" (quoted from that same elegant paper linked above), the landscape has a much wider space of what is possible, and thus not forbidden, and is therefore an interesting playground that forces us to question all possible assumptions. As a phenomenologist I find this daunting, but I love the challenge of trying to find tell-tale needles in this haystack.

I wonder, even if we could do experiments up at the Planck scale, whether all parties could ever be convinced. If scattering carried a uniquely stringy signature (some such signatures exist, though I don't know them in detail), could this still be "interpreted in light of [theory]"? On the flip-side, and this is more important to me, what types of evidence would I consider as being counter to my own beliefs, and might force me to revise them?

Monday, 18 March 2013

'Twas the week before Planckmas...

This week will see cosmologists excitedly waiting for, and celebrating, the upcoming results from ESA's Planck satellite. We've been waiting for this day since the launch of Planck in 2009 (in fact, most people have been waiting for this day since the late 1990s, when the satellite was proposed, initially called COBRAS/SAMBA). This multi-national collaboration released some data and results a year ago (on subjects such as point sources and clusters detected through their Sunyaev-Zel'dovich signature), but the first large suite of cosmology results will be announced on Thursday the 21st of March 2013, at a large press event.
Here at Princeton Astrophysics, we are having our own Planck Party at 5 am, an event which will no doubt have as much excitement as the pre-dawn Higgs party we had at the Institute for Advanced Study last summer.

So what is all the excitement about?

Until the Planck release, the tightest constraints at multipoles less than 1000 have come from NASA's WMAP satellite, which was recently awarded the Gruber Foundation Cosmology Prize. WMAP operated for nine years and really helped to pin down the cosmological model on the largest scales.


The plot above shows the power on the y-axis as a function of multipole (x-axis). Multipoles are inversely related to angle: large angles correspond to small values of the multipole, while small angular scales correspond to large values of l.
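The inverse relation can be captured by the usual rule of thumb that a multipole l corresponds to an angular scale of roughly 180 degrees divided by l (an approximation, not an exact conversion, but good enough for reading these plots):

```python
# Rule of thumb for reading CMB power spectra: multipole l corresponds to
# an angular scale of about 180 degrees / l. Approximate, but handy.
def multipole_to_degrees(l):
    return 180.0 / l

for l in [2, 100, 1000, 3000]:
    print(l, multipole_to_degrees(l))
```

So l=2 is a feature spanning a quarter of the sky, l=1000 (where the WMAP points run out) is about a tenth of a degree, and the ACT/SPT regime probes arcminute scales.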

On smaller scales (i.e. to the right of this graph) two experiments have dominated the game recently, The Atacama Cosmology Telescope (based in the Chilean desert, and the collaboration I'm a part of) and the South Pole Telescope (no prizes for guessing where this telescope is!)

The gold points are the same as the black points in the top plot, but with a logarithmic scale on the y-axis. From this plot it is easy to see how ACT and SPT provide all the signal at small scales - the WMAP data points end around l=1000. Combining the data from WMAP with these experiments helps us put tight limits on our cosmological model and on non-standard physics in the early universe.

Planck will improve on this picture by making the error bars much smaller on all scales. On large scales we are looking to see if any of the WMAP anomalies are present, and on intermediate scales (multipoles of 800 - 2000), where the WMAP error bars are large or unconstrained (see the linear scale plot at the top of the page), Planck will also greatly reduce the error bars.

This is particularly interesting for a parameter of recent interest, namely the effective number of relativistic species, or Neff. If we had three neutrino species (which is the standard picture) - Neff would be 3.046 (this number is not exactly three due to electron-positron annihilations in the early universe). It helps to think of the number in terms of extra neutrinos, but what Neff actually measures is if there was any extra (or less) energy from such a relativistic species. It doesn't specify what that species should be, and many authors have proposed some interesting candidates, from sterile neutrinos to `dark radiation'. If there was more relativistic energy when the CMB was formed, this would lead to a few interesting effects, the most obvious being the decrease in amplitude of the small scale Silk damping tail - the intrinsic CMB spectrum which drops in power as l increases. Of course, there are many degeneracies between Neff and other parameters, which is why better data (and independent data) help us tease apart the degeneracy.
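The standard way to write the "extra relativistic energy" that Neff measures is as the total radiation density relative to photons: rho_rad = rho_gamma * (1 + (7/8)(4/11)^(4/3) * Neff). A quick sketch of that formula shows how much one extra neutrino-like species adds:

```python
# Standard formula: the total radiation energy density relative to photons,
#   rho_rad / rho_gamma = 1 + (7/8) * (4/11)**(4/3) * Neff,
# where the (7/8) is the fermion statistics factor and (4/11)**(4/3) comes
# from electron-positron annihilation heating the photons.
def radiation_density_ratio(neff):
    """Total radiation density in units of the photon density."""
    return 1.0 + (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * neff

standard = radiation_density_ratio(3.046)  # standard three-neutrino picture
extra    = radiation_density_ratio(4.046)  # one extra relativistic species
print(standard, extra, extra / standard - 1)
```

One extra species raises the total radiation density by a bit over ten percent, which is the sort of shift the damping tail measurements are sensitive to.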


All three experiments (WMAP, ACT and SPT) recently released their constraints on cosmological parameters including Neff (they are here, here and here).
The three experiments show some mild tension in the best-fit values of Neff (we discuss the consistency between them in a recent paper) - the plot above shows this. In both cases the ACT and SPT data are combined with the latest WMAP9 results. The left-most panel shows the one-dimensional contours for Neff, while the two right panels show error ellipses. Dark ellipses show models which are consistent with the data at 68% confidence, while the lighter ellipses show models consistent at 95% confidence; models outside the ellipses are disfavoured at more than 95% confidence. The red lines/curves are for WMAP9 and ACT, the green for WMAP9 and SPT, and the black curves/contours show the combination of all three experiments together. While SPT sees a value of Neff higher than 3.046, at Neff = 3.74 +/- 0.47, and ACT a slightly lower value, with Neff = 2.90 +/- 0.53, the combined data are completely consistent with the standard picture: Neff = 3.37 +/- 0.42 (which may dismay or delight you, depending on your camp of interest!).
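As a toy illustration of how two such measurements pull against each other, here is a naive inverse-variance-weighted average of the two central values quoted above. To be clear, this is not how the quoted combined constraint is actually produced: the real analyses combine full likelihoods, and the two measurements share the WMAP9 data, so they are not independent as this simple formula assumes.

```python
import math

# Toy inverse-variance-weighted combination of two (value, error) pairs.
# Illustration only: it treats the measurements as independent Gaussians,
# which the ACT+WMAP9 and SPT+WMAP9 constraints are not.
def combine(values_and_errors):
    weights = [1.0 / err**2 for _, err in values_and_errors]
    mean = sum(w * v for w, (v, _) in zip(weights, values_and_errors)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

mean, err = combine([(3.74, 0.47), (2.90, 0.53)])
print(round(mean, 2), round(err, 2))  # naive combination lands near 3.4
```

Even this crude average lands close to the properly combined central value, though with an underestimated error bar precisely because it ignores the shared WMAP data.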

By improving the constraints on the power at intermediate scales, Planck should tell us more in a few days. This is particularly interesting because while ACT and SPT look at different regions of the sky (on smaller patches), Planck will release results based on the full sky - another independent measurement of the same underlying physics.

[There is a great post by Jester on Résonaances about Neff (posted just before the ACT constraints were released) written for those with a particle physics interest.]

Planck will also measure the weak lensing of the CMB by gravitational structures - an extremely subtle effect which moves power around on the maps of the CMB temperature on arcminute scales, but coherently over degrees. ACT and SPT have measured this deflection - and Planck will improve the errors on this measurement by a great deal on all scales. The deflection power spectrum is a strong probe of structure, and of things which would wash that structure out, such as massive neutrinos.

Another key constraint that will come from Planck is one on the non-Gaussianity of the initial conditions of the universe, which is a strong test of the various inflationary models out there today.

[There is an awesome TEDx talk by Ed Copeland on CMB physics and inflation which provides a nice summary of the link between the CMB and the early universe.]

One way to think of non-Gaussianity is to imagine a distribution with some level of skewness and kurtosis (so, a normal distribution that has been distorted). A simple way to produce a two-dimensional temperature map from the power spectrum above is to generate a Gaussian realisation of it: at each angular scale (defined by the multipole), use the power to set the variance in temperature on that scale. However, if the temperature field is non-Gaussian, then the full map is not described by the two-point function, or power spectrum: we need higher-order statistics to characterise the initial conditions if they are non-Gaussian! That is why we typically use the bispectrum (the three-point function) and higher-order statistical correlation functions to measure non-Gaussianity.
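The "Gaussian realisation" idea above can be sketched in a few lines of numpy. This toy keeps only the statistical content: for each multipole l we draw the (2l+1) independent mode amplitudes from a Gaussian whose variance is C_l, then check that the measured power scatters around the input spectrum. (Real CMB simulations use spherical-harmonic codes such as healpy's synfast; the made-up 1000/l^2 spectrum here is just for illustration.)

```python
import numpy as np

# Toy Gaussian realisation of a power spectrum: at each multipole l the
# (2l+1) mode amplitudes a_lm are independent Gaussians with variance C_l.
rng = np.random.default_rng(42)

lmax = 500
ls = np.arange(2, lmax + 1)
cl_input = 1000.0 / ls**2  # made-up, roughly scale-invariant toy spectrum

cl_measured = []
for l, cl in zip(ls, cl_input):
    alm = rng.normal(0.0, np.sqrt(cl), size=2 * l + 1)  # Gaussian draw per mode
    cl_measured.append(np.mean(alm**2))                 # sample variance ~ C_l

cl_measured = np.array(cl_measured)
# The realisation scatters about the input with "cosmic variance" ~ sqrt(2/(2l+1)):
print(np.mean(cl_measured / cl_input))  # close to 1
```

Because a Gaussian field is fully specified by its variances, every statistic of this toy map is fixed by C_l alone; a non-Gaussian field would need the bispectrum and beyond, exactly as described above.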

The WMAP bound is consistent with zero fNL (the parameter describing the level of non-Gaussianity, a quantity we expect to be vanishingly small in the simplest single-field models of inflation), with -3 < fNL < 77 at 95% confidence. However, Planck should shrink the errors on fNL from about 20 to just a few! If the central value of fNL = 37.2 found by WMAP remains while the errors decrease, we will put some serious pressure on many inflationary models - it is always a theoretical treat to find you aren't living in a `vanilla' universe.
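The arithmetic behind that "serious pressure" is simple enough to write out. Using the numbers quoted above (and taking "a few" to mean an error of 5, purely for illustration):

```python
# Rough significance of a nonzero fNL, using the numbers quoted above.
# "Error of 5" is just an illustrative stand-in for "a few".
central = 37.2
print(central / 20.0)  # under 2 sigma with WMAP-sized errors
print(central / 5.0)   # over 7 sigma if the error shrinks to 5
```

A sub-2-sigma hint is easy to shrug off; a 7-sigma detection of non-Gaussianity would rule out the simplest single-field inflation models outright.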

These are only a few of the presents we are expecting on Thursday. Make sure to tune in to hear the results, and enjoy the flurry of papers on the latest cosmological bounds using the temperature of the CMB. For the polarisation measurements, you will still have a little wait before Planck (and ACTPol and SPTPol) entice you with more results - as it is an even more delicate procedure to tease out polarisation from these sensitive instruments.

Until then, we wait to boldly constrain where only a few experiments have constrained before...