Saturday, December 31, 2011

The Smartest Mathematician I Know (Under 50)

The smartest mathematician I know personally is Susan J. Sierra, who was a fellow math major with me at Oberlin College. After earning her PhD from the University of Michigan (Go Blue!) and holding some post-doc positions (ever heard of a place called "Princeton" where folks like a guy called Einstein used to be on the faculty?), she is now on the faculty at the University of Edinburgh, where she works in the fields of noncommutative algebraic geometry, noncommutative algebra, and algebraic geometry.

Hopefully, local fashion will not cause her to develop an affinity for tartan, however. The Carnegie Mellon Tartans are one of Oberlin's athletic rivals (and yes, they do generally crush us, as a quick Google search would make clear).

In other words, she's doing work in the areas of math that are the underpinnings of Yang-Mills theory (the fundamental physics of the Standard Model, confirmed by every experiment in the last 40 years, which underlies nuclear physics and is built on noncommutative algebra), of loop quantum gravity, of string theory, and of higher dimensional physics (not just fundamental physics but also fields like condensed matter physics, where problems too hard to solve in their natural form can become tractable when recast as higher dimensional algebraic formulations), as well as of a surprising number of more mundane practical problems.

A mundane example of noncommutative geometry is the kind of math you need if you are a program like MapQuest trying to determine the optimal route from point A to point B in a city with one way streets and rush hours. The time it takes to get from point A to point B, and the shortest path between them, may differ from the time and path from point B to point A due to traffic loads, stop lights, one way streets and so on. While basic Newtonian mechanics assumes a commutative geometry, in general, any system with an arrow of time induced by CP violation, friction, or the second law of thermodynamics implies a noncommutative geometry.
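This asymmetry is easy to make concrete. Below is a minimal sketch (the street graph and travel times are invented for illustration, not taken from any real mapping service) of shortest-path routing on a directed graph, where the best A-to-B time differs from the best B-to-A time:

import heapq

def shortest_time(graph, start, goal):
    # Dijkstra's algorithm on a directed graph of travel times (minutes).
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        time, node = heapq.heappop(queue)
        if node == goal:
            return time
        if time > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, minutes in graph.get(node, {}).items():
            candidate = time + minutes
            if candidate < best.get(neighbor, float("inf")):
                best[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return float("inf")

# One way streets and rush hour traffic make the edge weights asymmetric.
city = {
    "A": {"B": 12, "C": 4},
    "B": {"A": 25, "C": 3},   # the return leg toward A is slower at rush hour
    "C": {"A": 5, "B": 6},
}

print(shortest_time(city, "A", "B"))  # 10 minutes, via C
print(shortest_time(city, "B", "A"))  # 8 minutes, via C: not the same

In a commutative world the two answers would have to match; here the directedness of the graph is exactly the arrow-of-time asymmetry described above.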

Noncommutative algebra also has mundane applications. For example, the mathematics behind tax planning is noncommutative, because the tax code treats losses (i.e. negative numbers) very differently from profits (i.e. positive numbers) and is also asymmetric in time, for example, in its different treatment of carryforwards and carrybacks of losses. Any set of functions that operate differently forward and backward is noncommutative. Noncommutative algebras are also sometimes called "non-Abelian", as Mr. Abel's name has become synonymous with commutative algebras (some odd associations and footnotes related to him can be found here).
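A toy numerical illustration of what noncommutativity means here: apply two operations in different orders and you get different answers. The "rules" below are invented stand-ins for illustration only, not actual tax code provisions.

def cap_net_loss(income):
    # Pretend rule: no more than a 3,000 net loss can be claimed in a year.
    return max(income, -3000)

def recognize_investment_loss(income, loss=10000):
    # Pretend rule: recognize this year's 10,000 investment loss.
    return income - loss

carried_position = -5000  # a loss position carried into the year

# Recognize the loss first, then cap:  max(-5000 - 10000, -3000) = -3000
order_one = cap_net_loss(recognize_investment_loss(carried_position))
# Cap first, then recognize the loss:  max(-5000, -3000) - 10000 = -13000
order_two = recognize_investment_loss(cap_net_loss(carried_position))

print(order_one, order_two)  # -3000 -13000: the two orderings do not commute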

At its most basic, algebraic geometry is the study of equations that translate into shapes, like the analytic geometry you studied in pre-algebra or trig in high school, shapes which have been widely known since Alexander the Great studied them as part of his education (although the conic sections weren't as neatly tied to equations then as they are now, something we owe to Descartes and his peers). But geometries in more than our ordinary four dimensions are much harder to express in any way other than equations, and since fields like string theory call for more than four dimensions, one really can't make sense of any of it without a firm command of algebraic geometry. It is also critical to fundamental issues in relativity and gravity, such as the still unresolved question of whether it is conceptually possible for the geometric formulation of gravity in general relativity to have an equivalent formulation in the nature of a force exchanged by particles such as a graviton, or whether there exists some provable "no go theorem" establishing that a formulation of general relativity in a Minkowski space-time is impossible.
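To make the "equations translate into shapes" point concrete, the conic sections that Alexander would have met as slices of a cone are, after Descartes, just the solution sets of quadratic equations in two variables, sorted by a discriminant (this is the standard textbook classification, not anything specific to a linked source): Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, with B^2 - 4AC < 0 giving an ellipse (e.g. x^2 + y^2 = 1), B^2 - 4AC = 0 a parabola (e.g. y = x^2), and B^2 - 4AC > 0 a hyperbola (e.g. xy = 1). Algebraic geometry studies the shapes cut out by polynomial equations like these in any number of variables and dimensions.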

Also, if you had the impression that there are no unsolved problems remaining in mathematics, you are incorrect. Susan Colley, one of my math professors at Oberlin College, explains in a recent article just how many open questions remain in mathematics and provides some current examples, such as the outstanding Millennium Prize problems (some of which, like the question in Yang-Mills theory, readers of this blog in academia are actively working to solve in their own research).

Thursday, December 29, 2011

Looking Back At 2011 and Forward To 2012

This blog started in May as an effort to focus my Wash Park Prophet blog by breaking it into a science blog and a general blog more heavily concentrated on law and politics, on the theory that the two sets of posts are more or less independent. After a brief transition period, the total output on the two blogs combined has turned out to be very similar to that of the original single blog, and the split has come close to putting half of the total output in each blog.

The traffic and comments have taken a hit at this blog, but the quality of the comments has been good. There have been multiple comments from anthropologists and physicists in the fields covered and from some of the best bloggers in their fields, and this blog is starting to show up in more Google searches of scientific subjects.

Posts with strong policy implications, like most of my posts on IQ and mental health, I've kept on the Wash Park Prophet side.

This blog has given me a space to think more deeply about the subjects covered and to explore them at a conceptual level that plays out their implications and corollaries. Enough of those insights have seemed plausible enough to make the enterprise interesting, whether or not they turn out to be correct. I've also identified and resolved several misunderstandings I've had about physics and anthropological data in the process, and filled many gaps in my understanding. For example, I have a much more solid understanding of how the Standard Model weak force works.

This year has had a bumper crop of new developments in physics, mostly related to the search for the Higgs boson, and several notable developments in neutrino physics, such as evidence for slightly superluminal neutrinos, evidence for more than three generations of neutrinos, and advances in pinning down neutrino masses and the entries of the PMNS transition matrix.

Hints of beyond the Standard Model CP violation and of mass differences between particles and antiparticles have been quashed. A Higgs boson has probably been detected. No other new particles not predicted by the Standard Model have been detected. The multiple exclusions of dark matter candidates from particle physics, direct detection efforts, and astronomy constraints on dark matter properties have made within the Standard Model options, like neutrino condensates, look more attractive. Definitive evidence of hypothetical quantum physical behavior, like neutrinoless double beta decay, flavor changing neutral currents, magnetic monopoles, proton decay, extra dimensions, discrete structure in space-time, and compositeness in fundamental particles, has remained elusive. The inconsistency between the radius of ordinary hydrogen and muonic hydrogen, perhaps due to inaccurate measurements of the former, is one of the few laboratory scale anomalies that has lasted. The prospect of a particle physics desert now that a Higgs boson has been identified looms. The Higgs boson doesn't destroy SUSY, although it makes technicolor a historical curiosity. But SUSY supporters are getting discouraged as more and more models in its parameter space are excluded, including the MSSM (the minimal supersymmetric standard model) and most R-parity conserving versions of the theory.

In the area of anthropology, archaeology and pre-history, the big stories have been ancient DNA, evidence of admixture with archaic hominins, archaeological evidence of modern humans at very early Out of Africa dates in India and Arabia, the discrediting of mutation rate dating, particularly for Y-DNA, and much more widely available whole genomes that are being collected and analyzed by bloggers outside the academy. Increased data make it possible to develop increasingly complex and constrained outlines of pre-history, although Jared Diamond's notion that technologically driven (often food production technology driven) waves of migration with varying degrees of admixture have had a profound impact on population structure continues to be a major theme. Some legends and origin myths are being confirmed; others are being flatly rejected as counterfactual.

In 2012, the prospect for more ground breaking fundamental physics developments seems modest. Beyond the Standard Model theories are falling by the day. Majorana masses for neutrinos continue to be more and more disfavored by the evidence despite their theoretical attractiveness. The Higgs boson discovery profoundly increases the energy scale at which the Standard Model equations start to become pathological. The prospects of new physics at the TeV scale look ever more dim. The number of plausible fundamental dark matter candidates gets slimmer and slimmer: direct detection experiments and astronomy constraints seem to disfavor the heavier candidates, while particle physics experiments have closed the door on any light fundamental particles that interact with the weak force. Sterile neutrinos aren't ruled out experimentally, but the absence of evidence for Majorana mass in neutrinos weakens the case for them.

The prospects for new breakthroughs in pre-history in 2012 seem greater. The quantity of whole genome data and the quality of our ability to analyze it have grown, we are likely to get some new ancient DNA samples to add to a very limited data set, and new understandings of pre-history, coupled with relatively low levels of armed conflict and fewer autarkic regimes, are making it easier to identify and study archaeology in the places where it is most likely to bear fruit relevant to the remaining open questions in the field. It is too much to hope that we might find a Rosetta stone to illuminate the Harappan language or some similar hotly debated pre-historic linguistic question. But simply pinning down more accurately the timeline of plant and animal domestication in Africa (particularly in the Sahel and Ethiopia), for example, could add a great deal of certainty and corroboration to models of pre-history and linguistics there, models that currently tell us more about sequencing and relative relatedness than they do about the historical moments at which key events happened and the technological and social forces that drove those events.

LHC detects long predicted b-anti-b meson

In further proof that the Standard Model works, the Large Hadron Collider has found a heavy and quickly decaying meson made of a bottom quark and an anti-bottom quark that had long been predicted by QCD but had never been observed. There are about a dozen dozen hadrons predicted by the Standard Model, and most have been observed already, but a few of the heavier ones remain well characterized theoretically yet undiscovered. This finding narrows the ranks of the missing hadrons. Analysis of the importance of the new find can be found here.

Wednesday, December 28, 2011

Feynman's IQ

Nobel prize winning physicist and science popularizer Richard Feynman, whose graphic novel biography by Ottaviani and Myrick I recently finished, claimed to have scored an IQ of about 125 on a school test, although he was off the charts in mathematical ability.

The cover of the graphic novel, by the way, features this quote from Lucille Feynman, his mother: "If that's the world's smartest man, God help us."

You can also read about his thesis at a recent blog post.

Thursday, December 22, 2011

Is the Wavefunction of Quantum Mechanics Real?

Steve Hsu has a nice post on the extent to which different ways of thinking about quantum mechanics, sometimes called "interpretations," can be distinguished in thought experiments and real experiments, with the strong implication that the wavefunction of quantum mechanics has a physical reality, although I will leave the subtleties of wording on this delicate matter to him and the blog post that he in turn is quoting.

Monday, December 19, 2011

The Case For The Massless Up Quark

A generalization of Koide's formula suggests a nearly massless up quark, contrary to model dependent estimates that suggest an up quark mass of about 40%-60% of the down quark mass (e.g. here), while the formula accurately estimates the conventional value of the down quark mass. There is a case for a massless up quark in supersymmetric theories as a solution to the "strong CP problem," although this approach has been questioned. Early lattice QCD simulations in the Standard Model also suggested that a massless up quark solves the strong CP problem.

A massless or nearly massless up quark can also be brought to bear to account for neutrino mass.

There is also a somewhat similar and interesting line of inquiry into massless QCD vacuum energy and its relationship to gravity.

It isn't unusual to model QCD with the masses of all the light quarks set to zero for calculational convenience and still have the model produce meaningful results.

Neutrinoless Double Beta Decay

One of the pivotal questions in fundamental physics is the nature of neutrino mass.

The observation of the neutrino oscillations in experiments with atmospheric, solar, reactor and accelerator neutrinos proves that neutrino masses are different from zero and that the states of flavor neutrinos e, μ, tau are mixtures of states of neutrinos with different masses. There are two general possibilities for neutrinos with definite masses: they can be 4-component Dirac particles, possessing conserved total lepton number which distinguish neutrinos and antineutrinos or purely neutral 2-component Majorana particles with identical neutrinos and antineutrinos. . . .

Neutrino masses are many orders of magnitude smaller than masses of their family partners, leptons and quarks. . . . The most natural possibility of the explanation of the smallness of the neutrino masses gives us the seesaw mechanism of the neutrino mass generation. This beyond the Standard Model mechanism connects smallness of neutrino masses with the violation of the total lepton number at a large scale and Majorana nature of neutrino masses. If it will be established that neutrinos with definite masses are Majorana particles it will be strong argument in favor of the seesaw origin of neutrino masses.

Investigation of the neutrinoless double beta-decay of nuclei is the only practical way which could allow to proof that neutrinos are Majorana particles.

From here.

Theorists tend to prefer the assumption that the neutrino and anti-neutrino are identical, and hence that a process known as neutrinoless double beta decay is possible.

The Experimental Constraints On Neutrinoless Double Beta Decay

One experiment by H. V. Klapdor-Kleingrothaus (Heidelberg-Moscow) published in 2001 claimed to see neutrinoless double beta decay experimentally, and claimed six sigma support for that conclusion by 2006, but the experiment has not been successfully replicated in three other completed attempts to do so, and has been subject to considerable criticism in the discipline.

More than a dozen current or proposed experiments, either already under construction or slated to commence construction in the next few years, are looking for signs of neutrinoless double beta decay.

The predicted frequency of neutrinoless double beta decay in a simple Majorana mass scenario is governed by an effective Majorana mass built from the three neutrino mass eigenstates and the corresponding PMNS matrix elements. A 2010 recap of the theory is found here. The key number produced using these estimates is on the order of 0.2-0.6 eV according to H. V. Klapdor-Kleingrothaus, which corresponds to the effective Majorana mass. A larger number would yield a higher (and presumably easier to observe) decay rate, while a smaller number would yield a lower (and presumably harder to observe) decay rate.
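For reference, the standard expression for that effective Majorana mass (as it appears throughout the review literature; the notation is the conventional one rather than anything specific to the linked paper) is m_bb = |U_e1^2*m_1 + U_e2^2*m_2 + U_e3^2*m_3|, where m_1, m_2 and m_3 are the neutrino mass eigenvalues and U_e1, U_e2 and U_e3 are the first row entries of the PMNS matrix (including any Majorana phases). The neutrinoless double beta decay rate then scales with the square of m_bb, times a nuclear matrix element and a phase space factor.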

An effective Majorana mass of this scale would imply absolute neutrino masses that are greater than the experimentally established differences in mass between the three neutrino mass eigenstates by a couple of orders of magnitude, and hence a nearly degenerate set of neutrino mass eigenstates. That seems like a poor fit to a measured value of theta12 in the PMNS matrix that is about ten times as large as theta13, since big differences in transition matrix values seem to have some association with big differences in mass between the particles in question.

Experiments that are underway would increase the sensitivity to effective Majorana masses more than ten times smaller than that of Klapdor-Kleingrothaus, making it possible to rule out or confirm that finding. Direct neutrino detection experiments, such as IceCube, which just went online in Antarctica, also provide measurements of neutrino properties that can constrain the theoretically expected values for neutrinoless double beta decay in Majorana mass neutrino models. (See also here, setting out the experimental agenda for neutrino research from 2004 through about 2014.)

Current experiments establish only the relative mass differences between neutrino eigenstates, rather than absolute masses. But if the absolute values are on the same order of magnitude as those differences (on the order of 0.003 eV for the gap between the first and second mass eigenstates and 0.05 eV for the gap between the second and third), then they are much lower than the Klapdor-Kleingrothaus effective Majorana neutrino mass estimate, and would seem to be inconsistent with a Majorana neutrino mass in anything but a normal mass hierarchy (a first generation neutrino mass lighter than a second generation neutrino mass, which is lighter in turn than a third generation neutrino mass). Astronomy data also place significant constraints on neutrinoless double beta decay rates (the linked article also remarks on the very strict current experimental limits on the magnetic moment of neutrinos, which disfavor the possibility that they may be composed of charged preons).

Neutrinoless Double Beta Decay In SUSY Models

Neutrinoless double beta decay experiments also constrain SUSY models, which need to have a characteristic SUSY scale on the order of 1 TeV to fit the Klapdor-Kleingrothaus measurement, or smaller if that measurement is not replicated. (Larger decay rates have been pretty well ruled out, and by implication, characteristic SUSY scales of more than 1 TeV in SUSY models with Majorana mass, which should be within the power of the LHC to detect, are also disfavored.)

If neutrinoless double beta decay is ten times more rare than that measurement, this would imply a characteristic SUSY scale on the order of 630 GeV, a scale that is likely to be ruled out or confirmed at the LHC around the same time that neutrinoless double beta decay experiments reach that precision. Neutrinoless double beta decay rates thirty or forty times smaller than the claimed Klapdor-Kleingrothaus measurement would, in a SUSY model, bring the characteristic SUSY scale so low that it would be inconsistent with current LHC bounds.

Of course, nimble theorists can always come up with some variant theory that escapes these bounds (see, e.g., this paper from 2007 with Dirac neutrino masses in a SUSY variant). Indeed, the sheer number of beyond the Standard Model proposals to deal with neutrino properties is immense, although many are simply slight variants on the same themes. But the bound on SUSY theories from neutrinoless double beta decay is notable because it is experimentally independent of the particle accelerator driven bounds on the masses of the lightest supersymmetric particles, and because non-detection of neutrinoless double beta decay favors smaller SUSY scales, while non-detection of supersymmetric particles at particle accelerators favors larger characteristic SUSY scales. Taken together, neutrinoless double beta decay experiments and the LHC operate as a vise squeezing SUSY parameter space from opposite directions.

Predictions

Neutrinos Lack Majorana Mass

My personal prediction is that we will eventually establish bounds on absolute neutrino eigenstate masses, and bounds on Majorana mass from a failure to detect neutrinoless double beta decay, that together will establish definitively that neutrinos have Dirac masses, just like all other Standard Model fermions and as a result of the same mechanism (despite the fact that neutrino masses are much smaller than the other Dirac masses), and hence that neutrinos and antineutrinos are not the same thing.

This prediction is driven mostly by the pivotal role that the distinction between a neutrino and an antineutrino plays in maintaining lepton number conservation (which has never been observed to be violated experimentally and has produced large numbers of valid predictions about decay patterns), the consideration that motivated the neutrino's predicted existence in the first place.

A fortiori, this prediction also assumes that SUSY models with Majorana neutrino masses are wrong. There are other reasons to find the remaining range of SUSY parameter space implausible, but this is another one, and it is quite strict.

There Are No Sterile Neutrinos or Fourth Generation Fermions

A finding that neutrinos lack Majorana mass would not necessarily rule out the possibility that there are right handed neutrinos (aka sterile neutrinos) with Dirac mass that give rise to left handed neutrino mass via a seesaw mechanism. The Standard Model assumed that neutrinos had no mass at all, so it is indeterminate as to how this issue is resolved, and its other predictions are largely decoupled from it.

Precision electroweak measurements suggest that fourth generation left handed neutrinos of less than 45 GeV are ruled out, a mass that would be so far in excess of the other three neutrino masses that it makes the entire notion of a fourth generation of Standard Model fermions seem implausible. But, because right handed neutrinos would not interact with the weak force, precision electroweak measurements can't rule them out or say much of anything about what masses they might have, although seesaw models tend to favor right handed neutrinos that are much heavier than left handed neutrinos.
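The 45 GeV figure, incidentally, is just the kinematic threshold for the Z boson to decay into a neutrino-antineutrino pair, which is what the precision measurement actually counts: the invisible width of the Z is consistent with exactly three neutrino species lighter than M_Z/2 = 91.19 GeV / 2, or about 45.6 GeV (this is the standard textbook argument rather than anything from a linked source).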

Theories with heavy sterile neutrinos draw succor from the perceived need for a fairly heavy dark matter candidate, although direct dark matter searches and astronomy data are increasingly narrowing the experimental window in which such heavy dark matter particles could exist. They also find support from the fact that there are four permutations at each generation of every charged fermion in the Standard Model (LH particle, LH antiparticle, RH particle, RH antiparticle), so the existence of only a LH particle and a RH antiparticle seems to leave the neutrino column of the chart of Standard Model particles with gaps. And it is hard to rule out the presence of something in those gaps, because a right handed neutrino would be so inherently weakly interacting, apart from its gravitational interactions, just like hypothesized dark matter.

But heavy right handed neutrinos would also contradict the pattern for all of the charged fermions of the Standard Model, in which the right handed and left handed versions of the particle and the right handed and left handed versions of the antiparticle all have the same mass.

Very heavy right handed neutrinos also seem out of line with the example of the Z boson, which is its own antiparticle and has a mass only marginally greater than that of the W+ boson (which has the W- boson as an antiparticle), with all three being intimately intertwined, and with the Higgs boson having a mass on the order of the sum of the three weak force boson masses. Similarly, neutrons are not dramatically heavier than protons, and electromagnetically neutral hadrons generally do not differ much in mass from electrically charged hadrons. If charge or its lack has an impact on mass, it does not seem to be a dramatic influence.

Also, since baryogenesis and leptogenesis scenarios generally assume that quarks and leptons have their origins in weak force decays, any hypothesis with right handed neutrinos must also come up with a leptogenesis scenario specific to them.

My personal prediction, although I make it with far less confidence than I do when predicting that neutrinos lack Majorana mass, is that there are no right handed neutrinos.

The PMNS Matrix has a CP violating phase

There are good reasons from quark-lepton complementarity to suspect that the PMNS matrix has a CP violating phase complementary to the CP violating phase in the CKM matrix.

There also seems to be preliminary evidence for the existence of such a phase at the MINOS experiment, where the profiles of neutrinos and antineutrinos seem to be different. (Incidentally, CP violation would also seem to disfavor Majorana neutrino theories, since if the particle and antiparticle are identical, they shouldn't exhibit different behavior.)

I expect that CP violation will be confirmed in the PMNS matrix, in W boson mediated interactions but not Z boson mediated interactions, with a phase complementary in some way to the CKM matrix's CP violating phase.

Conclusion Regarding Predictions

My predictions are generically "dull" from a theorist's perspective. They leave the mass generation mechanism for neutrinos in a "black box", they predict no new particles to serve as dark matter candidates (neutrino condensates or perhaps stable glueballs begin to look attractive as dark matter candidates in this scenario), and they predict no new kinds of particle interactions.

They also throw the vast majority of the theoretical output on neutrino physics and models that call for right handed neutrinos, Majorana neutrinos, or seesaw mechanisms into the dustbin. Basically, tens of thousands of fundamental physics papers over the last decade are counterfactual flights of fancy in this scenario.

This approach would seem generically to leave conventional grand unified theories and theories of everything overconstrained. Most predict something more than the Standard Model or are inconsistent with experiment. A nice summary of the data points these models try to fit can be found here. (Footnote: I hadn't noticed before that the scale at which the Standard Model running coupling constants come closest to unifying, without quite doing so, is a couple of orders of magnitude lower than the SUSY GUT scale, which would make concerns about very high energy scale breakdowns of the Standard Model with a Higgs boson of the experimentally suggested mass less intense.)

Friday, December 16, 2011

Musings On Mass In A Higgsful World

The lay description of the Higgs boson typically describes it as critical primarily in giving rise to inertial mass by creating a field that is frequently described, essentially, as the viscosity of free space.

Every experiment to date has determined that inertial mass and gravitational mass are the same thing. Indeed, the equivalence of these two things is a bedrock foundation of general relativity.

Fundamental fermions each have one of twelve non-zero rest masses. Fundamental bosons each have one of four rest masses, with zero as one of the allowed values (belonging to photons, gluons and the hypothetical graviton, if there is one). We don't have any fundamental theory to explain the relationship of all of the fifteen non-zero rest masses of the Standard Model of Particle Physics to each other (we do have theoretical reasons for photons, gluons and gravitons to have zero rest masses). We do have a formula that relates the mass of the W bosons to the mass of the Z boson, and we know that there are some almost certainly non-random relationships between the fermion masses (such as Koide's formula for the charged lepton masses), although we aren't precisely sure why these numerical relationships arise. There also naively appears to be a simple formula from which the Higgs boson mass can be derived from the W and Z boson masses (one half of two times the W boson mass plus the Z boson mass) that is a very close match to the tentatively measured amount, although there is no consensus concerning why this relationship exists either.
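As a quick numerical check of the two relationships just mentioned, using approximate circa-2011 values in GeV (the exact inputs are not critical to the point being illustrated):

from math import sqrt

m_e, m_mu, m_tau = 0.000511, 0.10566, 1.77682   # charged lepton masses, GeV
m_W, m_Z = 80.399, 91.1876                      # weak boson masses, GeV

# Koide's formula: this ratio comes out remarkably close to exactly 2/3.
koide = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(round(koide, 5))        # ~0.66666

# The naive Higgs mass guess: one half of (two times the W mass plus the Z mass).
higgs_guess = (2 * m_W + m_Z) / 2
print(round(higgs_guess, 1))  # ~126.0 GeV, close to the tentatively measured value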

It also seems to be the case that there is an intimate relationship between the fundamental particle masses, the four parameters of the CKM matrix that governs the relative likelihood of particular flavor transitions via W bosons for quarks (including a CP violating phase), and the four parameters of the PMNS matrix, which encodes the same relative likelihoods for leptons. The matrices also seem to show some sort of relationship to the magnitudes of the coupling constants for the three Standard Model forces (electromagnetism mediated by photons, the weak force mediated by W and Z bosons, and the strong force mediated by gluons), each of which is itself a function, via equations and constants determined phenomenologically rather than from first principles, of the energy level of the interaction in question, which brings us back to the mystery of mass-energy all over again.

But, mass turns out to be a slippery thing. Mass is not simply additive in composite particles. Each of the couple hundred different possible hadrons has a very precise rest mass, but in composite particles bound by the nuclear strong force, the rest mass of the whole is generally not simply the sum of the rest masses of the component parts. Likewise, while total mass-energy in any system is conserved (with an E=mc^2 conversion factor), interactions via the nuclear weak force routinely do not conserve mass alone.

General relativity and special relativity add further complications. The relationship between mass and acceleration is a simple linear one at low velocities, but must be modified by a Lorentz factor at velocities approaching the speed of light; in effect, the relationship between mass and acceleration runs with a particle's kinetic energy.
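The standard special relativity formulas behind that statement, stated here just for reference, are gamma = 1/sqrt(1 - v^2/c^2), momentum p = gamma*m*v, energy E = gamma*m*c^2, and force F = dp/dt. At low velocities gamma is approximately 1 and F = ma is recovered, while as v approaches c the force needed to produce a given acceleration grows without bound.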

Even more confounding, in general relativity, is the fact that forms of energy other than mass give rise to gravitational effects and are subject to the effects of gravity, even if they don't have any mass at all. A photon will follow the geodesic created by a gravitational field, even though it has no mass, and the flux of photons through a volume of space is part of the stress-energy tensor that gives rise to gravity in general relativity.

A mass field's linear momentum (in three dimensions), including its Lorentz boost factors, its angular momentum (in three dimensions), and the pressure it is experiencing (in three dimensions), in addition to its rest mass and the electromagnetic flux (more accurately, the four-current) of energy in that volume of space, also add to the stress-energy tensor.

The conventional stress-energy tensor of general relativity doesn't have terms for strong force flux and weak force flux, neither of which was known at the time it was formulated, but I don't think that anyone seriously doubts that fluxes of these forces contribute to the stress-energy tensor in precisely the same way that fluxes of photons do.

Convention and personal preference dictate whether observed dark energy effects are modeled as a constant of integration in cosmological equations derived from the equations of general relativity, or as a real, uniform energy field that fills all of space-time and, as energy which is a subset of mass-energy, gravitates. Physics already provides several fields which are present at nearly uniform levels throughout the universe: the physically observed electromagnetic cosmic background radiation, the vacuum expectation value of the Higgs field, and the energy field implied by zero point energy (i.e. the amplitude in quantum mechanics for a particle-antiparticle pair to arise seemingly out of nothing in empty space). None of these is a good match for the observed cosmological constant, or for the observed overall flatness of space-time away from dense mass fields (as opposed to a strongly convex or concave structure of space-time). Additional proposals are also out there. And, as I understand the matter, the extent to which gravitational fields (aka the background flux of gravitons in the universe) themselves, because they carry energy, give rise to gravitational effects isn't a question that I have seen a consensus answer to in the educated layman's and generalist physicist oriented literature (the question is subtle because "in general relativity the gravitational field alone has no well-defined stress-energy tensor, only the pseudotensor one").

The standard explanation for why efforts to describe gravity with a Standard Model plus graviton model fail is that the quantum mechanical equations of the graviton are not renormalizable. But given what I understand to be general relativity's BRST symmetry (see, e.g., Castellana and Montani (2008)), it isn't obvious to me that this proposition is really true in a theoretical sense, or in the sense that the equations actually break down in the UV limit, even if they may be impracticable to do calculations with by any non-numerical method we know outside special cases where simplifying assumptions make an analytical solution possible. Castellana's abstract states (preprint here):

Quantization of systems with constraints can be carried out with several methods. In the Dirac formulation the classical generators of gauge transformations are required to annihilate physical quantum states to ensure their gauge invariance. Carrying on BRST symmetry it is possible to get a condition on physical states which, different from the Dirac method, requires them to be invariant under the BRST transformation. Employing this method for the action of general relativity expressed in terms of the spin connection and tetrad fields with path integral methods, we construct the generator of the BRST transformation associated with the underlying local Lorentz symmetry of the theory and write a physical state condition following from BRST invariance. This derivation is based on the general results on the dependence of the effective action used in path integrals and consequently of Green's functions on the gauge-fixing functionals used in the DeWitt–Faddeev–Popov method. The condition we gain differs from the one obtained within Ashtekar's canonical formulation, showing how we recover the latter only by a suitable choice of the gauge-fixing functionals. Finally we discuss how it should be possible to obtain all of the requested physical state conditions associated with all the underlying gauge symmetries of the classical theory using our approach.

(Abhay Ashtekar's reformulation of the equations of general relativity in the 1980s has been pivotal to the field of quantum gravity.)

A more serious concern is that it might be necessary to retain background independence in an extension of the Standard Model with a graviton (see, e.g., here), although not necessarily discrete background independence, at least other than as part of a strategy to formulate the theory in a discrete setting and then use calculus to take the limit of that formulation as the minimal distance becomes infinitesimal. A naive quantization of a spin-2 particle on a Minkowski background, modeled on other Standard Model quantizations, can't capture that feature. The fact that there is only a pseudotensor, rather than a stress-energy tensor, for the gravitational field alone might also be a clue that general relativity's equations have a subtle defect in their formulation.

While the magnitude of Newtonian gravity is a function of rest mass only (and would imply a massless, color charge neutral, electromagnetically neutral, scalar spin-0 graviton), in general relativity the overall magnitude of the effective gravitational force, as I understand it, is a function of the total mass-energy in the volume of spacetime where it is being evaluated. Likewise, rather than being the simple radial attractive force of Newtonian gravity, in general relativity the direction in which gravity directs massive and massless particles alike is modified from the radial attractive direction by a vector that incorporates the directionality of all of the particle motion, energy fluxes and pressure acting on the volume of space-time in question.

The fact that both ordinary linear acceleration and the acceleration induced by the force of gravity, which are identical in effect, also induce space and time dilation according to a Lorentz factor further complicates the affair, which helps explain why the mathematics of general relativity is so challenging.

It has been hypothesized that the whole of general relativity and special relativity can be reproduced by simple quantum mechanical rules for a massless, electromagnetically neutral, color charge neutral spin-2 graviton (a tensor particle) that couples to everything with mass or energy, together with the spin-0, CP-even, 125 GeV +/- 2 GeV, electromagnetically neutral, color charge neutral Higgs boson (a scalar particle). To my knowledge, however, no one has ever successfully proposed an operational realization of this hypothesis that has been rigorously shown to be equivalent to the equations of general relativity, or to some variant of those equations that is empirically indistinguishable through some slight technical tweak to the theory (such as Einstein–Cartan theory, which adds torsion to the metric and allows gravity to respond to spin angular momentum in a way that the original formulation does not, or the Brans–Dicke theory of gravitation, which is a scalar-tensor theory and hence naively more directly parallel to a Higgs boson-graviton formulation in quantum mechanics).

Modified gravity theories attempting to explain dark matter effects, while remaining consistent with general relativity in all domains where dark matter effects are negligible, are generically scalar-vector-tensor theories (Bekenstein's direct derivation of Milgrom's theory is dubbed TeVeS, while Moffat's theory, which attempts to do something very similar in a slightly different way, prefers the ordering STVG). Were these theories to be quantized, they would presumably require, in addition to a spin-2 graviton, a spin-1 gravitovector (presumably massless, color charge neutral, and electromagnetically neutral), and perhaps also a massless spin-0 scalar graviton if the Higgs field couldn't be appropriated for that purpose.

Loop quantum gravity proposes a discrete space-time structure from which the four dimensionality of space-time and locality are merely emergent properties that are ill defined at the quantum level. Rigorous, but theory dependent, tests of the discreteness of space-time have so far demonstrated a continuous space-time structure at scales that would appear to be well below the Planck scale, below which many direct measurements of distance and time associated with particles become inherently uncertain. Quantum mechanics exhibits a phenomenon called entanglement which fits some definitions of non-locality, although entangled particles must share a speed of light space-time cone from a common point of origin in space-time, and there are theoretical questions over what this bounded form of non-locality means and what can be achieved with it in terms of information transfer.

LQG tends to envision mass as something sort of like a clumping together of nodes of adjacent points in space-time. Some versions of it have a graviton that emerges from the equations and propagates.

Supersymmetry models, like the Standard Model, do not include gravity and are formulated in Minkowski space. The gravitational extension of supersymmetry models is generally called supergravity (SUGRA), and string theory/M-theory generally attempts to embed supergravity theories within its overarching substructure and naturally predicts the existence of a spin-2 particle associated with a graviton. String theory uses extra dimensions, in which gravity interacts more easily than in the four observable dimensions, as a mechanism by which to turn a force which is much weaker than the other three fundamental forces (in the context of systems with small numbers of particles interacting with each other) into just another manifestation of an underlying fundamental force whose symmetries are broken by branes, dimensional compactification and other mechanisms that are not always well defined.

Neither general relativity nor special relativity nor Newtonian mechanics and gravity contemplate a physical, aether-like Higgs field that gives rise to inertia. Newtonian mechanics employs the low velocity limit of the special relativistic relationship between force and acceleration, F=ma, as a law of motion rather than as the consequence of a substance, and takes the fact that matter has mass, in amounts to be empirically determined, as axiomatic. The equivalence of gravitational mass to inertial mass, and of gravitationally induced accelerations to other accelerations, is a core axiom from which general relativity is derived, and while general relativity does conceptualize mass as a sort of crystallized energy that factors into the Lorentz equations in a manner different than energy not in the form of mass does, general relativity does not address the question of what process causes energy to crystallize into mass. None of the classical theories of gravity and mechanics has an aether-like field that gives rise to inertia like the Higgs field.

The Standard Model is formulated in Minkowski space, where special relativity applies but there is no gravity and no curvature of space-time flowing from gravity, although ad hoc, non-systematic applications of classical general relativity to quantum field theory in circumstances where general relativistic effects are intense, for example to understand Hawking radiation from black holes, have been attempted with success. Among other problems with this approach, wave-like field theories do not naturally transform into particle-like theories in curved spacetime, and the acceleration of the observer influences the observed temperature of the vacuum.

It is also worth pointing out that, despite the new development of the Higgs boson, parallels between the QCD equations and gravity seem stronger than parallels between the electroweak equations and gravity, even though the electroweak equations seem to be what imparts rest mass to the fundamental particles in the Standard Model. The QCD connection is particularly notable given that 99% of baryonic mass arises from gluon exchange in hadrons. One could imagine, for example, a quantum gravity Lagrangian that was somehow related to the square of the QCD Lagrangian plus the square of the electroweak Lagrangian, weighted by the contribution of each set of equations to the source of the gravitational mass. A 1% contribution to gravity from electroweak sources, which would frequently be proportional to the QCD sources anyway, might make that component of a true law of gravity invisible, since weak force decay at any given moment in low energy systems isn't much of a flux and the proportions of the various kinds of fundamental particles (up and down quarks, electrons, neutrinos and unstable fundamental particles) ought to be relatively uniform everywhere.

Thursday, December 15, 2011

Higgs Announcement Reactions In The Physics Blogosphere

Lubos explains why he thinks the finding is real and notes alleged consistency with a four generation Standard Model (SM4) as well as, in his view, with SUSY.

He is convinced (not entirely unreasonably) that the Standard Model with a 125 GeV-126 GeV Higgs boson implies vacuum instability at some scale below the Planck scale of 1.22*10^19 GeV (the instability scale perhaps being as low as 10^9 to 10^13 GeV, but perhaps actually as high as 10^20 GeV), and hence the existence of new physics at some scale above the electroweak energy scale and possibly beyond the range of the ability of the LHC to detect it. FWIW, I think it is very plausible that the vacuum instability threshold is precisely the Planck scale, eliminating the need for any new physics, but there is plenty of room for disagreement, both due to uncertainty concerning the values of masses that are close to the critical threshold in the relevant Standard Model equations, and due to the difficulty involved in using perturbative approximations of the Standard Model equations in energy ranges so far from the energy scale at which those approximations were designed to provide accurate calculations. For example, the coupling constants of the Standard Model are "running constants" that depend upon the energy level of the interaction involved, and a slight tweak in how those constants run could become very material at such extremely high energy levels, in a manner similar to the way that the Lorentz factors in special relativity become much more important, in a non-linear way, as one approaches the upper bound of the velocity "c" (the speed of light). Assuming that a running formula and running constant values calibrated on energies of less than 10^3 GeV will still be valid at energies more than a million times as great is not a safe assumption. In his view, "One may say that the apparently observed Higgs mass favors squarks in the multi-dozen TeV scale. . . . "garden variety" supersymmetric models with light squarks and "gauge mediation" of the supersymmetry breaking have become almost hopelessly contrived and fine-tuned, and have been nearly euthanized. The apparently observed SUSY-compatible but not-too-low value of the Higgs mass favors scenarios with heavy scalars (especially heavy stop squark); or extensions of MSSM with additional particle species. See another new paper by Carena et al. trying to obtain new possibilities with various hierarchies between slepton and squark masses." In particular, the Minimally Supersymmetric Standard Model (MSSM) is pretty much dead. One paper he cites also concludes that the "gravity mediated constrained MSSM would still be viable, provided the scalar top quarks are heavy and their trilinear coupling large. Significant areas of the parameter space of models with heavy supersymmetric particles, such as split or high-scale supersymmetry, could also be excluded as, in turn, they generally predict a too heavy Higgs particle."
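To illustrate what "running" means in practice, here is a minimal one-loop sketch of how the strong coupling constant falls with energy scale (one loop only, with a fixed five quark flavors and approximate inputs, so the numbers are illustrative rather than precision values):

from math import log, pi

ALPHA_S_MZ = 0.118   # strong coupling measured at the Z mass
M_Z = 91.19          # GeV
N_F = 5              # active quark flavors, held fixed for simplicity

def alpha_s(mu_gev):
    # One-loop running: 1/alpha(mu) = 1/alpha(M_Z) + beta0 * ln(mu^2 / M_Z^2)
    beta0 = (33 - 2 * N_F) / (12 * pi)
    return 1.0 / (1.0 / ALPHA_S_MZ + beta0 * log(mu_gev ** 2 / M_Z ** 2))

for scale in (10, 91.19, 1000, 10000):
    print(scale, round(alpha_s(scale), 3))
# Roughly 0.17 at 10 GeV, 0.118 at the Z mass, and 0.09 at 1 TeV: a "constant"
# whose value depends on the energy of the interaction being calculated.

Whether extrapolations of this kind remain trustworthy across many more orders of magnitude in energy is exactly the question raised above.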

Matt Strassler is more skeptical about the data supporting a Higgs boson discovery at all.

Kea at Arcadian Pseudofactor, after months of diatribes against the existence of a Higgs boson, is pretty much convinced and is now looking for big picture contexts that could accommodate the Standard Model with that kind of Higgs boson in it and that parallel her previous theoretical lines of inquiry.

For my druthers, I think that a whole variety of constraints are going to make beyond the Standard Model physics for the next decade or two much more timid than they have been in the last few decades. Among the features of models that are going to be increasingly disfavored are:

1. Single digit TeV or lighter new particles.
2. Baryon number or lepton number violations at less than extremely high energies.
3. Proton decay (the minimal period just gets longer and longer).
4. Magnetic monopoles.
5. CPT violations.
6. Additional generations of bosons.
7. Additional large scale dimensions.
8. Technicolor.
9. Simpler SUSY models.
10. New gauge symmetries that operate outside the neutrino sector.

I personally seriously doubt that we will find right handed neutrinos or Majorana mass in neutrinos (something that would be shown, for example, by neutrinoless double beta decay, which I doubt will be discovered), although neutrino physics is one of the least experimentally constrained areas of fundamental physics today. I doubt that we will find a fourth generation of Standard Model particles, sterile neutrinos outside the three observed generations or in a fourth generation, scalar or vector gravitons, or other fundamental particles that could be WIMPs, like a lightest supersymmetric particle. I doubt that we will find, when the dust settles, anomalous CP violations that hold up, after having seen so many disappointments.

I personally think that dark matter effects will turn out to be some combination of (1) a neutrino condensate (or something similar that is a composite effect of non-quarks), (2) undercounted ordinary matter that is "dim", (3) underestimated general relativistic effects in large complex systems, (4) glueballs, and (5) quantum gravity modifications of the equations of general relativity that are only relevant in very weak gravitational fields. In other words, I think that the only particle potentially missing from the list of fundamental particles with any meaningful probability is a plain vanilla, spin-2, zero mass graviton although we could discover that space-time is discrete or that the number of space-time dimensions is ill defined at tiny scales and is only an emergent property of the universe.

Another hot area will be firming up the calculations under the existing Standard Model equations in more extreme and complicated scenarios (e.g. meson molecules, or extremely rare and ephemeral top quark hadrons).

I also think that there is considerable room for exploration of non-locality in fundamental physics.

Some interesting new papers about BSM physics phenomenology include:

An argument that sterile neutrinos may be less experimentally constrained than they seem.

A conclusion based on the apparent Higgs boson mass that "current data, in particular from the XENON experiment, essentially exclude fermionic dark matter as well as light, i.e. with masses below 50 GeV, scalar and vector dark matter particles."

A paper looking at two Higgs-doublet extensions of the Standard Model in light of the new information on the Higgs boson mass, which finds that some versions are possible while others are not.

A look at experimental bounds on lepton number violating models, since: "In the Standard Model (SM), the lepton L and baryon B numbers are conserved due to the accidental U(1)L × U(1)B symmetry. But the L and B nonconservation is a generic feature of various extensions of the SM. That is why lepton-number violating processes are sensitive tools for testing theories beyond the SM." Indirect bounds on branching ratios for lepton number violations from experiments as incorporated into popular lepton number violating theories are extremely stringent.

Superluminal neutrinos could be applied to explain CP violations.

The LHC may be able to see supersymmetric particles of less than 1 to 1.6 TeV when it has acquired a particular volume of data.

Wednesday, December 14, 2011

Musings On Gluon Speed, Mass and Charge

We assume, for some very good reasons, that gluons have no mass and move at a uniform speed equal to the speed of light. But, we've never directly measured a gluon's speed or mass.

Confinement, the principle of quantum chromodynamics that particles with strong force color charge do not persist in non-color-neutral systems more than momentarily, prevents us from observing free gluons and free quarks, with very few exceptions: top quarks can come into being only to immediately decay via a W boson before forming a color neutral hadron, and, in theory, multiple gluons could combine into a color neutral "glueball." Otherwise, quarks and gluons remain confined in hadrons: three quark varieties called baryons and quark-antiquark varieties called mesons. One could imagine hadrons of four or five or more quarks, but they are not observed. The only composite structures with more than three quarks which have been observed have subcomponents which are mesons or baryons.

The strong force interactions we see in mesons and baryons, with protons and neutrons constituting the only two varieties of hadrons that are ever stable, take place overwhelmingly at very short distances. A hadron is on the order of a femtometer across. Strong force interactions sometimes extend beyond an individual hadron, but I'm not aware of any circumstance in which the strong force has been observed to act at a distance greater than that of several nuclei, something that follows from the nature of the strong force itself, which peaks at a short, characteristic distance but is vanishingly weak at shorter distances or at distances even as large as the diameter of a large atom.

At the tiny distances involved, it would be impossible to distinguish experimentally between gluons that move at the speed of light and gluons that move, for example, at (1+1*10^-5) times the speed of light (the OPERA estimate of the speed of high energy neutrinos). Definitively ruling out a mass for gluons is even more fraught and theory dependent, because on one hand, gluons are conceptualized in the relevant equations as having a "rest mass" of zero, but on the other hand, QCD attributes very little of the mass of hadrons (on the order of 1%) to the rest mass of the constituent quarks and almost all of the mass to the gluonic color force fields that bind them; in effect, to the glue which is embodied in gluons. The mass of a hadron is vastly greater than the sum of the rest masses of its parts, and the equations of QCD impart considerable dynamical mass to gluons. Moreover, given that gluons are never actually at rest, the concept of a "rest mass" of a gluon is as much a parameter in an equation as it is something "real" in the sense that it could be directly measured, at least in principle.

We can make some rough boundary estimates of the speed of a gluon based upon the size of the proton and neutron, our knowledge from experiment and lattice simulations about the internal structure of protons and neutrons (the gluon field is strongest in the middle and the three light quarks basically orbit around the edges), and the characteristic time period in which strong force interactions take place (which is shorter than the time frame of bottom quark decay, but longer than the time frame of top quark, W boson or Z boson decay). But there is far too much uncertainty in these estimates to make a very precise determination, and a naive estimate of the nucleon's diameter divided by the hadronization time could easily be too fast, because it would omit information about indirect paths from one quark to another and the frequency with which gluons are emitted by quarks. We have models that can fill in some of these blanks (although the theory doesn't necessarily break down the components of the process that add up to the overall hadronization time one by one), but the estimates that would be made are theory dependent, including the assumption that massless gluons travel at the speed of light, which isn't helpful when that is the parameter of the equation that you are interested in testing with great precision.

The OPERA experiment reopens this line of inquiry. Quarks and charged leptons whose speeds have been directly measured (in the case of quarks, indirectly in hadrons) couple to photons, which, by definition, move at the speed of light. Neutrinos, whose Lorentz speed limit might conceivably be slightly different, at least in the vicinity of Earth, don't couple to photons. Neither do gluons. Neither do Z bosons or Higgs bosons. Z bosons and Higgs bosons, unlike gluons and photons, have mass, so they must always travel at some speed less than their Lorentz speed limit, related to their kinetic energy.

The Z boson has the shortest lifetime of any of the fundamental particles, even shorter than the top quark, so it is virtually impossible to simultaneously measure its speed and energy with sufficient precision to distinguish its Lorentz speed limit from the speed of light.

We have just barely received a non-conclusive determination that the Higgs boson exists. It appears to be extremely unstable, just like the other massive bosons, the W boson and the Z boson. So there is no way that we can directly measure the Lorentz speed limit of the Higgs boson any time soon.

The way we first derived the speed of light in a rigorous scientific way was from Maxwell's equations, which set the speed of light, "c", equal to the inverse of the square root of the product of the permittivity of free space and the permeability of free space, measures related to the properties of electric and magnetic fields, respectively, in a vacuum. The current formulation of special and general relativity insists that the Lorentz speed limit for all kinds of particles is the same, but it wouldn't be inconceivable that the speed of a photon and the Lorentz speed limit of charged particles might be different from the Lorentz speed limit for particles that don't interact with photons.
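The arithmetic behind that derivation is short. A quick numerical check using the standard values for the vacuum constants (nothing here is specific to any linked source):

from math import sqrt, pi

epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu_0 = 4 * pi * 1e-7           # vacuum permeability, H/m

c = 1 / sqrt(epsilon_0 * mu_0)
print(c)  # ~2.99792e8 m/s, the measured speed of light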

We can get pretty precise relative speed estimates for neutrinos v. photons from a few supernovae that we have caught in the act, allowing us to measure the arrival time of a wave of neutrinos relative to the photons, and we can make a pretty decent one or two significant digit estimate of how far away the source supernova was based on red shift (and perhaps other methods). But this method cannot measure absolute distances to five significant digits, and we don't have a perfect understanding of the underlying supernova dynamics, so we can't be sure, for example, in what sequence the neutrinos and photons were emitted in that event.
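A minimal sketch of the kind of relative bound this yields, using round SN 1987A-like numbers as assumptions (a source roughly 168,000 light years away and a neutrino/photon arrival offset of a few hours, part of which is astrophysical rather than kinematic):

# Rough neutrino-vs-photon speed comparison from a supernova arrival-time offset.
# Distance and time offset are assumed round values in the spirit of SN 1987A.
seconds_per_year = 3.156e7
distance_ly = 1.68e5                            # assumed distance in light years
travel_time = distance_ly * seconds_per_year    # light travel time, seconds
arrival_offset = 3 * 3600.0                     # assumed offset, seconds (~3 hours)

fractional_speed_difference = arrival_offset / travel_time
print(fractional_speed_difference)              # ~2e-9, far below the ~1e-5 scale
                                                # of the OPERA anomaly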

Because the strong nuclear force and weak nuclear force operate only at short range, it isn't obvious to me that a revised theory of special relativity and general relativity in which there was one "c" for particles that couple to photons or are photons, and another slightly different "c'" for particles that don't couple to photons and aren't photons, would have any phenomenological impact that would be observable apart from neutrinos travelling a little bit faster than photons.

A theory with more than one Lorentz speed limit for different kinds of particles would be an ugly theory, but so far as I can tell, not one that would necessarily lead to any paradoxes or theoretical inconsistencies.

As a related aside, I don't think that we have found any way to confirm that gluons lack a magnetic dipole moment, the presence of which would be sufficient to indicate that they were composite rather than truly fundamental particles with no inherent electromagnetic charge at all. The determination that there are eight different kinds of gluons, and the way that Feynman diagrams for QCD interactions are handled, is almost itself a preon theory, further complicated by linear combinations, as is, for that matter, the derivation of the weak force bosons in electroweak unification theory. We don't call gluons or quarks composite, but the way they exchange color charges comes very close to that kind of description.

Tuesday, December 13, 2011

Parameterization Independent Q-L Complementarity

The relationship between the parameters of the CKM matrix and the PMNS matrix known as quark-lepton complementarity had seemed to hold only in one particular parameterization of these matrixes (out of nine possible parameterizations). A new study finds a way to achieve essentially the same relationship in any parameterization of the two matrixes.
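As a rough numerical illustration of what the complementarity relation says in the standard parameterization, here is a quick check using approximate, rounded circa-2011 central values for the 1-2 and 2-3 mixing angles (the specific numbers are my own assumptions, not taken from the cited study):

# Quark-lepton complementarity: theta12_CKM + theta12_PMNS ~ 45 degrees,
# theta23_CKM + theta23_PMNS ~ 45 degrees (standard parameterization).
# Angle values below are rounded, approximate circa-2011 fits.
theta12_ckm = 13.0    # Cabibbo angle, degrees
theta12_pmns = 34.0   # solar mixing angle, degrees
theta23_ckm = 2.4     # degrees
theta23_pmns = 45.0   # atmospheric mixing angle, degrees

print(theta12_ckm + theta12_pmns)   # ~47 degrees, close to 45
print(theta23_ckm + theta23_pmns)   # ~47 degrees, close to 45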

Bose-Einstein Condensate Dark Matter

A recent preprint sketches out the notion of a Bose-Einstein condensate as a dark matter candidate and finds that it is a better fit to the data than traditional cold dark matter models because it produces a mass distribution closer to the one implied by observed rotation curves, as opposed to the cuspy mass distributions found in cold dark matter models. In a condensate, the particles all occupy a single macroscopic quantum state, and the resulting quantum pressure (together with any repulsive self-interaction between the particles) counteracts the clumpiness one would see with gravity alone, allowing for a spread out mass distribution. Settled physics describes these properties of Bose-Einstein condensates, and the low temperature environment of deep space is a natural place for this state of matter to persist.

The pre-print finds that the temperatures in deep space are cold enough for condensates to have arisen during the formation of the universe, and determines that only negligible error is introduced, at observationally viable scales, by approximating the behavior of a Bose-Einstein condensate in thermal equilibrium with the cosmic background radiation temperature of about 2.73 kelvin as a zero temperature Bose-Einstein condensate.

The study concludes (although only somewhat obliquely) that the masses of the dark matter particles within this condensate, constrained by factors such as the data from the bullet cluster collision, should be of about the same order of magnitude as neutrino masses (10^-3 eV to 1 eV), which would conveniently dispense with the need to discover some new fundamental particle or type of interaction to explain dark matter effects at the galactic scale, while providing a non-baryonic dark matter candidate (since it is quark free) consistent with prior data.
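As a very rough plausibility check (my own sketch, not from the preprint), the textbook ideal-gas condensation temperature T_c = (2*pi*hbar^2 / (m*k_B)) * (n/zeta(3/2))^(2/3) can be evaluated for an assumed eV-scale particle at an assumed galactic-halo-like density:

# Ideal-gas Bose-Einstein condensation temperature for an assumed eV-scale
# particle at an assumed local-halo-like density. Purely illustrative.
import math

hbar = 1.0546e-34        # J*s
k_B = 1.3807e-23         # J/K
eV_to_kg = 1.7827e-36    # mass of 1 eV/c^2 in kg
zeta_3_2 = 2.612         # Riemann zeta(3/2)

m_particle_eV = 1.0                            # assumed particle mass, eV
m = m_particle_eV * eV_to_kg                   # kg
rho_eV_per_cm3 = 0.3e9                         # assumed DM density, ~0.3 GeV/cm^3
n = (rho_eV_per_cm3 / m_particle_eV) * 1e6     # number density, m^-3

T_c = (2 * math.pi * hbar**2 / (m * k_B)) * (n / zeta_3_2) ** (2.0 / 3.0)
print(T_c)   # a few kelvin, i.e. in the same ballpark as the 2.73 K background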

The pre-print addresses earlier criticisms of Bose-Einstein condensate dark matter by claiming that the critics have not done a good job of modeling the way that Bose-Einstein condensates would behave.

Of course, this implies a universe with a whole lot of neutrinos in it. By treating this dark matter as a relic of the early universe that undergoes Bose-Einstein condensation when the universe finally gets cool enough, the proposal escapes the usual assumption that neutrino dark matter should be "hot" (i.e. move at relativistic speeds), which would wipe out galactic structure. The pre-print refrains from suggesting a leptogenesis mechanism that could create that many neutrinos, and likewise refrains from making a definitive association between the predicted Bose-Einstein condensate dark matter and massive neutrinos.

If improved census estimates of the amount of baryonic matter in the universe are accurate and the ordinary matter to dark matter ratio is about 1-1, this suggests a baryogenesis/leptogenesis model in which the total mass of all of the leptons in the universe and the total mass of all of the hadrons in the universe are equal.
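For scale, equal total masses of leptons and hadrons would require an enormous number of neutrinos per baryon. A one-line estimate, assuming (purely for illustration) that essentially all of the lepton mass is carried by neutrinos with an average mass of 0.1 eV:

# Neutrinos per baryon needed for equal total lepton and hadron mass,
# assuming most lepton mass is in neutrinos of ~0.1 eV (illustrative value).
m_proton_eV = 938.3e6      # proton mass in eV
m_neutrino_eV = 0.1        # assumed average neutrino mass in eV

print(m_proton_eV / m_neutrino_eV)   # ~1e10 neutrinos per baryon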

A similar idea with right handed composite neutrinos with keV masses is explored here. Another similar theory is explored here. A beta decay test of the viability of this kind of scenario has been proposed. Another discussion of experimental evidence in astronomy for non-relativistic neutrinos as dark matter can be found here.

This paper is notable for being one of the few to address how leptogenesis could arise without violating B-L symmetry or resorting to Majorana mass related methods, although it still requires "new physics."

It isn't obvious to me, however, that leptogenesis needs beyond the Standard Model physics to produce an excess of neutrinos over charged leptons and quarks. For example, suppose a Z boson decays to a W+ and a W- boson, which in turn decay to a positron, a neutrino, an electron and an antineutrino. The positron and electron can then annihilate into a photon or Z boson, but the neutrino and antineutrino do not (since they can couple to a Z boson, but not to a photon, and could be prevented by matter-energy conservation from creating a final state with anything but neutrinos in it). The photon or Z boson can then repeat the cycle by creating a W+ and W- boson again, until there isn't sufficient matter-energy in the system for the cycle to repeat itself, with each pass adding neutrinos but no net charged leptons. Charge conservation limits the number of charged leptons and is suggestive of a close link between net charged lepton number and baryon number, and the instability of systems with a lepton and its antilepton in the same vicinity prevents the actual number of charged leptons from being much greater than the total baryon number in the long term. But no similar bound impacts neutrino number.

Dead Sea Once Dry

The Dead Sea completely dried out about 120,000 years ago, during an arid period in the region. The earliest evidence for anatomically modern humans outside Africa is from around 100,000 years ago. The study took sediment cores from the Dead Sea to depths sufficient to provide a decent continuous paleo-climate record for the region for the entire duration of the time that humans have been outside Africa, and for at least some of the time period during which it was occupied by Neanderthals (who were present in the Near East until about 50,000 years ago, give or take a few thousand years, cohabiting the region with modern humans for many thousands of years).

Higgs Day!

Philip Gibbs has a great live blog account of today's announcements about the LHC Higgs boson results. Some highlights:

CMS . . . exclusion from 130 GeV up. Excess seen at about 123 GeV of 2.5 Sigma.

The 123 GeV peak is heavier than the 119-121 GeV peak that had been rumored in the past week, and, once rounding error and statistical uncertainty are considered, is close enough to the ATLAS result.

Both the CMS and ATLAS results show surprisingly wide decay widths in their diphoton results, about 6 GeV (roughly three times the width of the top quark), which are overlapping but shifted from each other, although this may partially be an artifact of thin data sets rather than an accurate measure of the true decay width of the Higgs boson.

The 2.5 sigma excess is accompanied by another not quite 2 sigma excess at 137 GeV, but given the much better resolution of the data there, a real Higgs boson at that mass ought to have produced a much stronger signal than sub-two sigma by now, so that mass remains within the 95% confidence interval exclusion range. "The CMS ZZ->4l clearly rules out the 140 GeV possibility, but has an excess at lower mass."

The ATLAS experiment has about a 2.8 sigma Higgs boson signal at about 126 GeV. ATLAS sees a potential signal, not quite 2 sigma, driven by data from the ZZ->4l channel at roughly 240 GeV, but is within the Brazil bands everywhere heavier than that up to 500 GeV, and in that channel at all lower masses. There are results in excess of 1 sigma in the 120-130ish GeV range from ATLAS in this channel, but nothing that by itself confirms the diphoton result that ATLAS is seeing (pre-announcement rumors stated that the diphoton result involves just three events but is significant because there is nothing else that can create that signal). The combined ATLAS data is outside the two sigma Brazil bands from about 123 GeV to 129 GeV with a peak at 126 GeV. Thus, the results from ATLAS are just barely consistent with those from CMS. (These are the two main experiments at the LHC looking for a Higgs boson; the only other place doing high energy physics Higgs boson searches finished its work earlier this year when Tevatron was shut down for lack of Congressional funding, due to its inability to add much to LHC findings with its less powerful experiment.) CMS doesn't see much of significance in the WW channel, which has far more background noise, making a significant observation harder to obtain.

The viXra combined plot from the diphoton channel is significant in the mid-120s GeV range, with no other SM light Higgs boson masses in the running. "[T]he CMS combined plot . . . gives a clean indication for no Higgs about 130 GeV [and up] and the right size signal for a Higgs at about 125 GeV, but there is still noise at lower mass so chance that it could be moved."

Per Lubos: "In ATLAS combination with 2.1 sigma (locally) from ZZ and 1.4 sigma (locally) from WW, the combined excess near 126 GeV is 3.6 sigma locally and 2.3 sigma globally (with the look-elsewhere effect correction)[.]"

Resonnances reports: "CMS excludes Higgs down to 127 GeV. ATLAS also excludes the 112.7-115.5 GeV range" and that in the diphoton and quadlepton channels that there is a "mass resolution of order 2 GeV."

The excess is seen by both experiments and in each of these channels. The excess in H→γγ peaks around 124 GeV in CMS, and around 126 GeV in ATLAS, which I guess is perfectly consistent within resolution. In the 4-lepton channel, ATLAS has 3 events just below 125 GeV, while CMS has 2 events just above 125 GeV. It is precisely this overall consistency that makes the signal so tantalizing.

In sum, the announcement seems to show a Higgs boson in the mid-120 GeV mass range, with a significance for the combined results on the order of 3 to 4 sigma. But the coincidence between the CMS and ATLAS results isn't quite as precise as one might hope, either in estimated mass or in signal significance, and confidence in the result is also reduced by the fact that the signals in some of the other channels at this mass range are not as strong as one might wish to see, although there do seem to be some individually insignificant but collectively notable excesses over expected background in the noisier channels.

There are a few other fairly weak bumps in the data, but they are not terribly consistent with each other, so they are probably just flukes. Low experimental accuracy at the low end of the mass range (ca. 115 GeV-121 GeV) leaves the data there particularly inconclusive and uninformative.

From a numerology perspective, the results are consistent with a Higgs boson mass equal to the sum of the W+, W- and Z boson masses divided by two, which is about 126 GeV, but are probably a tad heavy to be equal to half of the Higgs field vacuum expectation value, which would be about 123 GeV.
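The arithmetic behind both of these numerological benchmarks, using the standard (circa 2011) values of the W mass, Z mass and Higgs vacuum expectation value:

# Two "numerology" benchmarks for the Higgs mass.
m_W = 80.4      # GeV
m_Z = 91.2      # GeV
vev = 246.2     # GeV, Higgs field vacuum expectation value

print((m_W + m_W + m_Z) / 2)   # ~126.0 GeV
print(vev / 2)                 # ~123.1 GeV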

Generally, in physics, two sigma results routinely disappear with more data and rarely amount to anything, three sigma results pan out about half the time, and five sigma results are considered lasting and permanent discoveries. Today's announcement nudges the likelihood that a light Standard Model Higgs boson exists to somewhat better than 50-50, but, as promised, is inconclusive at this point. The fact that there are strong theoretical reasons to expect that a light Standard Model Higgs boson (or an indistinguishable light SUSY Higgs boson) exists nudges the odds a little further against a Higgsless or heavy-Higgs-only model being correct.
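For reference, the one-sided Gaussian tail probabilities attached to these sigma levels can be computed directly; this is the standard conversion, nothing specific to the Higgs searches.

# One-sided Gaussian tail probabilities for 2, 3, and 5 sigma.
import math

def one_sided_p(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 5):
    print(n, one_sided_p(n))
# 2 sigma ~ 2.3e-2, 3 sigma ~ 1.3e-3, 5 sigma ~ 2.9e-7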

Of course, this is only the first year of what will be more than a decade of experiments at the LHC. A year from now, the inconclusive results that we received today will almost surely be confirmed or refuted, and the LHC will move on to looking for other kinds of new physics.

I've been fortunate enough to be alive while quite a few of the fundamental particles of particle physics were first observed (and even while a few of the higher atomic number atoms in the periodic table were first synthesized and some of the key equations of the Standard Model were developed). A Higgs boson could very well be the last one discovered ever, or at least, during my lifetime. Assuming that a Higgs boson is discovered at this mass, the Standard Model of Particle Physics will have no missing pieces, although the masses and transition matrix for neutrinos will still be somewhat indefinite and the question of whether neutrino masses are Majorana masses or Dirac masses will remain unresolved (my sense is that the latter is more likely, but see-saw mechanisms are very popular in theoretical circles).

Quote of the day:

Kea said... Wow! Fairies exist! Amazing work from ATLAS and CERN, and thanks for the early report.

FWIW, I rather like Kea's mocking terminology that calls Higgs bosons "fairies" and the Higgs boson, "the fairy field." It certainly beats calling it the "God particle" hands down.

UPDATE: Gibbs has his combined plot estimates out and they are very convincing, particularly after LEP and Tevatron data are included and one looks at the signal versus no-signal probability charts. I'm pretty comfortable that the Higgs boson find, currently a 3 sigma result at about 125 GeV +/- a couple of GeV, is real. The combined data also reinforce the conclusion that this will be the only Higgs boson detected in the under 500 GeV mass range (at least), based upon the data in the combined plots from all four experiments, despite modest "bumps" in specific channels at certain other mass ranges - in the overall picture they fade to mere statistical noise.

This is a huge triumph for the scientists who predicted this discovery back in the 1970s. It completes the Standard Model. No particle predicted by it is missing, and no particle (or force, other than gravity, which is beyond its stated scope) that is not predicted by it has been discovered. There is no replicated high energy physics experiment that is significantly at odds with its predictions (the lone significant deviation at the moment is OPERA's superluminal neutrino result).

At the end of the 1800s, scientists thought that they had conquered all of the fundamental rules of nature, although there were physical constants to be honed and applications of those rules that had not been worked out. Five generations later, we may very well have actually achieved that result.

I am increasingly inclined to conclude that we will find no supersymmetry, and that we will ultimately be able to explain dark matter with the physics that we already have in hand. When we someday find a way to integrate general relativity and the Standard Model, I suspect it will turn out to be a dotting of i's and crossing of t's moment rather than a discovery that provides any meaningful new phenomenological insight. With all of the underlying pieces having pretty much come together, we may be just a mathematical trick or two away from the day when that formulation of quantum gravity becomes possible and the job of theoretical physicists becomes merely a matter of making it all look as pretty as possible.

Monday, December 12, 2011

Higgs Data May Be More Complicated Than Expected

A compilation of the huge repository of rumors concerning tomorrow's big announcement regarding the ATLAS and CMS Higgs boson searches at the LHC suggests that the announced results may form a far less coherent picture than expected, with indications of possible signals at more than one mass (which, if true, would be the clearest experimental sign ever of supersymmetry, which generically predicts more than one Higgs boson).

BRST Symmetry

BRST symmetry is a mathematically and geometrically challenging property present in various kinds of quantum mechanical equations, in which one kind of non-physical output of the equations cancels out another kind of non-physical output, making the equations "renormalizable" (i.e. possible in principle to calculate without blowing up into infinities). In the context of QCD, another way to put this is that "all the UV [i.e. high energy] divergences of the theory can be cancelled by the counterterms[.]" This symmetry appears to be present even in the non-abelian gauge fields of the weak force and strong force equations, although this reality may need footnotes for exceptions in certain special cases. This was described theoretically in the 1970s, but apparently, since then, this insight hasn't done much besides assuaging concerns that renormalization didn't have a mathematically rigorous basis.
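For the concretely inclined, the BRST transformations for a non-abelian gauge field take the schematic form below (in one common convention; signs and normalizations vary by textbook), with gauge field A, ghost c, antighost c-bar, auxiliary field B, and structure constants f^{abc}. The key property is that the operation s is nilpotent, s^2 = 0, which is what organizes the cancellation of unphysical states.

s A_\mu^a = \partial_\mu c^a + g f^{abc} A_\mu^b c^c, \qquad
s c^a = -\tfrac{g}{2} f^{abc} c^b c^c, \qquad
s \bar{c}^a = B^a, \qquad
s B^a = 0, \qquad
s^2 = 0 .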

The theoretical program involved in working with BRST symmetry also provides a theoretically sound alternative to the Feynman path integral conceptualization of quantum mechanics, one that is rather more conceptually challenging to grok, but that generalizes more naturally to cases more complex than QED in Minkowski spacetime.

Europe's Genetic History Is Complex

One doesn't have to adopt Dienekes' theory regarding how West Eurasians came to be in full to recognize that his data necessarily implies a complex population genetic history for West Eurasia, because in "the Old World . . . distantly located populations are often more similar to each other than to their more immediate geographic neighbors."

Mixing Angle and Coupling Constant Numerology

Quantum Diaries Survivor revisits some interesting phenomenological relationships among the quark and lepton mixing angles, which are also tied into a way to use fundamental constants to derive the electromagnetic coupling constant. On the to-do list is a check to see how the proposal fits with quark-lepton complementarity ideas, which seem more soundly theoretically motivated.

Friday, December 9, 2011

Another B Meson Excess CP Violation Bites The Dust

A 2008 result showing more CP violation in B meson decays at Tevatron than the Standard Model predicts has turned out to be a statistical fluke, after the collection of more data and a somewhat more precise analysis of the experiment.

New Study Supports ANI/ASI Description of South Asian Whole Genomes

Indian populations are characterized by two major ancestry components, one of which is spread at comparable frequency and haplotype diversity in populations of South and West Asia and the Caucasus. The second component is more restricted to South Asia and accounts for more than 50% of the ancestry in Indian populations. Haplotype diversity associated with these South Asian ancestry components is significantly higher than that of the components dominating the West Eurasian ancestry palette.

Modeling of the observed haplotype diversities suggests that both Indian ancestry components are older than the purported Indo-Aryan invasion 3,500 YBP.

Consistent with the results of pairwise genetic distances among world regions, Indians share more ancestry signals with West than with East Eurasians.

From here (open access) via here.

The paper itself goes on to state, in some portions that I excerpt below (greater than and less than signs transliterated with words because of their impact on html formatting in a blog post; internal references to sources and figures omitted):

Reich et al. have also made an argument for a sizeable contribution from West Eurasia to a putative ancestral north Indian (ANI) gene pool. Through admixture between an ancestral south Indian (ASI) gene pool, this ANI variation was found to have contributed significantly to the extant makeup of not only north (50%–70%) but also south Indian populations (greater than 40%). This is in contrast with the results from mtDNA studies, where the percentage of West Eurasian maternal lineages is substantial (up to 50%) in Indus Valley populations but marginal (less than 10%) in the south of the subcontinent. . . .

[W]e used the model-based structure-like algorithm ADMIXTURE that computes quantitative estimates for individual ancestry in constructed hypothetical ancestral populations. Most South Asians bear membership in only two of the constructed ancestral populations at K = 8. These two main ancestry components—k5 and k6, colored light and dark green—are observed at all K values between K = 6 and K = 17. These correlate (r > 0.9; p < 0.00001) perfectly with PC4 and PC2 in West Eurasia, respectively. Looking at the Pakistani populations (0.51) and Baluchistan (Balochi, Brahui, and Makrani) in particular (0.59), the proportion of the light green component (k5) is significantly higher than in the Indian populations, (on average 0.26). Importantly, the share of this ancestry component in the Caucasus populations (0.50) is comparable to the Pakistani populations.

There are a few populations in India who lack this ancestry signal altogether. These are the thus-far sampled Austroasiatic tribes from east India, who originated in Southeast Asia and represent an admixture of Indian and East Asian ancestry components, and two small Dravidian-speaking tribes from Tamil Nadu and Kerala. However, considering the geographic spread of this component within India, there is only a very weak correlation (r = 0.4) between probability of membership in this cluster and distance from its closest core area in Baluchistan. Instead, a more steady cline (correlation r = 0.7 with distance from Baluchistan) of decrease of probability for ancestry in the k5 light green ancestral population can be observed as one moves from Baluchistan toward north (north Pakistan and Central Asia) and west (Iran, the Caucasus, and, finally, the Near East and Europe).

If the k5 light green ancestry component originated from a recent gene flow event (for example by a demic diffusion model) with a single center of dispersal where the underlying alleles emerged, then one would expect different levels of associated haplotypic diversity to suggest the point of origin of the migration. . . . Our simulations show that differences in haplotype diversity between source and recipient populations can be detected even for migration events that occurred 500 generations ago (∼12,500 years ago assuming one generation to be 25 years). For alleles associated with k5, haplotype diversity is comparable among all studied populations across West Eurasia and the Indus basin. However, we found that haplotypic diversity of this ancestry component is much greater than that of those dominating in Europe (k4, depicted in dark blue) and the Near East (k3, depicted in light blue), thus pointing to an older age of the component and/or long-term higher effective population size. Haplotype diversity flanking Asian alleles (k7) is twice greater than that of European alleles—this is probably because the k7 ancestry component is a composite of two Asian components ([at] K > 10).

In contrast to widespread light green ancestry, the dark green ancestry component, k6 is primarily restricted to the Indian subcontinent with modest presence in Central Asia and Iran. Haplotype diversity associated with dark green ancestry is greatest in the south of the Indian subcontinent, indicating that the alleles underlying it most likely arose there and spread northwards. It is notable that this ancestry component also exhibits greater haplotype diversity than European or Near Eastern components[.] . . .

[G]enetic diversity among Pakistani populations (average pairwise FST 0.0056, although this measure excludes the Hazara, who show substantial admixture with Central Asian populations) is less than one third of the diversity observed among all South Asian populations (0.0184), even when excluding the most divergent Austroasiatic and Tibeto-Burman speaking groups of east India. . . . all South Asian populations, except for Indian Tibeto-Burman speakers, show lower FST distances to Europe than to East Asia. This could be either because of Indian populations sharing a common ancestry with West Eurasian populations because of recent gene flow or because East Asian populations have relatively high pairwise FST with other non-African populations, probably because of their history of genetic bottlenecks.

Similarly, the clines we detect between India and Europe (e.g., PC1 and PC2) might not necessarily reflect one major episode of gene flow but be rather a reflection of complex demographic processes involving drift and isolation by distance. Nevertheless, the correlation of PC1 with longitude within India might be interpreted as a signal of moderate introgression of West Eurasian genes into western India, which is consistent with previous studies on uniparental and autosomal markers.

Overall, the contrasting spread patterns of PC2 and PC4, and of k5 and k6 in the ADMIXTURE analysis, could be seen as consistent with the recently advocated model where admixture between two inferred ancestral gene pools (ancestral northern Indians [ANI] and ancestral southern Indians [ASI]) gave rise to the extant South Asian populace. The geographic spread of the Indian-specific PC2 (or k6) could at least partly correspond to the genetic signal from the ASI and PC4 (or k5), distributed across the Indus Valley, Central Asia, and the Caucasus, might represent the genetic vestige of the ANI. However, within India the geographic cline (the distance from Baluchistan) of the Indus/Caucasus signal (PC4 or k5) is very weak, which is unexpected under the ASI-ANI model, according to which the ANI contribution should decrease as one moves to the south of the subcontinent. This can be interpreted as prehistorical migratory complexity within India that has perturbed the geographic signal of admixture.

Overall, the locations of the Indian populations on the PC1/PC2 plot reflect the correlated interplay of geography and language. In concordance with the geographic spread of the respective language groups, the Indian Indo-European- and Dravidic-speaking populations are placed on a north to south cline. The Indian Austroasiatic-speaking populations are, in turn, in agreement with their suggested origin in Southeast Asia drawn away from their Indo-European speaking neighbors toward East Asian populations. In this respect, it is interesting to note that, although represented by only one sample each, the positions of Indo-European-speaking Bhunjia and Dhurwa amidst the Austroasiatic speakers probably corroborates the proposed language change for these populations.

[I]t was first suggested by the German orientalist Max Müller that ca. 3,500 years ago a dramatic migration of Indo-European speakers from Central Asia (the putative Indo Aryan migration) played a key role in shaping contemporary South Asian populations and was responsible for the introduction of the Indo-European language family and the caste system in India. A few studies on mtDNA and Y-chromosome variation have interpreted their results in favor of the hypothesis, whereas others have found no genetic evidence to support it.

However, any nonmarginal migration from Central Asia to South Asia should have also introduced readily apparent signals of East Asian ancestry into India. Because this ancestry component is absent from the region, we have to conclude that if such a dispersal event nevertheless took place, it occurred before the East Asian ancestry component reached Central Asia. The demographic history of Central Asia is, however, complex, and although it has been shown that demic diffusion coupled with influx of Turkic speakers during historical times has shaped the genetic makeup of Uzbeks (see also the double share of k7 yellow component in Uzbeks as compared to Turkmens and Tajiks), it is not clear what was the extent of East Asian ancestry in Central Asian populations prior to these events.

Another example of an heuristic interpretation appears when we look at the two blue ancestry components that explain most of the genetic diversity observed in West Eurasian populations (at K = 8), we see that only the k4 dark blue component is present in India and northern Pakistani populations, whereas, in contrast, the k3 light blue component dominates in southern Pakistan and Iran. This patterning suggests additional complexity of gene flow between geographically adjacent populations because it would be difficult to explain the western ancestry component in Indian populations by simple and recent admixture from the Middle East.

Several aspects of the nature of continuity and discontinuity of the genetic landscape of South Asia and West Eurasia still elude our understanding. Whereas the maternal gene pool of South Asia is dominated by autochthonous lineages, Y chromosome variants of the R1a clade are spread from India (ca 50%) to eastern Europe and their precise origin in space or time is still not well understood. In our analysis we find genetic ancestry signals in the autosomal genes with somewhat similar spread patterns. Both PC2 and k5 light green at K = 8 extend from South Asia to Central Asia and the Caucasus (but not into eastern Europe).

In an attempt to explore diversity gradients within this signal, we investigated the haplotypic diversity associated with the ancestry components revealed by ADMIXTURE. . . our current results indicate that the often debated episode of South Asian prehistory, the putative Indo-Aryan migration 3,500 years ago falls well within the limits of our haplotype-based approach. We found no regional diversity differences associated with k5 at K = 8. Thus, regardless of where this component was from (the Caucasus, Near East, Indus Valley, or Central Asia), its spread to other regions must have occurred well before our detection limits at 12,500 years. Accordingly, the introduction of k5 to South Asia cannot be explained by recent gene flow, such as the hypothetical Indo-Aryan migration. The admixture of the k5 and k6 components within India, however, could have happened more recently—our haplotype diversity estimates are not informative about the timing of local admixture.

Both k5 and k6 ancestry components that dominate genetic variation in South Asia at K = 8 demonstrate much greater haplotype diversity than those that predominate in West Eurasia. This pattern is indicative of a more ancient demographic history and/or a higher long-term effective population size underlying South Asian genome variation compared to that of West Eurasia. Given the close genetic relationships between South Asian and West Eurasian populations, as evidenced by both shared ancestry and shared selection signals, this raises the question of whether such a relationship can be explained by a deep common evolutionary history or secondary contacts between two distinct populations. Namely, did genetic variation in West Eurasia and South Asia accumulate separately after the out-of-Africa migration; do the observed instances of shared ancestry component and selection signals reflect secondary gene flow between two regions, or do the populations living in these two regions have a common population history, in which case it is likely that West Eurasian diversity is derived from the more diverse South Asian gene pool.

Most of this analysis makes sense (although it is extremely equivocal). But I disagree with the conclusion that a lack of regional differences in genetic diversity implies a time depth of more than 12,500 years. It is more plausible, in my view, given Indo-European historical and pre-historical evidence from multiple other sources, to assume a recent and simultaneous dispersal of a large population to many different regions, with the size of the population entering South Asia being somewhat larger than elsewhere, thus sustaining similar levels of diversity in the different regions, South Asia included. In part, their conclusion seems driven by probably inaccurate assumptions about the time depth of the principal European and Near Eastern autosomal components (k3 and k4). Further, if k5 is Indo-Aryan and it had a significant Caucasian source, it may have had considerable age prior to the Indo-Aryan expansion, during which it could have accumulated its diversity. The weak geographic pattern in k5 within India may reflect a fairly complete subcontinental imposition of a high caste ANI ruling class.

I am also inclined to think that the West Asian/ANI components (k3, k4 and k5) probably represent at least two waves of migration, at least one of which is Harappan (probably k3), and at least one of which is Indo-Aryan (probably k5 with a minority component of k4).

The inference that Central Asia now has East Asian components that it may not have had earlier (the current component appears, from historical evidence and ancient DNA, to have its origins in the last 2,000 years), drawn from the fact that those components did not enter the South Asian gene pool from Central Asia, is notable and probably correct.

The study also confirms prior studies in finding that Tibeto-Burman populations are more distinct genetically (and hence probably more recent arrivals) than other South Asian populations.

The autosomal profiles show significant instances of African origins in Pakistani populations (and in a Dravidian speaking Brahui population that is genetically no different from its Indo-European language speaking neighbors), but strong ASI components and no discernible African components in Dravidian speakers, even in Andhra Pradesh where Y-DNA haplogroup T frequencies are highest. Andhra Pradesh also has very low percentages of the Near Eastern (k3) or European (k4) components in this breakdown, although the sample is small enough that it may not be representative for relatively recently arrived, relatively moderate frequency genetic types that have not reached fixation in the area.

Thursday, December 8, 2011

Genus Homo Anatomy May Explain Speech

Most of the effort to explain the superior speech abilities of humans has focused on cognitive abilities, but a new study finds that the loss of the air sacs that are part of the anatomy of the great apes and of Australopithecus afarensis, but not of members of the genus Homo (at least as of 600,000 years ago), may also have been important in the development of speech in humans. The study doesn't resolve the presence of this feature in Homo ergaster or Homo erectus, ca. 2,500,000 to 600,000 years ago, although it suggests that it makes sense to study those remains with this feature in mind.

GUT Intuitions

If, as loop quantum gravity theorists suggest, gravity truly is a function of the geometry of space-time that is different in kind from the particle basis of the other three fundamental forces in physics, then a grand unified theory (GUT) of the other three forces, formulated with a version of the Standard Model particle propagators defined on a discrete quantum space-time grid rather than on a continuous space-time, would be a theory of everything. This would have the virtue of eliminating the need for the extra dimensions found in string theory/M theory, but not in mere supersymmetry.

Indeed, simply formulating the Standard Model particles and forces in a not precisely local discrete space-time like loop quantum gravity would itself be a grand accomplishment, finally providing a way to unify the Standard Model and general relativity at all.

It also wouldn't surprise me if the pathologies, such as magnetic monopoles and proton decay, found in many GUTs (for example, the earliest minimal non-supersymmetric SU(5) theories) could be cured if they were embedded in a discrete, general relativistic space-time rather than a continuous Minkowski space-time.

Another particularly nifty speculative stab at unification by Cohl Furey at the Perimeter Institute (flagged by Kea at Arcadian Pseudofactor) suggests that an algebra arising from the combination of the real numbers, R, the complex numbers, C, the quaternions, H, and the octonions, O, might very well be capable of reproducing the Standard Model, with the algebraic limitations of octonions proving particularly useful in ruling out non-physical combinations (hence an RxCxHxO model); a tiny illustration of the quaternion factor appears after the excerpt below. As the abstract explains:

Unified Theory of Ideals (2010)

Unified field theories try to merge the gauge groups of the Standard Model into a single group. Here we lay out something different. We give evidence that the Standard Model can be reformulated simply in terms of numbers in the algebra RxCxHxO, as with the earlier work of Dixon. Gauge bosons and the fermions they act on are unified together in the same algebra, as are the Lorentz transformations and the objects they act on. The theory aims to unify everything into the algebra RxCxHxO. To set the foundation, we show this to be the case for a single generation of left-handed particles. In writing the theory down, we are not building a vector space structure, and then placing RxCxHxO numbers in as the components. On the contrary, it is the vector spaces which come out of RxCxHxO.

The paper gives just one incomplete example of the process (for left-handed first generation particles) and doesn't follow the concept to completion. But it does show promise.
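As a trivial illustration of the kind of algebraic structure involved (this is just the quaternion piece of RxCxHxO, and my own toy example, nothing from the paper itself), here is a Hamilton-product check that the quaternion units do not commute:

# Hamilton product for quaternions (w, x, y, z); checks that i*j = k but j*i = -k.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))   # (0, 0, 0, 1)  = k
print(qmul(j, i))   # (0, 0, 0, -1) = -k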