Thursday, February 28, 2013

Louse Genetics Parallel Human DNA Clades

Lice have been parasites of primates in Africa for twenty-five million years, and they remain so today. 

The current study is based upon a somewhat disappointingly small sample consisting of "75 human head lice [that] were collected from different individuals at 10 localities throughout the world. Clothing lice came from two sites: Canada and Nepal. Canadian clothing lice (N = 16) were collected from a single homeless person. The two clothing lice from Nepal (N = 2) were collected from two persons." This small sample size, and the paper's comparison to divergent results from an even smaller louse genetics study by Leo et al. (2005) of eleven double clothing-head lice infections (designed to determine whether the two forms constitute one species or two), suggest that neither study alone, nor the two combined, has secured a sample comprehensive enough to capture anything close to the complete global genetic diversity of human head and clothing lice.

The ten locations were a western Canadian city, New York City, San Francisco, two locations in Florida, Honduras, Thailand, Nepal, Cambodia, and Norway.  The mtDNA clade analysis was also informed by five previously published studies of louse genetics.  As a result, the geographic descriptions below convey the impression that they are more complete than is actually the case.  For example, the fact that some regions, like South America, aren't specifically described appears to be a function of the lack of a sample from those locations, rather than necessarily constituting evidence of a population genetic barrier between Central America and South America that played a role in human louse genetics.

Human Head Louse mtDNA genetics

There are three main clades in the population genetics of lice (open access) based on their mtDNA.  Clade A is found worldwide, but has two internal geographic clusters, one containing all of the Clade A lice outside Africa and the other containing the African lice.  Clade B is found in Europe, Australia, Central America and North America.  Clade C is found in Nepal and Ethiopia.  The analysis was conducted using essentially the same cluster analysis tools used in human population genetics studies.

Clade A is estimated to have diverged from the common ancestor of Clades A and B about 110,000 to 540,000 years ago.  No date is suggested in the study for the internal clustering within Clade A, although presumably it arose later than the divergence date for the entire clade.  Clade B is estimated to have diverged about 150,000 years ago.  Clade C also diverged on the order of 150,000 years ago.  The chimp louse diverged from the common ancestor of Clades A, B and C about 2,000,000 years ago.
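Divergence dates like these come from molecular clock methods.  A minimal sketch of the underlying arithmetic, using invented numbers rather than the paper's actual data: under a strict clock, the time since two lineages split is the pairwise sequence divergence divided by twice the per-lineage substitution rate.

```python
# Illustrative molecular-clock calculation (hypothetical inputs, not the
# study's actual data): divergence time t = d / (2 * mu), where d is the
# pairwise sequence divergence between two clades and mu is the per-site,
# per-year substitution rate along each lineage.

def divergence_time(pairwise_divergence: float, subs_per_site_per_year: float) -> float:
    """Years since two lineages split, assuming a strict molecular clock."""
    return pairwise_divergence / (2.0 * subs_per_site_per_year)

# e.g. 3% mtDNA divergence at an assumed rate of 1e-7 substitutions/site/year
t = divergence_time(0.03, 1e-7)
print(f"{t:,.0f} years")  # prints 150,000 years
```

The same arithmetic run in reverse is how a calibration point (such as the chimp-human louse split) is used to estimate the rate in the first place.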

The dates for Clade B and C are consistent with divergences associated with the original Out of Africa migration.  Clade A could be consistent with either Out of Africa, or with the appearance of the common ancestor of Neanderthals and modern humans, or with an intermediate date such as the emergence of the first modern humans.

The ABC divergence date from the chimp louse takes place right around the time that Homo erectus evolved and became the first hominin to leave Africa.
[P]revious studies suggested that louse mtDNA haplogroup A has had a long history associated with the host lineage that led to anatomically modern humans, Homo sapiens. Studies of modern human expansion out of Africa show the footprint of serial founder effects on the genetic diversity of human populations as revealed by the human pattern of increased genetic distance and decreased diversity with distance from Africa. The microsatellite loci developed in this study are ideal markers to measure louse genetic diversity and how it parallels to human diversity. Louse mitochondrial haplogroup B is found in the New World, Europe and Australia but not in Africa. Reed et al. suggested that its evolutionary origins might lie with archaic hominids from Eurasia (i.e., Homo neanderthalensis) and that they became associated with modern humans via a host switch during periods of overlap.
Autosomal Louse Genetics

Autosomal louse genetics were also considered and, in a step that far too few studies of human genetics take, were correlated with the mtDNA data.
Population structure was inferred with a Bayesian clustering approach implemented in the STRUCTURE software. In all the three STRUCTURE analysis, all worldwide human lice were assigned to four genetic clusters (K = 4), one defined by clothing lice from Canada, the other head lice from North America and Europe, a third cluster was composed of head lice from Honduras, and the fourth cluster included Asian lice (both head and clothing lice). . . . 
In another STRUCTURE analysis, we incorporated the mitochondrial haplogroup data for each louse. . . . there is no correlation between mitochondrial haplogroups (A and B) and nuclear genetic clusters, at least among the current samples.  
We also employed the multivariate technique Principal Coordinate Analysis (PCA) . . . We found similar results as the STRUCTURE analyses, where one cluster included head lice from North America and Europe, while all Asian and Central American lice comprised a second cluster with the exception of the clothing louse from Nepal that showed an intermediate position between this group and clothing lice from Canada. . . .

For Canada, New York, Honduras, and Cambodia populations we further analyzed their genetic substructure by analyzing each population individually using STRUCTURE. These results revealed an increase in the number of regional genetic clusters in New York (K = 3), and in Cambodia (K = 3). Although Evanno's method cannot evaluate K = 1 as the most likely number of clusters, we found that populations from Canada and Honduras showed admixture for all individuals when K = 2. This can be interpreted as evidence supporting Canada and Honduras as a single genetic cluster, respectively, at least with the current number of microsatellite markers analyzed. 
[T]he clothing lice (Canada and Nepal) grouped more closely with the Central America-Asia cluster probably because of close ancestry. These results are consistent with the idea that clothing lice evolved from head louse ancestors, invading the body region only recently with the advent of clothing use in modern humans. Further, studies have shown that clothing lice emerged from only one of the three mitochondrial haplogroups (Clade A) roughly 83,000 years ago. Although clothing lice belong to a single mtDNA clade, they appear to have evolved locally (in situ) throughout the world from head louse populations.
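The Principal Coordinate Analysis the authors describe can be sketched in a few lines.  The genotype matrix below is invented purely for illustration (rows are individual lice, columns are allele counts at hypothetical microsatellite loci); the point is only to show how individuals with similar genotypes land near each other once projected onto the top principal axes.

```python
import numpy as np

# Toy genotype matrix: rows are individual lice, columns are allele counts
# at hypothetical microsatellite loci (invented data for illustration).
genotypes = np.array([
    [2, 0, 1, 0],   # "North America/Europe"-like individuals
    [2, 1, 1, 0],
    [0, 2, 0, 2],   # "Asia/Central America"-like individuals
    [0, 2, 1, 2],
], dtype=float)

# Center the data, diagonalize the covariance matrix, and project each
# individual onto the two axes of greatest genetic variance.
centered = genotypes - genotypes.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]          # re-sort descending by variance
coords = centered @ eigvecs[:, order[:2]]  # 2-D coordinates per louse

# The first axis (PC1) separates the two invented groups of individuals.
print(coords)
```

STRUCTURE's Bayesian model does considerably more than this (it models allele frequencies and admixture proportions explicitly), but the PCA step is essentially this projection.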

Woit, Arkani-Hamed and Feng on SUSY Bounds

I'll sum up the conclusions of the SUSY skeptic, the leading SUSY theorist, and another SUSY defender who appear in the title of the post, and discuss the bounds on SUSY theories in light of the latest LHC data, below the jump.  SUSY isn't dead, but the dream of SUSY as a beautiful theory that could address all of the unsolved problems in physics that motivated the model in the first place, and could produce phenomena observable at the electroweak energy scale, has definitely been tarnished.  If superpartners are out there, they are much heavier than SUSY theorists had naively expected them to be.

Before I get to that, however, a little rant is in order.

If some bright young theorist were proposing these ideas for the first time, a few stumbles while exploring some pretty interesting possibilities would be noteworthy.  But, the reality is more alarming.  A huge share of the fundamental physics community has devoted almost an entire generation's worth of their efforts to formulating and testing this theory and it is turning out to be a dud.

This is a risk that every theoretical physicist takes.  Dozens of new theory papers are published every week, and only one or two of them can hope to be correct.  The rest are flights of fancy that don't come close to describing our world for one reason or another in the best of times.  When one theorist pursuing an ideal strikes out, that's O.K.

But, the big problem is that their incredible collective intellectual resources have been ill utilized, because a huge proportion of them are simply recycling and noodling the same old shopworn ideas that have already been found wanting.  The answer to the ultimate question of life, the universe and everything is almost surely somewhere out there on the left fork in the road, and 90% of the people trying to answer it are still stuck on the right fork, where they have been for the last thirty years.  When a very large share of everyone in this line of work is pursuing the same theory that turns out to be a dud, it is not O.K. 

Basically, far too many people are still asking the wrong questions and getting no new answers as a result.  Clearly, a frontal attack on basically aesthetic concerns, like the "hierarchy problem", "naturalness" and gauge unification, which suggested some version of SUSY as a place to look for solutions in the first place, is not getting us anywhere.

Knowing what we know today, and thinking like a Bayesian, we should be removing all but 10% or so of the theoretical physicists who are on that fork from the search for SUSY and telling them to start working on anything but SUSY instead.  Ideally, these newly displaced physicists would work on something that no one else has ever developed very well before now. 

But, mid-career and late-career theoretical physics researchers have incredible sunk costs that they have incurred in mastering knowledge relevant only to SUSY, so this kind of wholesale repurposing of the theoretical physics workforce is probably an impossible pipe dream.  We are institutionally incompetent to devote adequate resources to BSM alternatives other than SUSY any time in the next couple of decades.  Add the tenure factor and the future doesn't look bright for a long time to come. 

We are basically betting the future of theoretical physics research on Earth for the next few decades on the very small cadre of lone wolves or small packs of researchers doing something different at places like the Perimeter Institute in Canada (arguably the most important institution to the future of physics in the world other than CERN, bar none).

Of course, true believers like Lubos Motl have another view, and maybe he's right on this score.  Maybe experimental constraints are sufficiently stringent that any internally consistent theory that also explains the evidence to date, and that is not the Standard Model, has to look a lot like the Standard Model.  Maybe once one considers the various no go theorems and constraints that theoretically limit the BSM alternatives, almost all of theory space is highly constrained to be SUSY-like, or at the very least M-theory-like.  The one thing physicists did learn over the last few decades that they didn't know before is that the seemingly distinct classes of models that they had been exploring all turn out to be ultimately equivalent to each other for all practical purposes.  If one is careless, it is easy to end up just putting old wine in new skins, which would also be a waste of time.  SUSY may not be the only possibility, but maybe it doesn't take many theoretical physicists to explore the mere handful of other alternatives that are still viable. 

Still, anytime someone comes up with a "no go" theorem, you always have to wonder if the people who are proposing it and reviewing its proof are really right, or have simply overlooked loopholes in it that they have been insufficiently creative to conceive.  What shared assumptions does everyone involved have that they shouldn't?  Are we really rightly devoting immense theoretical and experimental research to exploring possible SUSY theories because there are no other viable options?  Or have theoretical physicists, collectively, just become lazy and unoriginal?

Part of the reason that loop quantum gravity, variations on Koide's formula, modified gravity theory research and similar lines of inquiry are interesting, and are discussed at this blog, is that the people doing this kind of research are at least asking questions that haven't been beaten to death for decades by teams of theorists as large as the credits reel of a superhero movie.  Even more importantly, after asking these new questions, they are getting new answers and making genuine progress of some kind. 

Right, wrong, or "not even wrong", at least they are thinking out of the box, which is the only way that we are going to ever make any progress in theoretical physics after a generation of chasing dead ends and stagnation.

Inferring Historical Linguistic Affiliations With Genes

How does one come to conclusions about what languages were spoken by prehistoric peoples who did not have languages that were ever attested in writing? 

What role does genetic evidence play in assessing an archaeological culture's linguistic affiliations?

Some key assumptions guide my use of genetic data to infer language shift in the absence of historical evidence. I would argue that each of the twelve assumptions set forth below the jump has a solid theoretical and empirical basis.  The genetics-related portions of each assumption are set out in bold in the body of each point.  There are sometimes close calls involved in applying these tests, but these assumptions are not speculative and do not involve a blind equation of genes with linguistic affinity. 

In any given place there are typically only half a dozen or fewer archaeological cultures that were not historically attested.  In any given continent or subcontinent-sized region, there are at most a few dozen.  The task of inferring historical linguistic affiliations is to assign a suspected linguistic affiliation to each group of one or more related archaeological cultures, based on the nature of their transitions from prior cultures, and to connect them to historically attested languages when possible.  Genetics provides valuable evidence for evaluating this small number of transitions in any given language area, but genetics has to be used properly and not just blindly associated with a language without other context to support that connection.

Taken together, the sum of these linguistic stories provides an understandable narrative that shows how the peoples of prehistoric eras provided the cultural source for modern populations and languages.  This post merely states the assumptions involved, rather than applying them, a task of a lifetime left for other posts.

Wednesday, February 27, 2013

Gravi-Weak Unification Talk Slides

The slides from the latest talk on gravi-weak unification theory can be found here, with audio here.  Previous coverage here.

In a nutshell, the argument is that the right-handed counterpart of the weak force (which operates only on left-handed particles) is gravity.  The theory naturally predicts a sterile neutrino that could account for dark matter, a single Higgs boson, and a graviton.  The asymmetric nature of the two forces is related to the cosmological constant.  The forces share a coupling constant that is inserted in different ways into the respective equations.  More conclusions based on the theory are expected to be forthcoming later this year.  Dark matter particle interactions with each other are governed by a force with a mathematical structure similar to electromagnetism, i.e. U(1).

More on Non-SM Higgs Boson Exclusions

In his most recent blog post, Jester examines some of the mounting evidence that the spin-0, electromagnetically neutral, even parity particle with a mass of about 125 GeV +/- 1 GeV is the Standard Model Higgs boson or something very near to it, rather than being a mere "Higgs-like" particle. 

125 GeV Particle Quantum Number Properties Already Established

The mere existence of diphoton decays (which were diagnostic of the Higgs boson in the first place) established that it was an electrically neutral boson, as a result of the law of conservation of electromagnetic charge (which has never been observed to be violated) and the quantum mechanical rules for the conservation of angular momentum (by the Landau-Yang theorem, a spin-1 particle cannot decay to two photons, so a particle seen decaying to two photons must have spin 0 or spin 2).

Other decays observed in the 125 GeV particle decay established that it was indeed a massive particle and established its approximate mass. 

Large Hadron Collider (LHC) data excluding the possibility that the 125 GeV particle is a spin-2 or a parity-odd particle, discussed previously at this blog, can be found here.

Both the Standard Model Higgs boson and the supersymmetric Higgs bosons lack the color charge of quantum chromodynamics, which is mediated by color charged gluons.  So far as I know, there is not yet any definitive experimental data analysis establishing that the Higgs boson lacks color charge.  But a color charged Higgs boson seems highly unlikely, as it would surely have some weird impact on the data, and it is not well motivated theoretically.  Any color charged interactions would produce wild deviations from the Standard Model Higgs boson branching ratios and decay rates in many different channels, deviations that have not been observed, unless they were highly tuned in unexpected ways or were suppressed by some unanticipated mechanism.  No one out there seems to be worrying about the possibility of color charged Higgs bosons at this point.

The W and Z couplings are consistent with the Standard Model Higgs boson so far.

Jester argues that we may fairly call this particle "the Higgs boson" (meaning the Standard Model Higgs boson or a non-Standard Model Higgs boson almost indistinguishable from it with the same role in the low energy effective theory of particle physics) if its couplings to the W and Z bosons approximates the Standard Model strength of cV=1 and may be called "a Higgs boson" if there is a coupling to the W and Z that is non-zero (which it most definitely is by many, many sigma). 

The combined data to date show that, for the 125 GeV particle that has been discovered, at "the 95% confidence level cV is within 15% of the standard model value cV=1" (with a mean of about 1.05).  The LHC data, while less precise than the LEP data so far, have a mean value slightly lower than one, about cV=0.95, while the LEP electroweak precision data, which are a bit more precise, imply a value a bit higher than one, perhaps cV=1.1.
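For intuition about how a combined value like that 1.05 arises from two individual measurements, here is a minimal sketch of an inverse-variance weighted average.  The central values (0.95 from the LHC, 1.1 from LEP) are those quoted above; the one-sigma uncertainties are invented for illustration, since the post does not quote them.

```python
import math

# Combine two measurements of the Higgs coupling cV with an inverse-variance
# weighted average. Central values come from the post; the one-sigma
# uncertainties below are assumed for illustration only.
measurements = [(0.95, 0.10),   # LHC-like value, assumed uncertainty
                (1.10, 0.08)]   # LEP-like value, assumed uncertainty

weights = [1.0 / sigma**2 for _, sigma in measurements]
combined = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = math.sqrt(1.0 / sum(weights))

print(f"cV = {combined:.3f} +/- {combined_sigma:.3f}")
```

With these assumed uncertainties the more precise LEP-like value pulls the combination above one, consistent with the pattern the post describes.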

Thus, the LHC data is consistent, within the margin of error, with the particle being the Standard Model Higgs boson in its couplings to the W and Z bosons, and it is inconsistent with the particle being some kind of even-parity, electrically neutral, massive spin-0 particle that does not couple to the massive electroweak gauge bosons at all, which arguably would be outside the definition of the Higgs boson.  Hence, the particle is fairly described as at least "a Higgs boson."

One can argue for the theoretical possibility that the Higgs boson might be "fermiophobic" (and hence have suppressed couplings to both quarks and leptons) or "leptophobic" (and hence have suppressed couplings to leptons).  But none of the data, particularly as it is about to be updated, seem to point strongly to that conclusion. 

These possibilities, and the charged SUSY Higgs boson possibilities, are discussed here.  They were motivated mostly by a possible diphoton decay excess that seems to have been nothing more than a statistical blip, as more results are produced from larger LHC data sets that include more of its run so far.  The diphoton excess is now rumored to be less than two sigma.

Heavy SM-like H' Higgs bosons excluded up to 600 GeV

The exclusion for a particle with the SM Higgs boson's properties, other than one at 125 GeV +/- about 1 GeV, extends up to 600 GeV based on LHC (Large Hadron Collider) data released in February 2012. These limits exclude, for example, a "second generation" or excited state of the SM Higgs boson in those mass ranges, call it "H' aka H prime" (by analogy to the hypothetical heavy W and Z bosons called W' and Z'), of the kind hypothesized by QCD physicists like Marco Frasca (see also his comments in a related post at Dispatches from Turtle Island).  The LEP and Tevatron data (i.e. data from two prior, lower energy collider experiments that have now concluded; LEP focused on leptons, while the Tevatron was basically a lower powered version of the LHC) exclude lighter Standard Model-like Higgs bosons.

SM4 Higgs bosons excluded up to 600 GeV.

Another non-SUSY, non-SM theory is SM4, i.e. the SM plus a fourth generation of fermions. An SM4 Higgs boson is excluded from 120 GeV to 600 GeV by LHC and at lower masses by the LEP and Tevatron.  This finding together with other LHC data, has effectively killed the SM4 and the SM5+ as viable extensions of the Standard Model.

Limits on SUSY Higgs Bosons

LHC data on the properties of the Higgs boson powerfully constrain supersymmetry theories, although there are always ways that a theory can be modified to keep it alive.  A state-of-the-art "just around the corner" theory can be found, for example, in a December 31, 2012 paper by one of the leading supersymmetry theorists in the world. Even he is aware of the impact of constant fine tuning on the plausibility of the model, stating:
[C]ompletely natural supersymmetric theories may still turn out to describe physics at the TeV scale, and there have been no shortage of models of this sort proposed recently in response to null-results for new physics from the LHC. It is however fair to say that these models are rather elaborate. Many of these theories are actually just as fine-tuned as more conventional versions of supersymmetry, but the tuning is more hidden. The more sensible theories of this sort may be "natural" with respect to variations of their Lagrangian parameters, but in an admittedly hard-to-quantify sense, their epicyclic character involves a tuning in "model space."

Easily Excluded SUSY Higgs Bosons.

All supersymmetry (SUSY) theories have at least five spin-0 Higgs bosons, four more than the Standard Model's one Higgs boson, and in general have 1+4n Higgs bosons for n a positive integer (Higgs fields come in "doublets" with four degrees of freedom each, and three degrees of freedom in total are "eaten" by the massive Standard Model electroweak gauge bosons, the W+, the W- and the Z).

Two of the SUSY Higgs bosons, commonly labeled "H+" and "H-", are electromagnetically charged and hence inconsistent with the LHC data for the 125 GeV particle.  Some less minimal SUSY models also have doubly charged Higgs bosons (i.e. charges of +2 or -2, rather than the +1 or -1 of the minimal supersymmetric model (the MSSM) and the next to minimal supersymmetric model (the NMSSM)), which are also excluded as 125 GeV particle candidates by the LHC data. 

The exclusion range for the H+ and H- SUSY bosons from early LHC data, in general, was at least 80 GeV to 140 GeV, and since most versions of supersymmetry theoretically exclude charged Higgs bosons of less than the W boson mass, this is really a 0 GeV to 140 GeV exclusion for a SUSY H+ or H-. A later LHC data analysis ruled out a SUSY Higgs boson in the 144 GeV to 207 GeV mass range in a model dependent prediction.  It also isn't obvious to me that there is a meaningful window of possibility for charged SUSY Higgs bosons with masses in the 140 GeV to 144 GeV range (even though there is apparently not a full fledged 95% confidence interval exclusion in that mass range).  A few diehards point out that the Tevatron and LEP were not sensitive enough to detect, and hence exclude, SUSY H+ and H- bosons in absolutely every conceivable corner of SUSY parameter space, but acknowledge that the non-detection of SUSY H+ and H- bosons in these experiments, together with the discovery of a neutral, spin-0, parity even Higgs boson with about 125 GeV of mass, dramatically narrows the possible range of values for key SUSY parameters like tan beta.

One SUSY Higgs boson, commonly labeled "A", has odd parity but is electromagnetically neutral, and its parity makes it inconsistent with the LHC data regarding the 125 GeV particle.

Less Easily Excluded SUSY Higgs Bosons

The two other SUSY Higgs bosons, commonly labeled "H" for the heavier one and "h" for the lighter one, are even parity and electromagnetically neutral, just like the Standard Model Higgs boson and hence consistent to that degree with the 125 GeV particle. 

The LHC data analysis has not yet definitively ruled out the possibility that there are both an H and an h with almost identical masses.  Each experiment has measured the Higgs boson mass in two different ways.  Both of the measurements at one experiment, and one of the measurements at the other, are all very close to each other.  The remaining measurement is 1-2 GeV different from the other data points, which is on the verge of being statistically notable given the estimated margins of error at the LHC experiments.

The two almost 125 GeV Higgs boson possibility is disfavored theoretically.  The Standard Model predicts only one Higgs boson.  The SUSY H and h Higgs bosons aren't expected in SUSY theories to have nearly degenerate masses, and they have other theoretical properties discussed below.

A more plausible possibility is that the observed Higgs boson mass measurement discrepancy at the LHC, which is seen in only one of two measurements at one of the two parallel experiments, is due to some sort of systematic or theoretical calculation error.  This is a better fit to our currently incomplete knowledge than a genuine bimodal mass distribution, which would be a sign that there are two different Higgs bosons with very similar masses whose data are getting mixed up with each other to make the two particles look like a single Higgs boson.

Distinguishing SUSY H and SUSY h from the SM Higgs boson via particle couplings

In the MSSM at least, and to some extent in SUSY theories generically although to differing degrees, the H and h have different couplings to other particles than the SM Higgs boson does.  The tight fit of the observed LHC data so far to the results expected if the 125 GeV particle has couplings at the SM predicted level (and the rumored even closer fit in new data expected to be released in March) is problematic for that model.   As a post at the viXra blog explains, after the conference:
we will have 40% more data for most channels and about 75% more for the diphoton channel. We know that all channels other than diphoton are perfectly in line with the standard model Higgs while the diphoton channel cross-section is a bit too large.
In point of fact, the MSSM has already been moribund for a year or two for a variety of reasons, so this model specific addition to its death of a thousand cuts isn't all that notable in and of itself. 

But, many MSSM predictions are generic to most or all more elaborate SUSY models.  While the specific expected couplings of the MSSM are unlikely to be correct even if SUSY is the correct description of the universe, the general observation that the H and h each have couplings to other particles that differ from those of the SM Higgs boson is likely to be a generic feature of all SUSY theories. 

This is because one of the key motivations for SUSY theories in the first place (although SUSY has since taken on a life of its own beyond this purpose) was to solve the hierarchy problem through more transparent sources of particle couplings in the Higgs sector.  Otherwise, the Standard Model Higgs boson must have properties which are very finely tuned for no apparent deeper reason.  Generically, SUSY theories accomplish this by making the couplings in the Higgs sector different than those in the Standard Model.  It may be possible to devise a SUSY theory where a single SUSY Higgs boson nearly matches the Standard Model Higgs boson and the couplings of all of the other SUSY Higgs bosons are suppressed, but this isn't very well motivated theoretically and, if done too crudely, destroys the "beauty" and constrained nature of SUSY models that make them attractive in the first place.

Distinguishing SUSY H and h from the SM Higgs boson via resonance width.

A particle's width is the manifestation of its decay half-life in a way that is natural on a particle resonance graph plotted from experimental data.  The width and the mean lifetime are inversely related: the narrower the resonance, the longer lived the particle.

Another way to distinguish a SUSY H or h from a SM Higgs boson, in addition to its couplings, is with the width of the particle's resonance. A SM Higgs has a much narrower width (and hence a longer half-life) than the SUSY Higgs bosons' widths in the MSSM and in most SUSY theories.  This is a measurement that should be possible to make soon, possibly from further analysis of the pre-shutdown LHC data.  A determination of the estimated rest mass of the Higgs boson and of its width go hand in hand.
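The inverse relation between width and lifetime can be made concrete with a back-of-the-envelope calculation.  The roughly 4 MeV figure used below is the commonly quoted theoretical prediction for the width of a 125 GeV SM Higgs; treat it as an approximate input for illustration.

```python
# A resonance's mean lifetime follows from its width via tau = hbar / Gamma,
# so a narrower width means a *longer*-lived particle.
HBAR_GEV_S = 6.582e-25   # reduced Planck constant, in GeV * seconds

def lifetime_seconds(width_gev: float) -> float:
    """Mean lifetime implied by a resonance width given in GeV."""
    return HBAR_GEV_S / width_gev

# SM prediction for a 125 GeV Higgs: width of roughly 4 MeV = 4.1e-3 GeV
tau = lifetime_seconds(4.1e-3)
print(f"{tau:.2e} s")   # on the order of 1.6e-22 seconds
```

A broader MSSM-style Higgs fed into the same formula yields a proportionally shorter lifetime, which is exactly the handle the width measurement provides.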

Model specific properties of SUSY Higgs bosons.

Notably, in the MSSM, and to a less clear extent in other SUSY models, the light Higgs is always lighter than the Z boson, and the charged Higgs is always heavier than the W boson.

Thus, in the MSSM, the 125 GeV particle that has been detected must be an "H" and can't be an "h", which must have a mass of less than about 90 GeV.  The MSSM also demands an "A" boson with a mass on the same order of magnitude as the "h" if the "H" is under 135 GeV or so.

So, the non-detection of an "h", "A", "H+" or "H-" over a wide range of masses at the LHC, LEP and Tevatron is a serious blow, unless the interactions of the other four SUSY Higgs bosons are so slight that they would evade detection, something that requires fine tuning of a SUSY parameter called tan beta and of a number of other SUSY parameters. 

Contrariwise, if there is a neutral, parity even SUSY Higgs boson of 150 GeV or more, so that the 125 GeV particle is an "h", then the "A", "H+", and "H-" must all have masses on the same order of magnitude as the SUSY "H", which could be much, much higher than anything that could be detected at the LHC.

The bottom line is that it takes a lot of fiddling with SUSY model parameters in non-minimal versions of SUSY to make it fit the Higgs boson data from the LHC, even before considering other experimental constraints on these models from the non-detection of supersymmetric particles at the LHC or prior colliders, from the weak evidence for SUSY in recent astronomy observations and direct dark matter detection experiments, and from the experimental boundaries on phenomena like proton decay rates and neutrinoless double beta decay.

At some point Occam's razor must favor the far simpler Standard Model that fits all of the experimental evidence to date over the SUSY theories with far more experimentally determined parameters, far more moving parts, vast numbers of particles for which there is absolutely no experimental evidence, an ever shrinking parameter space that is far outside the original expectations of the theory and does not achieve some of the purposes that originally motivated the theory, and so on.  This really matters because SUSY is a part of every Supergravity theory, and all viable versions of M theory aka string theory, have supergravity and SUSY, rather than the Standard Model, as a low energy effective approximation.

Bottom Line: Leading BSM Theories Almost Dead Leaving Room For New Theories

Most of the viable beyond the Standard Model theories that had currency just two or three years ago, including the MSSM, are dead.  The Standard Model has been completely verified by experiment.  We are past the beginning of the end of supersymmetry and string theory, although they aren't officially dead yet.  We are close to the middle of the end.  It is hard for anyone coming to the field fresh in 2013 to see SUSY or string theory as anything more than mathematical toy models that are unlikely to be accurate explanations of how nature really works.

The theoretical landscape of particle physics is almost empty, even though almost no one thinks that the Standard Model is a complete or final theory of particle physics at all scales and energy levels.  There is a vacuum in the world of fundamental physics right now that is ripe to be filled by someone taking a new approach not burdened by the wasted legacy of decades of dead end string theory and supersymmetry research in the theoretical physics community.  That effort increasingly looks like the worst wrong turn since epicycles displaced heliocentric astronomy.

A Brief Historical Footnote

The emerging consensus is that the particle that has been discovered at the LHC with this mass is the one that Higgs and others, building on preliminary electroweak unification theories from 1960, predicted in 1964. The Higgs boson had been more or less fully described theoretically, except for its mass, by 1972, four decades before its discovery.

The core of the entire Standard Model that the Higgs boson is a part of was in place in 1974, when quantum chromodynamics in substantially its current form had finally been described, although at that point the theory had just two generations of fermions and zero mass neutrinos, and it had to be amended later in the 1970s to include a third generation of fermions.  The only significant modification of the Standard Model since 1975, when it became clear that it had to have three generations of particles, other than refinement of its experimentally measured constants, was that the 1975 version of the theory assumed that neutrinos were massless, which was discovered to be inaccurate in 1998.

Neutrinos were predicted in 1930 and were first observed experimentally in 1956. The muon neutrino was proposed in the 1940s and was first observed in 1962. Only by 1983, when all of its particles except the top quark, the tau neutrino and the Higgs boson had been observed, did the Standard Model really achieve full fledged scientific consensus, and the existence of the Higgs boson component of the theory remained somewhat controversial until early 2011.

The W and Z bosons were observed experimentally in 1983, each of the third generation fermions was observed experimentally between 1975 and 2000 (tau 1975, bottom quark 1977, top quark 1995, tau neutrino 2000), and the first hints of the Higgs boson were observed in late 2011, with the discovery becoming definitive in 2012.  Confirmation that the observed particle really is the Higgs boson continues into 2013, although the ranks of the skeptics are getting thinner.

Two competing classes of Standard Model modifications, one called "Dirac mass" and the other called "Majorana mass", have been proposed to explain this, and there is not yet sufficient scientific evidence to distinguish the two possibilities. The Majorana mass explanation, usually incorporating a see-saw mechanism with three undiscovered heavy companion neutrinos for the known flavors, two additional CP violating phases in the PMNS matrix, and an additional kind of intraneutrino interaction, has more support in the theoretical physics community, because it fits well into grand unification theories and uses supersymmetry-like reasoning. But, the Dirac mass proposal is the more conservative of the two proposals, as it requires the minimum number of experimentally measured Standard Model constants and no new particles or interactions, at the expense of not explaining why neutrino masses are so small (just as the Standard Model doesn't purport to explain the values of its other constants). Both approaches have roots in ideas about the nature of fundamental quantum particles proposed in the 1930s by Majorana and Dirac respectively. The Dirac mass mechanism was adopted for the other fermions in the Standard Model when it was originally formulated.
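
As background for the see-saw mechanism mentioned above, here is a minimal sketch of the standard textbook type-I version (not spelled out in the post itself): a Dirac mass term mixes with a much heavier Majorana mass, and diagonalizing the resulting mass matrix yields one very light and one very heavy neutrino.

```latex
% Type-I see-saw mass matrix for one neutrino generation: a Dirac mass
% m_D (roughly electroweak scale) and a heavy Majorana mass M_R \gg m_D.
M = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix},
\qquad
\text{eigenvalues} \;\approx\; -\frac{m_D^2}{M_R}, \quad M_R .
% The light eigenvalue m_\nu \simeq m_D^2 / M_R is suppressed by M_R:
% the heavier the undiscovered companion neutrino, the lighter the
% observed one -- hence "see-saw".
```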

The State of Future Research

It increasingly seems likely that the rest of the LHC run, which will continue for at least another five to ten years, will confirm the Standard Model, refine the measurement of its experimentally measured constants, and rule out one kind of beyond the Standard Model physics after another.

In general, for the foreseeable future, all that the LHC can do on its own is establish that all but one of the SUSY Higgs particles are either rather heavy or interact very weakly with ordinary matter; it can't rule out their existence entirely. It may very well be possible, however, in five or ten years, to greatly limit the parameter space of all SUSY theories, to make predictions that flow from the subset of SUSY theories with experimentally allowed parameters, and then to use different kinds of experiments to falsify that subclass of SUSY theories.

One particularly promising approach along these lines is to set limits on the minimum masses of all particles added to the Standard Model by experimentally allowed SUSY theories and on tightly bound parameters like tan beta, then to make generic predictions about the rates at which these models predict phenomena like neutrinoless double beta decay (which happens at a rate that generally increases in SUSY models as the masses of the superpartners increase), and then to experimentally determine that neutrinoless double beta decay does not occur at such a high rate.

There is fierce debate going on at this time regarding what kind of high energy physics experiments, if any, should be conducted when the capabilities of the LHC have largely been exhausted.  Diehard SUSY believers want bigger colliders to find superpartners at masses that are just around the corner.  SUSY skeptics wonder if scarce fundamental physics research funds aren't better spent elsewhere.

A variety of neutrino physics experiments in progress are likely to resolve the Majorana vs. Dirac mass issue for neutrinos within five to fifteen years.  There is still one Standard Model constant that has not been measured at all: the CP violation parameter for neutrino oscillations in the PMNS matrix.  There have been no direct measurements of the absolute values of the neutrino masses, although the relative neutrino masses are known and increasingly strict upper bounds on the absolute masses have been established.  The neutrino related constants should all have reasonably meaningful experimentally measured values within the next five to fifteen years as well.

There are also many experimentally measured Standard Model constants that are not known particularly accurately and will gradually be refined at the LHC and in other experiments over the years; this will be a never ending project unless a deeper theory allowing them to be calculated from first principles is devised.  Likewise, the process of making more and more precise calculations with the Standard Model equations, particularly in QCD, is ongoing.

The process of ruling out all alternatives to the Standard Model is never ending, but I believe that the experimental data will have advanced enough in the next five to fifteen years to develop a consensus in the particle physics community that supersymmetry and string theory are not correct descriptions of the universe.

The Standard Model was fully formulated (except for neutrino mass), and had achieved consensus status, by the time that I was in junior high school.  The project of expanding it to include neutrino mass, and of experimentally validating and measuring all elements of the Standard Model, will probably be complete during my lifetime, by the time that my children are old enough to be graduate students.

I also suspect that major progress will be made in formulating and empirically validating dark matter and quantum gravity theories in that time frame, although I don't hold high hopes for grand unification or a theory of everything at that point.  Much of this work will be based on astronomy research and direct dark matter detection experiments (which overlap heavily with neutrino research).  I personally suspect that no real progress will be made in developing a consensus theory that describes why the Standard Model constants have the values that they have, until all of them have been experimentally determined with a fair amount of precision already.

But, in my lifetime, we will have an essentially complete, if somewhat ugly, rulebook for all of physics and a fortiori, all of the laws of nature.

A Food Prehistory Blog

Dorian Q. Fuller is the pre-eminent pre-historian of Neolithic (and later) crop domestication and diffusion, aka an archaeobotanist, in the world today.  And, he has a blogish thing reviewing the latest scholarly work in the field, as well as a more ordinary blog on the same subject, both of which are well worth reading.

His research frequently provides relatively definitive answers, which other aspects of material culture, physical anthropology and genetics cannot, to questions regarding the origins and interrelationships of agricultural cultures, traced via the origins of their crops.

Tuesday, February 19, 2013

Why Did Asians Become Asian?

Razib, at Gene Expression, has an interesting post on EDAR, a gene that shows strong signs of selective pressure in East Asia prior to the migration of the founding population of the Americas, based on a paper published in Cell, "Modeling Recent Human Evolution in Mice by Expression of a Selected EDAR Variant."

Razib notes that the authors of the Cell paper fail to make a convincing argument for what selective pressure was involved.  Neither their climate adaptation theory, nor their theory of sexual selection for small breasts and lustrous black hair, withstands serious scrutiny.  I advance another hypothesis, which I believe is a more convincing explanation for this selective sweep and for another striking East Asian racial type defining mutation (low levels of body hair), although it is a hypothesis to be tested rather than a proven theory.

The Selective Sweep At The EDAR Locus Was A Protective Response To Bubonic Plague.

My hypothesis is that the EDAR gene and low levels of body hair were protective against flea-borne bubonic plague, an extremely lethal disease with its origins in China.  This disease became a powerful selective pressure in East Asia when the domestication of dogs provided a vector for it in human communities.

These communities also became more vulnerable to the plague because their transition to behavioral modernity in the Upper Paleolithic revolution led to the formation of semi-sedentary communities that fished and hunted other coastal animals.  This, together with the advantage conferred by domesticating dogs, allowed them to live in villages with population densities sufficient to sustain plague outbreaks.

The communities of semi-sedentary fishing and foraging villages with domesticated dogs where the bubonic plague outbreaks began were probably similar in diet and social organization to those of the early Jomon people of Japan, the Comb Ceramic and Pitted Ware people of Northeast Europe, the Paleo-Eskimos of Greenland, the Inuit and the Native Americans of the Pacific Northwest. 

The mass deaths in Inuit communities from exposure to diseases carried by Basque whalers who were their first contact with Europeans supports the hypothesis that these communities had sufficient population density and intercommunity trade links to sustain a major outbreak of bubonic plague without causing it to fizzle out by killing people before the disease could be spread.

Bubonic plague acts on the lymph system.  The derived EDAR allele might have been protective because lymph nodes are warehouses for the immune system, so a higher frequency of lymph nodes would strengthen immune response, and because smaller breasts would have reduced the volume of vulnerable plague-targeted lymph tissue there, making women with smaller breasts more likely to survive a plague infection.

The hair type phenotype associated with the derived EDAR allele, and also one or more other alleles coding for low levels of body hair that are racial type defining East Asian alleles, may have reduced the vulnerability of people with these phenotypes to infestation with the plague's flea vector.

Other hard selective sweeps on alleles that confer resistance to diseases are documented in recent human evolution (e.g. malaria), and the hard sweep on the gene for lactase persistence can to some extent also be seen as enhancing immune capacity via better nutrition.  So the hypothesis that EDAR conferred selective fitness by conferring disease resistance is a plausible one.

But, since the timing of the EDAR selective sweep was clearly too early for it to be driven by the Neolithic revolution, and since very low population density terrestrial hunter-gatherer populations, who moved very slowly before horses were domesticated, are ill suited to harboring infectious disease outbreaks, which tend to quickly run their course and fizzle out, the sweep must have arisen in more tightly tied, higher population density communities.

Upper Paleolithic fishing villages whose prosperity was also enhanced by the use of domesticated dogs would have had nearly Neolithic population densities.  Their use of domesticated dogs would simultaneously provide a bubonic plague flea vector in communities where the rat vector that spread the bubonic plague flea in the Middle Ages was not available (since there were no large human grain stores for the rats to plunder), and explain why this happened 30,000 years ago, around the time that the dog was domesticated, rather than at some earlier or later point in prehistory.

The hypothesis that EDAR conferred resistance in particular to bubonic plague, rather than to some other disease, is driven by the particular importance to bubonic plague, relative to other lethal infectious diseases, of the lymph system that the derived EDAR allele appears to enhance.

Why Don't Finns and Russians Have The Derived EDAR Allele?

How do I explain the absence of the derived EDAR allele in Northeast European populations who show clear genetic signs of East Eurasian genetic contributions, probably through linguistically Uralic populations distinguished by the Y-DNA haplogroup N, which is a sister clade to the dominant Asian Y-DNA haplogroup O?

The Last Glacial Maximum sterilized Northern Asia, killing off the plague and killing or exiling all of its modern humans.  North Asia was repopulated from one or more genetically East Eurasian refugia, probably including Tibet and the Altai Mountains, that had become isolated from coastal China before the selective sweep took place ca. 30,000 years ago in response to a series of bubonic plague outbreaks.

These refugia included the place of origin in North Asia of Y-DNA haplogroup N, whose population did not undergo the selective sweep for the derived EDAR allele that the coastal Chinese villages from which this mutation expanded did, perhaps because these nomadic terrestrial hunter-gatherers lacked the relatively high population densities of the semi-sedentary fishing villages in which the mutation arose, or perhaps because the colder climate there was protective.

Even if the derived EDAR allele was present at low frequencies in the people who repopulated North Asia, genetic drift and founder effects could have purged it from the populations who made East Eurasian genetic contributions to Northeastern Europeans, because the derived EDAR allele was not actively conferring selective advantage either in the tundras of Siberia or in Europe, where bubonic plague did not arrive until the 14th century, from a Chinese source via Siberia's southern fringe.

Under this hypothesis, the story of Y-DNA haplogroup N securing its current distribution in a counterclockwise expansion from Southeast Asia is wrong.  Instead, haplogroup N introgressed into the Han Chinese population before or in the early phases of the East Asian Neolithic revolution and expanded with its rice and millet farmers, all of the way to South Asia, where Munda rice farmers carried its most far flung extension.

Later North Asian populations acquired the now ubiquitous EDAR allele from their expanding Han Chinese neighbors to the south, who, in the Neolithic era that followed the East Eurasian genetic introgression into Northeast Europe, brought higher density settlements that were subject to EDAR allele selective pressures associated with the periodic plague outbreaks that followed Han Chinese trade routes into North Asia.

Caveats.

The biggest gap in this theory is that I have no idea whether the derived EDAR allele (or the derived allele for low levels of body hair found in East Asians) would indeed have been protective against death from bubonic plague.  But, this is something that should be relatively easy to examine once the question is well posed.  If it isn't protective, we need to find out what else the gene did that conferred selective advantage, because neither the climate conditions theory nor the small breast fashion fad theory makes sense as an explanation for such a powerful selective sweep.

DNA phylogeny evidence already in print should be able to establish whether the hypothesis regarding the Northeast European infusion of East Eurasian DNA is consistent with the evidence.


Monday, February 18, 2013

"No Go" Theorems In String Theory

Lubos Motl, in the context of a larger string theory discussion, recalls some useful "no go" theorems of String Theory in fairly accessible language.

1.  There are no fields with spins of greater than 2 (the conventional expectation for gravity) that have masses of less than the string scale (i.e. huge), basically ruling out their existence in any low energy effective theory that could be tested experimentally.

2.  Spin 2 fields (of masses of less than the string scale) must be "modes of the gravitational field" and the corresponding particles have to be "gravitons of a sort".  String theory grounds alone don't force them to behave exactly like Einstein's general relativity, but they have to be quite similar to it.  Some string theory variants permit multiple kinds of gravitons.

3.  Spin 3/2 fields (of masses of less than the string scale) "have to be accompanied by the local supersymmetry – they have to be gravitino fields."  So, if there are spin-3/2 fields in a given string theory, they have to act like the spin-3/2 fields of SUGRA (supergravity) theories. 

He doesn't address at length, but implicitly assumes familiarity with, the principle that fundamental field spins have to come in discrete half-integer increments, meaning that all other fields of string theory have to be spin-1 (associated with photons, gluons, W bosons and Z bosons in the Standard Model), spin-1/2 (associated with Standard Model fermions) or spin-0 (associated with the Standard Model Higgs boson).

Thus, any newly discovered particles predicted by String Theory have to have some fundamental properties similar in many ways:
(1) to Standard Model fermions or bosons, or
(2) to the hypothetical spin-2 graviton, or
(3) to the hypothetical spin-3/2 supersymmetric gravitino.  The superpartners of Standard Model fermions in Supersymmetry theories (i.e. squarks and sleptons) are spin-0 bosons.

While there are no known fundamental particles with spin-3/2, baryons (three-quark composite particles) with spin-3/2, such as the Δ baryons, have been observed, so we can be fairly confident that we know how particles of this spin would behave if they existed.

The particle content of supersymmetry theories

The superpartners of both Standard Model spin-0 bosons (Higgsinos) and Standard Model spin-1 bosons (Winos, Binos and Gluinos) in Supersymmetry theories are spin-1/2 fermions.  Linear combinations of these particles produce "Charginos" and "Neutralinos".  Supersymmetric theories also include more than one Higgs boson (the simplest have five: two neutral parity-even Higgs bosons, one neutral parity-odd Higgs boson, and two charged Higgs bosons that are each other's antiparticles).  Non-minimal supersymmetric theories generally have more kinds of Higgs bosons, in addition to other complexities.

In supersymmetric theories, the electromagnetic charge, weak force interactions, and color charge interactions of these particles are highly constrained, but the masses of supersymmetric particles and some of their other properties are experimentally fitted, rather than theoretically predicted.

Existing high energy physics experiments imply that supersymmetric Higgs bosons must either be quite heavy, or must have interactions so subtle that previous searches for them in lighter mass ranges (typically below the mass of the Higgs boson already discovered) would have missed them because something in the theory suppresses their interactions or creation. They also imply that other superpartners must be quite heavy. And, experimentally valid Supersymmetry theories with many very heavy superpartners also need a mechanism to suppress naively expected high rates of neutrinoless double beta decay, which are not observed.

Discoveries that would falsify the Standard Model, SUSY and String Theory

String theories whose low energy effective field limits don't look like either the Standard Model or Supersymmetry (with Supergravity) are excluded.

The discovery of any spin-0 fundamental particle other than the Standard Model Higgs boson falsifies the Standard Model.  The discovery of any new spin-1/2 fermion falsifies the Standard Model.  The discovery of any new fundamental spin-1 boson falsifies the Standard Model.  The discovery of any spin-3/2 or spin-5/2 or higher particle falsifies the Standard Model.  Discovery of a spin-2 graviton, however, would not falsify the Standard Model, as this would be outside of the scope of the phenomena that it describes.
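
These falsification rules can be summarized in a toy lookup.  This is my own paraphrase of the rules above, not anything from a physics library, and `sm_verdict` is a hypothetical helper name chosen for illustration.

```python
# Toy classifier paraphrasing the post's falsification rules: what would
# the discovery of a NEW fundamental particle of a given spin (in units
# of hbar, so 0.5 means spin-1/2) imply for the Standard Model (SM)?

def sm_verdict(spin: float) -> str:
    """Verdict on discovering a new fundamental particle of the given spin."""
    if spin < 0 or spin % 0.5 != 0:
        # Fundamental spins come in discrete half-integer steps.
        return "not a physical spin (spins come in half-integer steps)"
    if spin == 2.0:
        # A spin-2 graviton lies outside the Standard Model's scope.
        return "consistent with the Standard Model (gravity is out of scope)"
    # Spins 0, 1/2 and 1 are fully accounted for by the existing SM roster
    # (Higgs, fermions, gauge bosons), so any NEW particle there falsifies
    # the SM; spin-3/2 and spin-5/2+ particles have no SM slot at all.
    return "falsifies the Standard Model"

print(sm_verdict(0.5))  # a new fundamental fermion
print(sm_verdict(2.0))  # a graviton
```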

In Supersymmetric theories, searches for fundamental particles with properties different from these are theoretically ill motivated, and the discovery of any such particle would falsify supersymmetry, supergravity and string theory.

Experimental constraints on Supergravity and String Theory from gravitino properties

Astronomy observations strongly constrain many variants on the properties of a gravitino (and hence, string theory parameter space). 

If the gravitino is too heavy and nearly stable, it gives rise to too much dark matter and doesn't solve the "hierarchy problem" that supersymmetry was devised to address.  If it isn't stable enough, it produces a universe with no stars in it.  In split supersymmetry models, a heavier gravitino is allowed, but we would need to be observing other superpartners at the LHC very soon.  Limitations associated with the properties of the gravitino are one of the main reasons that R-parity conserving theories of supersymmetry are now strongly disfavored experimentally.  In other words, if supersymmetry exists at all, it is experimentally necessary that almost all supersymmetric particles, like all second and third generation fermions in the Standard Model, are inherently unstable particles that, with the exception of one or two dark matter candidates, do not exist in nature in the modern universe.

In the most plausible possibility for gravitino properties consistent with experimental evidence, the gravitino is the lightest supersymmetric particle (LSP) and the source of most dark matter: "R-parity is slightly violated and the gravitino is the lightest supersymmetric particle. This causes almost all supersymmetric particles in the early Universe to decay into Standard Model particles via R-parity violating interactions well before the synthesis of primordial nuclei; a small fraction however decay into gravitinos, whose half-life is orders of magnitude greater than the age of the Universe due to the suppression of the decay rate by the Planck scale and the small R-parity violating couplings."

Increasingly tight experimental constraints on dark matter properties, however, greatly narrow the available parameter space for gravitino dark matter.  The gravitino is a much smaller target to search for than it would have been not so many years ago.

Yet, boundaries on the properties of the gravitino, since it is common to all supergravity theories by definition, impose limits on all extended supersymmetry theories whatever their particle physics content, and impose limits on all string theories.  So, these generic constraints on theory-space from the experimental limits on the properties of any particle with a gravitino's intrinsic spin are quite powerful.

If the gravitino parameter space is overconstrained, meaning that its existence with all of the theory dependent properties that make it such a small target is experimentally ruled out, then so are all supergravity theories.

LHC Shut Down For Next Two Years

The Large Hadron Collider (LHC) which discovered the Higgs boson was shut down on Saturday (February 16, 2013) for maintenance and upgrading that is anticipated to take the next two years or so. 

What will LHC scientists be doing for the next two years?

The next two years of LHC physics will involve analyzing data that has already been collected in its first run, rather than collecting new experimental data.

Genes, Mothers And Lovers

The reality that mothers never completely leave behind ties to the fathers of their children is not just a social reality. It is a biological one.
Microchimerism is the persistent presence of a few genetically distinct cells in an organism. This was first noticed in humans many years ago when cells containing the male “Y” chromosome were found circulating in the blood of women after pregnancy. Since these cells are genetically male, they could not have been the women’s own, but most likely came from their babies during gestation.

In this new study, scientists observed that microchimeric cells are not only found circulating in the blood, they are also embedded in the brain. They examined the brains of deceased women for the presence of cells containing the male “Y” chromosome. They found such cells in more than 60 percent of the brains and in multiple brain regions. . .

Tuesday, February 12, 2013

Balkan Neolithic Has Large Demic Component

Strontium isotopes document greater human mobility at the start of the Balkan Neolithic
Dušan Borić and T. Douglas Price
Questions about how farming and the Neolithic way of life spread across Europe have been hotly debated topics in archaeology for decades. For a very long time, two models have dominated the discussion: migrations of farming groups from southwestern Asia versus diffusion of domesticates and new ideas through the existing networks of local forager populations. New strontium isotope data from the Danube Gorges in the north-central Balkans, an area characterized by a rich burial record spanning the Mesolithic–Neolithic transition, show a significant increase in nonlocal individuals from ∼6200 calibrated B.C., with several waves of migrants into this region. These results are further enhanced by dietary evidence based on carbon and nitrogen isotopes and an increasingly high chronological resolution obtained on a large sample of directly dated individuals. This dataset provides robust evidence for a brief period of coexistence between indigenous groups and early farmers before farming communities absorbed the foragers completely in the first half of the sixth millennium B.C.

From here.

The strontium isotope analysis of human skeletal remains provides independent confirmation of the genetic evidence (including ancient DNA evidence) tending to show that farming came to Europe with mass migrations of farming populations, rather than primarily through technology transfers to existing hunter-gatherer populations.

Within seven hundred years of their arrival (the press release suggests more like a couple of hundred years), the indigenous hunter-gatherer populations of the region had been exiled, had died off, or had been assimilated into the farming communities.

Neither the press release, nor the abstract set forth above, quantifies the extent to which the indigenous hunter-gatherer populations of the Balkans were assimilated into farming communities, as opposed to being exiled or dying off (a variety of scenarios from genocide to declining game habitats that reduced carrying capacity are possible within that category).

We can infer from lines of evidence not included in this study that women were probably more likely to be assimilated into the farming communities than men, leading to greater conservation of mtDNA lineages than Y-DNA lineages across the Mesolithic-Neolithic transition.  This study shows a gender differential that is somewhat different, although the phrasing could be clearer:
An interesting finding of the study is that 8,000 years ago, when Neolithic farmers were beginning to migrate into the Danube Gorges and overlap with Mesolithic hunter-gatherers, more women than men were identified as foreigners. A possible explanation for the variance, according to the study, is that women came to these sites from Neolithic farming communities as part of an ongoing social exchange.
As I read this, the researchers seem to interpret their findings as showing that young male farmers born on the Neolithic frontier (and hence having "local" strontium isotopes) procured wives from more settled Neolithic lands who appeared to be "foreigners" in their strontium isotopes, rather than seeing this as evidence of indigenous men transitioning to farming and then taking Neolithic culture women as wives.

Friday, February 8, 2013

More Evidence Against A BSM Higgs Boson

A new paper discussed here continues the ongoing business of establishing that the particle discovered in 2012 which seems a lot like the Standard Model Higgs boson is indeed the Standard Model Higgs boson and not merely something Higgs-like.
One may extend the Standard Model by adding the fourth generation of quarks and leptons. The behavior of the Higgs boson changes a little bit. Well, it changes substantially enough so that this Higgs boson in a larger model may be distinguished from the Higgs boson according to the Standard Model.
One may also modify the Standard Model in another way: make the Higgs boson fermiophobic. . .  a fermiophobic Higgs boson is one that doesn't interact with the fermions (leptons and quarks) at the tree level (the fermion masses have to be produced more indirectly). The interactions with the W-bosons and Z-bosons are still essential for the consistency of the theory.
The recent paper has simply excluded both the four-generation interpretation of the newly found Higgs boson as well as the fermiophobic interpretation of the newly found Higgs boson. The confidence that these models – that are very specific, essentially as predictive as the Standard Model itself – have been falsified is very strong.
These results extend some previous 2012 results that have excluded the possibility that the 126 GeV Higgs boson is a pseudoscalar rather than a scalar; and that its spin is greater than zero such as J=2.  Some deviations in the behavior of the Higgs boson from the Standard Model may be found in the near or distant future – at most something like 10% deviations in the coupling constants.
The exclusion in the fourth generation case is really not just an exclusion of a different kind of Higgs boson, it is really a generalized exclusion of the Standard Model with four generations entirely.  And, while the paper apparently does not rigorously examine the Standard Model with more than four generations, it is a fair assumption that those models would also contradict the Higgs boson data we have, just as the four generational model does.  Thus, this is one more piece of very powerful evidence that the Standard Model has precisely three generations of fermions, no more and no less.

A fermiophobic Higgs boson was never something many people really expected.

Matt Strassler has recently discussed other ways that the detected particle could be discovered to be something other than a Standard Model Higgs boson.

Strassler also discusses what kinds of supersymmetry (aka SUSY) theories are and are not excluded by the increasingly precise data on the behavior of the Higgs boson.  Minimal supersymmetry is pretty much dead, as are many other versions of these theories.  But, there are enough moving parts in more complex SUSY theories that they can't be entirely excluded yet.

Etruscan Origins In A Prehistoric European Context

A new mtDNA study including ancient DNA disfavors the hypothesis of Herodotus that the Etruscans were Bronze Age migrants to Italy from Western Anatolia, and instead supports the theory that they were the genetic and cultural descendants of the first farmers of Southern Europe, i.e. the Cardium Pottery culture. 

They survived for centuries before eventually succumbing to Roman might because they adopted a number of Indo-European innovations.  This allowed them to persist while other communities of their ancestral culture were conquered and culturally and linguistically extinguished much earlier by later waves of peoples.

The Historical Context Of The Etruscans And Rhaetic Peoples

No one doubts the unanimous Roman historical account that the Etruscans were present in Tuscany before the Romans arrived in Italy in the early Iron Age (according to tradition, Rome was founded in the 8th century BCE). 

The Romans quickly and forcefully assimilated other Italic, but non-Roman, people, such as the neighboring Sabines who had preceded them in the area.  The Indo-European Italic peoples who started to arrive in Italy sometime after Bronze Age collapse ca. 1200 BCE probably assimilated other non-Indo-European populations of Italy as well, although this is not well documented historically.  The Etruscans, however, resisted Roman assimilation until around the 1st century CE, after more than half a millennium in which they maintained a linguistically non-Indo-European society culturally distinct from that of the Romans.

Pliny the Elder in his Natural History (77 CE) provides a key piece of evidence regarding their origins.  He wrote that:
adjoining these (the [Alpine] Noricans) are the Raeti and Vindelici. All are divided into a number of states. The Raeti are believed to be people of Tuscan race driven out by the Gauls; their leader was named Raetus. 
The Etruscans, of course, were predominantly associated with the Tuscan region in Pliny's time, so the "Tuscan race" would have referred to the Etruscans, in contrast to the Italic peoples like the Romans who had migrated to Italy more recently.

Pliny's link of the Alpine Raeti to the Etruscans is confirmed linguistically by Helmut Rix (ca. 1998), and also archaeologically: Villanovan material culture migrated from the Alpine area to Tuscany around the time of Bronze Age collapse. (N.B. modern Swiss Rhaetic, which is derived from Latin, and the Iron Age Swiss Rhaetic languages are completely different languages that happen to share the same geographically derived name.)

Both the Rhaetic retreat to the mountains and the secondary Etruscan migration to Tuscany were probably driven by the "push" of early proto-Italic and Celtic populations around the time of the Bronze Age collapse.  They were a pilot wave, arriving ahead of the Indo-European populations that were expanding into Italy behind them. 

Indeed, one of the likely reasons that the Etruscans survived as a distinct culture longer than any of the other non-Indo-European cultures of Southern Europe (except the Basque) is that they culturally adopted many of the innovations of the Indo-European Urnfield culture at their heels, and thus could compete with it.  If historical accounts and written examples of the Etruscan language had not survived, archaeologists would probably have assumed, based upon cultural signs like the practice of cremating the dead that the Etruscans shared with contemporaneous Indo-Europeans, that they were just a somewhat distinctive and now extinct variety of Indo-Europeans.

The theory of Herodotus that the Etruscans had Bronze Age origins in Western Anatolia was rejected by his contemporary Greek historian Dionysius of Halicarnassus on a variety of solid linguistic and religious culture grounds even at the time it was offered. A Bronze Age migration from Western Anatolia also suffers from the fact that Western Anatolia would have been a linguistically Indo-European area (not consistent with the non-Indo-European Etruscan language) for most of the Bronze Age.  Genetic evidence, including the latest mtDNA evidence discussed below, also disfavors the hypothesis offered by Herodotus.

The DNA Evidence

The new mtDNA study confirms once again that the Etruscans were surely not derived substantially from the Upper Paleolithic indigenous hunter-gatherer populations of Europe, whose mtDNA was dominated by mtDNA haplogroups U4 and U5.

The new mtDNA study dates any common origins of the Etruscans and Western Anatolian populations to no later than the early Neolithic era, too old to fit a Bronze Age migration.

The ancient DNA evidence also suggests that the Etruscans were intrusive to Italy, probably in the archaeologically supported early Iron Age period when Etruscan culture appears in Tuscany.  As an earlier ancient DNA study of Etruscans indicates:
Genetic distances and sequence comparisons show closer evolutionary relationships with the eastern Mediterranean shores for the Etruscans than for modern Italian populations. All mitochondrial lineages observed among the Etruscans appear typically European or West Asian, but only a few haplotypes were found to have an exact match in a modern mitochondrial database, raising new questions about the Etruscans’ fate after their assimilation into the Roman state. 
If the Etruscans were truly autochthonous in Tuscany and in Italy more generally, one would expect Etruscan mtDNA haplogroups to be present at low levels throughout Italy.  But, this is not the case.

Areas of historical Etruscan occupation also have a relatively high concentration of Y-DNA haplogroup G, which is characteristic of first wave Neolithic populations.

The Etruscans Were A Relict Cardium Pottery Migration Wave Population

The Etruscans are thus derived from people who arrived as part of a folk migration of the first wave Southern European Neolithic Cardium Pottery culture, whose territory included Southern France, Sardinia and all of the territory attributed to the hypothetical Tyrsenian language family to which the Etruscan and Rhaetic languages belong.  By the time that their society was documented by Roman historians, they had already become a relict population of that culture. 

The Lemnian language, spoken on the Aegean island of Lemnos until the 6th century BCE, which was not within the range of the Cardium Pottery culture, is also (convincingly) proposed to be part of that language family.  One plausible hypothesis is that it may represent an eastern colony of the Tyrsenian, aka Cardium Pottery Neolithic descended, culture. It might even have been founded in response to the same migration pushes that caused the Etruscans to migrate from the Alps to Tuscany. Archaeological evidence suggests that Tyrsenian language family speakers may have arrived around the 9th century BCE.  The island is also associated with the center of the cult of Hephaestus, the god of metallurgy, whose secret mystery rituals may have been conducted in a non-Greek language.  It could be that this cult arrived with the Tyrsenian colonists and that their metalworking trade is what secured their acceptance in this community.

The Cardium Pottery culture, in turn, was derived from Fertile Crescent Neolithic cultures in what is now Syria and Southern Central Anatolia (rather than Western Anatolia as Herodotus had supposed), although both the donor and receiving regions have seen massive demographic upheaval in the intervening 7500 years (a time frame consistent with the mtDNA analysis in the new study).

The Cardium Pottery culture was distinct from but parallel to the Linear Pottery Neolithic peoples (aka LBK), a first wave Neolithic people who expanded demically into a territory including the Danube river basin. The ancient DNA of LBK Neolithic peoples shows strong genetic similarities to the Cardium Pottery peoples in terms of haplogroup distributions, particularly on the Y-DNA side. But, the LBK people appear to have had geographically distinct origins from the roughly contemporaneous first wave Neolithic Cardium Pottery peoples. The earliest LBK origins were in Southern Hungary and Ukraine, perhaps in turn with roots in the Vinča and Karanovo cultures (in turn derived from the Starčevo culture of Southeastern Europe).

Both the LBK and Cardium Pottery cultures, which were first wave farming cultures in much of Europe, were derived, in general, from the greater Fertile Crescent Neolithic cultures in Southwest Asia and Anatolia that had emerged starting around 8000 BCE and prior to 6200 BCE.

The Etruscans Relationship To Other Distinctive Modern European Populations

Of course, all modern populations contain some contributions from later folk migrations, and many Cardium Pottery populations would have integrated, to some extent, the earlier indigenous Paleolithic Europeans present in the places to which they migrated.

There is no place in Europe where zero or near-zero gene flow between populations is very plausible as a hypothesis, other than possibly among the Basques, where Rh blood types created a natural genetic barrier to admixture.  But, some populations do show fairly strong traces of the early eras of European prehistory in their genes or culture, relative to the majority of Europeans.  The main examples are discussed below.

The Sardinians

In addition to the now-extinct Etruscan and Rhaetic populations (the Rhaetic language died out in the 3rd century CE), another population with significant Cardium Ware ancestry that wasn't modified much later on is probably the Sardinians, who show great genetic continuity even today with early Neolithic ancient DNA.

The Galicians of Northwest Spain

The Galician people of Northwest Spain also show signs of being particularly genetically ancient, with disproportionate shares of DNA haplogroups which pre-date the Bell Beaker folk migration (e.g. relatively low levels of Y-DNA haplogroup R1b and some of the lowest levels of lactase persistence in the region).  The absence of lactase persistence (LP) characteristic of the Galician people is common to both Upper Paleolithic and first wave Neolithic ancient DNA. The Cardium Pottery culture never reached Northwest Spain (in Iberia it was pretty much limited to the eastern coastal areas), and it also isn't clear how fully the population ancestral to the Galicians was assimilated into the Bell Beaker culture.  My guess would be that their antecedents are really pre-Bell Beaker megalithic peoples who adopted farming later on from cultures derived from the Cardium Pottery culture, with greater degrees of incorporation of pre-Neolithic populations than in many other places where farming arrived via folk migrations.  Their assimilation to Indo-European Celtic culture was also on the late side within Europe, although they were eventually thoroughly assimilated into Celtic culture and are now linguistically Indo-European.

The Basque Peoples

In contrast, the Basques have a cultural and genetic distinctiveness that probably has roots in a second wave of folk migration by Bell Beaker peoples in the Copper and early Bronze Ages (tracing just where to the east the Bell Beaker migrants had their origins is a puzzle not yet completed). The Bell Beaker peoples expanded into France from what are now non-Basque parts of Iberia (where the Bell Beaker culture first appears in Europe). In France, the Artenacian culture evolved from and was in continuity with the Bell Beaker culture, and held off Indo-European penetration into Western Europe for about a thousand years. The Basque and Aquitani cultures in turn derived from a southern migration of members of the Artenacian culture.

The Basques, seen as having their cultural origins and ethnogenesis in a Copper Age folk migration to Iberia, and from there elsewhere in Western and Northern Europe, persisted when other pre-Copper Age first wave Neolithic and Paleolithic cultures of Europe (many of which they wiped out) did not, because their technology was on a par with that of the new Indo-European wave, rather than being clearly inferior. In my view, Bell Beaker derived cultures and migrations, fairly described as Vasconic in character, account for the predominance of Y-DNA haplogroup R1b in Europe, and the distribution of R1b in Europe today roughly corresponds with the distribution of the Bell Beaker culture at its peak.  The substrate languages of most of Western and Northern Europe, when the transition to Indo-European languages took place in those regions, would have almost entirely belonged to a Vasconic language family.

The Uralic Peoples Such As Finnish Speaking Finns

The linguistically Uralic peoples of Europe such as the Finnish speaking Finns (other than the Hungarians, who acquired their language via language shift from a demically thin elite in the historic era), meanwhile, probably trace their cultural origins to the late Upper Paleolithic indigenous hunter-gatherer populations of Northern Eurasia associated with the archaeological Pitted Ware culture.  They were able to retain their language mostly because their far Northern environment was the least favorable in Europe to the food production methods that the Indo-Europeans and prior waves of Neolithic expansion relied upon for economic dominance.

Wednesday, February 6, 2013

Evidence For Only Neff = 3 Weakened

Peter Woit, at Not Even Wrong reports:
Resonaances has an excellent posting about the latest WMAP9 CMB measurements, and the value Neff for the number of implied light degrees of freedom. When the WMAP numbers were released late last year, they quoted
Neff=3.89+/-.67, 3.26+/-.35, 2.83+/-.38
for the results of fits to their data and others (see section 4.3.2). Jester described this as “like finding a lump of coal under the Christmas tree”: the value Neff=3 implies no new light degrees of freedom beyond the known 3 light neutrinos. A rumor soon appeared on his blog that this result was in error and would be corrected.
The corrected version is now out, with new results
Neff=3.89+/-.67, 3.84+/-.40, 3.55+/-.49
and a note about the correction: “slight correction to Neff for case with BAO.” which seems reasonable if you regard the difference between finding no unknown degrees of freedom and discovering a new unknown one as “slight”.
 
His sardonic take on the understated "slight" statement in the correction is on target.  The uncorrected results were discussed at this blog a couple of weeks ago.

The uncorrected data was more consistent with a scenario in which there are only the three light neutrinos, i.e. with no beyond the Standard Model particles, than it was with any beyond the Standard Model physics, particularly given that we already know from other evidence that the answer must come in discrete integer steps and that the true value of Neff can't be lower than three.

The corrected WMAP data is more consistent with a fourth light particle in the formative stage of the universe than it is with only the three light neutrinos, e.g., an as yet undiscovered single species of keV mass scale sterile neutrino that could explain dark matter.  In other words, it suggests beyond the Standard Model physics of great cosmological importance.
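Using the Neff fits quoted above, the size of the shift is easy to quantify as a distance in standard deviations from each discrete hypothesis (a back-of-the-envelope comparison that ignores correlations between the fits):

```python
# WMAP9 Neff fits (central value, 1-sigma error) as quoted in the post.
fits_before = [(3.89, 0.67), (3.26, 0.35), (2.83, 0.38)]  # uncorrected
fits_after  = [(3.89, 0.67), (3.84, 0.40), (3.55, 0.49)]  # corrected

def pull(value, err, hypothesis):
    """Distance of a discrete hypothesis from a measurement, in sigma."""
    return abs(value - hypothesis) / err

for label, fits in [("uncorrected", fits_before), ("corrected", fits_after)]:
    for v, e in fits:
        print(f"{label}: {v} +/- {e}  "
              f"Neff=3: {pull(v, e, 3):.2f} sigma, "
              f"Neff=4: {pull(v, e, 4):.2f} sigma")
```

The tightest uncorrected fit (2.83 +/- 0.38) sat within half a sigma of Neff = 3 and about three sigma from Neff = 4; after the correction, the fits sit closer to Neff = 4 than to Neff = 3, which is why the "slight" correction flips the qualitative conclusion.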

Just a "slight" difference indeed.

Gambler's House

I call your attention to the latest addition to the blogroll, Gambler's House, which focuses mostly on the prehistory of the Native American civilizations of North America, including a strong focus on those of the Mississippian culture and the American Southwest.

Monday, February 4, 2013

Diphoton Excess Gone?

If a rumor about Higgs boson data is correct, the odds that the Higgs boson observed is truly just the Standard Model Higgs boson increase greatly. We will know for sure later this year.

Rumors say diphoton excess in early Higgs Boson search data was a statistical fluke

The rumor in the physics community is that the excess in the number of diphoton decays over the Standard Model expectation (4.3 sigma in the first round of data) in the experiments that discovered the Higgs boson at the Large Hadron Collider (LHC) has disappeared or greatly subsided as the size of the data set grew. The diphoton excess was present in the originally released data from the CMS half of the experiment. But, CMS did not release its second round of diphoton data when its sister experiment ATLAS did, and when it had originally planned to late last year.

[R]emember that CMS have not yet published the diphoton update with 13/fb at 8 TeV. Rumours revealed that this was because the excess had diminished. At the Edinburgh Higgs symposium some more details about the situation were given. The talks are not online but Matt Strassler who was there has told us that the results have now been deemed correct. It may be understandable that when the results are not quite what they hope for they will scrutinize them more carefully, but I find it wrong that they do not then publish the results once checked. It was clear that they intended to publish these plots at HCP2012 in November and would have done so if they showed a bigger excess. By not releasing them now they are introducing a bias in what is publicly known and theorists are left to draw conclusions based on the ATLAS results only which still show an over-excess in the diphoton channel.
I have previously discussed why an excess over the Standard Model prediction in the key diphoton discovery channel was not just possible but more likely than not in the first round of data after the Higgs boson was discovered, given the bias involved in releasing data immediately after that channel met a certain statistical threshold.  While not every experimental data point so far in the Higgs boson search is exactly in line with the Standard Model prediction, only the diphoton decay rate has been very significantly above the expected level.

Thus, it appears that the strongest piece of experimental data in the early Higgs boson data pointing to beyond the Standard Model physics was just a statistical fluke.
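The release bias described above can be shown with a toy Monte Carlo (the numbers are illustrative, not the actual LHC analysis): if a channel's result is published early only when it shows an excess above some threshold, the published sample overshoots the truth even when the Standard Model is exactly right.

```python
import random

random.seed(42)

TRUE_MU = 1.0    # true signal strength: exactly the Standard Model
SIGMA = 0.5      # per-measurement statistical error (illustrative)
THRESHOLD = 1.5  # release early only if the measured mu exceeds this

released = []
all_results = []
for _ in range(100_000):
    mu_hat = random.gauss(TRUE_MU, SIGMA)  # one simulated measurement
    all_results.append(mu_hat)
    if mu_hat > THRESHOLD:                 # early release on an excess
        released.append(mu_hat)

mean_all = sum(all_results) / len(all_results)
mean_released = sum(released) / len(released)
print(f"mean of all measurements:      {mean_all:.2f}")       # ~1.0
print(f"mean of released measurements: {mean_released:.2f}")  # well above 1.0
```

The full sample averages to the true value, while the early-released subset is biased high; as more data accumulates, the measurement regresses toward the true rate, which is exactly the "diminishing excess" pattern in the rumor.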

New results from both experiments on the diphoton decay rates will be released later this year with 20/fb of data, however, so the only real harm caused by not releasing the CMS data is the time wasted by theoretical physicists, and by lay people who like to think and blog about physics, speculating about beyond the Standard Model physics that insiders already knew the data didn't support. (The LHC will be down for upgrades for about a year, so when this is released it will be the last update of experimental data in the LHC Higgs boson search for about a year and a half.)

The Higgs boson mass discrepancy is also probably experimental error

As noted previously in another post at this blog, it also seems likely that the discrepancy among the Higgs boson mass measurements is an experimental error. Two versions of the measurement were conducted at each experiment. Three of the four results produced a value very close to 126 GeV. One version of the experiment produced a value about 1 GeV lower.

There are a variety of circumstantial reasons to think that this outlier is wrong, rather than representing new physics such as the existence of two neutral Higgs bosons with very similar masses.  For example, if there were two Higgs bosons of similar masses, one would expect the two kinds of measurements at each of the experiments to pair up, forming two pairs of mass values, and this isn't what happened.
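A sketch of the standard inverse-variance combination, using hypothetical stand-in numbers rather than the published ATLAS/CMS values (three measurements near 126 GeV, one about 1 GeV lower), shows how the low measurement registers as the lone outlier against the combined mass:

```python
# Hypothetical stand-in measurements, NOT the published values:
# (mass in GeV, 1-sigma error)
measurements = [
    (126.0, 0.4),
    (125.9, 0.5),
    (126.1, 0.5),
    (124.9, 0.5),  # the low outlier
]

# Inverse-variance weighting: more precise measurements count for more.
weights = [1 / err ** 2 for _, err in measurements]
combined = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
combined_err = (1 / sum(weights)) ** 0.5

print(f"combined mass: {combined:.2f} +/- {combined_err:.2f} GeV")
for m, err in measurements:
    # Pull: how many of its own sigma a measurement sits from the combination.
    print(f"  {m} GeV: pull {(m - combined) / err:+.1f} sigma")
```

Three measurements sit within a fraction of a sigma of the combination while the low one carries the largest pull, which is the pattern that makes an experimental error the most economical explanation.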

The proton size discrepancy in muonic hydrogen is probably experimental or calculation error

Some combination of experimental error and underestimated theoretical calculation uncertainty (which arises because the infinite series equal to the exact result are truncated to make numerical approximations, with imperfectly estimated error amounts), or subtle omissions of important terms in the theoretical calculation, is overwhelmingly the likely source of the observed discrepancy between the proton radius measured in ordinary hydrogen and the proton radius measured in muonic hydrogen (the muonic hydrogen measurement, which is about 4% different, is probably close to the correct value, which should be the same in both cases).
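For concreteness, plugging in the commonly cited radius values (the CODATA-2010 electronic hydrogen average and the muonic hydrogen result) reproduces the roughly 4%, roughly seven combined sigma, size of the puzzle:

```python
# Commonly cited proton charge radius values, in femtometers.
r_e,  err_e  = 0.8775, 0.0051    # from ordinary (electronic) hydrogen, CODATA-2010
r_mu, err_mu = 0.84087, 0.00039  # from muonic hydrogen spectroscopy

percent = 100 * (r_e - r_mu) / r_e                    # fractional difference
sigma = (r_e - r_mu) / (err_e**2 + err_mu**2) ** 0.5  # combined significance

print(f"difference: {percent:.1f}%  ({sigma:.1f} combined sigma)")
```

Note that almost all of the combined uncertainty comes from the electronic measurement; the muonic value is roughly ten times more precise, which is part of why the muonic number is plausibly the one closer to the truth.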

A similar issue with experimental measurement or theoretical calculation probably accounts for the discrepancy between the measured and calculated values of the muon's magnetic moment.

Other ongoing constraints on BSM physics

The observed Higgs boson mass cures potential Standard Model equation pathologies

The Higgs boson mass that is observed means that the equations of the Standard Model continue to be unitary (i.e. predict sets of possibilities whose probabilities sum to exactly 100%, rather than more or less than 100%) at all energy scales up to the Planck scale. It is also consistent with a vacuum that is at least "metastable" (i.e. stable everywhere for time periods approximating the current age of the universe or more, even if not absolutely perfectly stable). Neither of these conditions would have held for some Higgs boson masses other than 126 GeV that hadn't been ruled out until the discovery was made last year, which would have made beyond the Standard Model physics a necessity.

BSM physics generically found in alternatives to the Standard Model are tightly constrained by experiment

Developing a beyond the Standard Model theory that produces a particle with exactly the Standard Model Higgs boson spin, diphoton, four lepton and other measurable decays at the measured Higgs boson mass profoundly constrains the experimentally discernible phenomenological consequences of any such model. Tevatron data, reinterpreted in light of the knowledge that there is probably a 126 GeV Higgs boson, is also consistent with the Standard Model prediction for b quark decays of Higgs bosons, although the Tevatron data was never strong enough by itself to predict that a 126 GeV Higgs boson existed on that basis alone.

Supersymmetry theories, generically, require at least two charged and three neutral Higgs bosons. Experiments are increasingly setting strict boundaries on the possible masses of the two or more charged SUSY Higgs bosons and the two or more neutral Higgs bosons beyond the Standard Model expectation. Experiments are also setting increasingly stringent boundaries on the minimum masses of supersymmetric superpartners and on the key SUSY parameter tan beta. There are enough moving parts in supersymmetry theories that the LHC can't absolutely rule out all possible supersymmetry theories.

But, it takes increasingly fine tuned, unnatural and exotic versions of these theories to fit the data: (1) a Higgs boson with exactly the properties of a 126 GeV Standard Model Higgs boson, and (2) the absence of any other beyond the Standard Model particle of 1 TeV or less in mass. One can imagine exotic particles that would escape detection at LEP, Tevatron and the LHC given their particular experimental limitations known after the fact, but any particle of 1 TeV or less not detected by any of these experiments by the time that the LHC experiment is concluded has to be exotic indeed.

Since string theories generically have a low energy approximation in the form of supersymmetry theories, the theory space of experimentally possible string vacua is also highly constrained to quite exotic versions of those theories.

The Standard Model provides that phenomena such as proton decay, flavor changing neutral currents, neutrinoless double beta decay, baryon number non-conservation, lepton number non-conservation, and "generations" of fermions beyond the Standard Model's three generations simply do not exist. But, many beyond the Standard Model theories predict that such phenomena exist but are rare or otherwise hard to observe. Ever tightening experimental limits on these phenomena, none of which have been discovered, and some of which (like four or more generations of fermions) are even definitively ruled out by experiment, increasingly disfavor competing models.

As previously noted at this blog, these constraints have even more power when combined. It isn't too hard, for example, to adjust supersymmetry theories so that their particles are too heavy to be directly detected at the LHC. But, supersymmetry theories with more massive superpartners generically also imply higher rates of neutrinoless double beta decay, which experiments independent of the LHC can increasingly rule out.

Tightening the constraints on the minimum half-life of the proton and on the maximum rate of neutrinoless double beta decay by factors of about one hundred, which should be possible within the next decade or so, would rule out a whole swath of significantly investigated beyond the Standard Model theories, to an extent similar to the LHC's exclusion of beyond the Standard Model particles up to about 1 TeV in mass.

Bottom line: the Standard Model is unstoppable.

The bottom line is that the "worst case scenario" for theoretical physicists, in which the LHC experimentally completes the task of discovering the last of the Standard Model particles, the Higgs boson, while discovering no beyond the Standard Model physics, is looking increasingly probable.

If beyond the Standard Model physics exists, it appears to exist in the realm of gravitational physics, neutrino physics, and greatly super-TeV energy scale physics that are beyond the LHC's ability to detect, and not at the non-gravitational energy scale of roughly 1 TeV or less that the LHC can discern.

This greatly undermines the incentive for people who fund "big science" to spend lots of money in the near future on expensive new particle accelerators that can explore energies beyond those of the LHC, because it reduces the reason to suspect that there is anything interesting to discover at those energy scales.

This isn't just a problem for particle physicists either.

Issues For Theorists Related To Gravity and Dark Matter

Astrophysicists have long ago ruled out any of the fundamental particles of the Standard Model or any of its known stable composite particles as dark matter candidates. The LHC is rigorously ruling out another whole swath of dark matter candidates, and the numerous direct dark matter detection experiments are increasingly eliminating similar candidates.

The LHC is also tightly constraining theories relevant to gravitational physics that call for extra dimensions, by limiting the scale and properties of those extra dimensions. This matters because extra dimensions play a central role in allowing gravity to be as weak as it is in a "theory of everything" that describes the three Standard Model forces and gravity as manifestations of a single unified force whose symmetries are broken at low energies. "Large" extra dimensions have been ruled out by experimental data from the LHC.

These tight and precise constraints on particle physics candidates for dark matter greatly constrain the properties of any dark matter candidates that do exist, since particles of suitable mass that interact via the weak force are pretty much completely ruled out (and strong and electromagnetic force interactions for these particles had already been largely ruled out). And, they also make modifications of gravity more attractive relative to exotic dark matter, even though no known plausible modification of gravity can yet describe the phenomena observed in galactic clusters; some significant amount of at least non-exotic dim matter, found at high levels in galactic clusters but not in ordinary galaxies, must be present to explain the data so far.

Difficulties In Explaining The Matter-Antimatter Asymmetry

The data also complicate cosmology models by tightly constraining the potential sources of the observed matter-antimatter asymmetry in particle physics at quite high energies. The overwhelmingly most popular modification of the Standard Model's insistence on conservation of baryon number (B) and lepton number (L), offered to address the matter-antimatter asymmetry that we observe in the universe, is to assume in a beyond the Standard Model theory that, rather than conserving B and L independently, the universe under the right circumstances merely conserves the quantity B-L. But, so far, experiments tightly constrain any violation of independent B and L conservation, and this constraint could conceivably reach the point where the experimentally permitted B and L non-conservation in lieu of B-L conservation is insufficient to account for the observed matter-antimatter asymmetry.
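The bookkeeping behind B-L conservation is simple enough to check mechanically. This sketch (with baryon and lepton numbers assigned by hand) shows that ordinary beta decay conserves B and L separately, while proton decay, the textbook B-L-conserving process, violates both individually yet leaves B-L unchanged:

```python
# (baryon number, lepton number) per particle; antiparticles carry
# the opposite signs, assigned explicitly below.
QN = {
    "p": (1, 0), "n": (1, 0), "pi0": (0, 0),
    "e-": (0, 1), "e+": (0, -1), "nubar_e": (0, -1),
}

def totals(particles):
    """Total B, L, and B-L carried by a list of particles."""
    b = sum(QN[p][0] for p in particles)
    l = sum(QN[p][1] for p in particles)
    return b, l, b - l

def deltas(initial, final):
    """Change in (B, L, B-L) from an initial state to a final state."""
    bi, li, bli = totals(initial)
    bf, lf, blf = totals(final)
    return bf - bi, lf - li, blf - bli

# Ordinary beta decay, n -> p + e- + anti-nu: conserves B, L, and B-L.
print(deltas(["n"], ["p", "e-", "nubar_e"]))  # (0, 0, 0)

# Proton decay, p -> e+ + pi0: Delta B = -1, Delta L = -1, Delta(B-L) = 0,
# so it is allowed in theories that conserve only B-L.
print(deltas(["p"], ["e+", "pi0"]))           # (-1, -1, 0)
```

This is why proton decay searches bear directly on B-L-conserving extensions of the Standard Model: such theories can permit the second process while the Standard Model forbids it.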

The one missing piece of the Standard Model that could yet address this issue is a determination of the CP violation phase of the PMNS matrix, which governs differences between the behavior of matter and antimatter for leptons in the same way that a similar CP violation phase of the CKM matrix governs this for quarks. But, if the CP-violating phase for leptons isn't enough to explain the matter-antimatter asymmetry in the universe (and many theorists are already guessing that it may be nearly maximal), and the observation that B conservation and L conservation are maintained in all experimentally observable contexts up to those that the LHC can see continues to hold, then cosmologists may need radical new ideas to explain this observed feature of the universe without the help of beyond the Standard Model particle physics.