Wednesday, May 23, 2018

Accurately Depicting The Internal Structure Of The Proton


Most textbook and popular press illustrations of the internal structure of the proton get it wrong in light of the available evidence. Sabine Hossenfelder at Backreaction sets us straight. The middle illustration is the preferred one.

Replicating The 1894 Direct Measurement Of The Speed Of Light

The first decent direct measurement of the speed of light was made in 1894. 

A new paper provides step-by-step instructions for replicating that experiment with resources available to most university physics programs, making a widely accessible educational and public relations project out of something that might otherwise seem much more difficult to pull off than it actually is.
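
For a sense of the arithmetic involved (the linked paper's actual setup and numbers aren't reproduced here), here is a back-of-the-envelope calculation for a toothed-wheel measurement of the Fizeau type, using Fizeau's classic parameters purely for illustration:

```python
# Back-of-the-envelope check for a Fizeau-style toothed-wheel measurement.
# The linked paper's actual apparatus and parameters aren't reproduced here;
# the numbers below are Fizeau's classic 1849 values, used purely to
# illustrate the arithmetic: light is extinguished when the round trip
# takes exactly as long as the wheel needs to rotate by half a tooth spacing.

d = 8_633        # one-way distance to the far mirror, in meters (illustrative)
n_teeth = 720    # number of teeth on the wheel (illustrative)
f = 12.6         # rotation rate at first extinction, in revolutions per second

# Round-trip time 2d/c equals the time to rotate by 1/(2*n_teeth) of a turn,
# i.e. 2d/c = 1/(2*n_teeth*f), so c = 4*n_teeth*f*d.
c_estimate = 4 * n_teeth * f * d
print(f"estimated speed of light: {c_estimate:.3e} m/s")  # ~3.1e8 m/s
```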

More Dark Matter And Modified Gravity Limitations

* Direct dark matter detection experiments are starting to impose meaningful limitations on sub-GeV mass dark matter particles that interact via the weak force but not the other two Standard Model forces. 

Constraints are meaningful down to about 10 MeV. This is heavier than an up quark, a down quark, an electron, or any kind of neutrino. But, for example, these results pretty convincingly disfavor weakly interacting dark matter particles with masses comparable to that of a muon (105.658 MeV), a strange quark (92-104 MeV) (which in any case is never observed outside of a hadron), or the lightest hadron, the neutral pion (134.977 MeV); for particles in that mass range, interaction cross-sections of more than 0.01 pb (10^-38 cm^2) are ruled out.

This progress was made possible by looking for interactions between electrons and dark matter, rather than atomic nuclei and dark matter.
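
As a quick sanity check on the units in the 0.01 pb (10^-38 cm^2) threshold quoted above (my own conversion, not from the paper):

```python
# Sanity check on the cross-section units quoted above:
# 1 barn = 1e-24 cm^2, so 1 picobarn (pb) = 1e-36 cm^2.
BARN_IN_CM2 = 1e-24
PB_IN_CM2 = BARN_IN_CM2 * 1e-12    # pico = 1e-12

excluded_above_pb = 0.01           # the quoted exclusion threshold, in pb
excluded_above_cm2 = excluded_above_pb * PB_IN_CM2
print(f"{excluded_above_pb} pb = {excluded_above_cm2:.0e} cm^2")  # 1e-38 cm^2
```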

In a related development, the Standard Model prediction for the weak charge of the proton (which is relevant to the cross-section of interaction between protons and particles that interact via the weak force, such as hypothetical WIMPs) and the experimentally measured value confirm each other and are much lower than one would naively expect. This confirmation also functions as yet another of many meaningful global constraints on beyond the Standard Model physics near the electroweak scale. The measurement "can constrain new weakly coupled particles with masses up to a few TeV, or even 50 TeV particles if they are strongly coupled to matter (g*~4π)." But, the strongest constraints on deviations from the Standard Model prediction for the weak charge of the proton actually come mostly from earlier experiments rather than from the most recent one.
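
To see how the "few TeV" and "50 TeV" figures relate, note that for a contact interaction the mass reach scales linearly with the coupling. The sketch below is my own back-of-the-envelope illustration of that scaling; the "weak-sized" coupling and the few-TeV baseline are assumptions rather than numbers taken from the paper.

```python
import math

# Rough illustration (my own back-of-the-envelope, not from the paper) of why a
# "few TeV" reach for weakly coupled particles becomes ~50 TeV for strongly
# coupled ones: for a contact operator g^2/Lambda^2, a fixed experimental bound
# on g^2/Lambda^2 means the mass reach Lambda scales linearly with g.

g_weak = 0.65            # assumed "weak-sized" coupling (order of the SU(2) coupling)
g_strong = 4 * math.pi   # the strongly coupled benchmark quoted above (g* ~ 4*pi)

reach_weak_tev = 2.5     # assumed "few TeV" baseline reach for g ~ g_weak
reach_strong_tev = reach_weak_tev * (g_strong / g_weak)
print(f"strongly coupled reach ~ {reach_strong_tev:.0f} TeV")  # ~48 TeV
```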


These new direct dark matter detection results don't impact "warm dark matter" scenarios with typical dark matter particle masses in the keV range, or completely sterile dark matter candidates with no non-gravitational interactions. But, they are yet another nail in the coffin of the WIMP paradigm preferred in supersymmetry-based dark matter scenarios.

* New direct dark matter search results from the Xenon1T experiment will be announced on Monday, May 28, 2018. In all likelihood, the results will exclude additional dark matter parameter space, because the rumor mill would be buzzing by now if the results involved an actual claimed direct detection of dark matter.

* A credible argument has been made for slightly relaxing one of the empirical constraints on primordial black hole dark matter by opening up more free parameters in the models considered, but only modestly.

* New empirical constraints have been placed on self-interacting sterile neutrinos.

* A new way to empirically test dark matter proposals by looking at the character of "stellar streams" around galaxies has been proposed. The theoretically predicted properties of stellar streams under various dark matter hypotheses have been determined, so astronomers can now compare what is observed to these predictions. The necessary observations will be possible with "next generation telescopes", so this is something today's graduate students and currently practicing astrophysicists will see in the medium-term future. In general, heavier dark matter particles imply more clumping, more collisions, more gaps in the stream, and a concentration of stars closer to the center of the stream than lighter dark matter particles do.

So far as I know, comparable predictions have not been made for theories like MOND in these systems, which are well within its domain of applicability but are very much complicated by external field effects (which, when present, tend to cause a system to behave more like a Newtonian gravity model with no dark matter).

* A recent review article compiles limitations on the parameter space of simplified dark matter models from a variety of different kinds of evidence, including both astronomy evidence and particle collider based limitations, and it also conveniently and clearly explains the leading simplified dark matter models. If you click through to read the body text of any of the links in this post, do so for this study: it has a number of informative parameter space exclusion charts for a wide array of simplified dark matter models - almost all of which have large shaded excluded regions - that aren't suitable for inclusion in this post because each one requires considerable explanation.

* Massive gravity is also pretty much dead due to new experimental constraints, after having been revived by a loophole that overcame earlier problems identified with it.

Massive gravity is interesting because it makes careful analysis of graviton self-interactions absolutely necessary in a way that is often glossed over in massless graviton theories, which is useful because while gravitons lack mass, they do not lack mass-energy. But, the predictions of massive gravity theories are wrong even in the limit as graviton mass approaches zero. 

* The gravitational wave event GW170817 constrains relativistic generalizations of MOND, for example, ruling out the first and most elegant of them (TeVeS), but it does not rule out all relativistic generalizations of MOND.

Post-Script: QCD Formulated In GR Terms

While not exactly on point, one interesting recent paper that doesn't naturally fit in any neat box expresses quantum chromodynamics (a.k.a. QCD), the Standard Model description of the strong force, in an equation form closely analogous to the equations of general relativity. Usually, quantum gravity researchers go in the opposite direction and try to find formulas for general relativity analogous to those of the other Standard Model forces.

Tuesday, May 22, 2018

Anomalous Resonances As Hadron Molecules

A new pre-print quickly summarizes the hypothesis that most scalar mesons, most axial vector mesons, and a variety of other anomalous resonances that are not easily explained with two or three valence quarks are all basically "molecules" made of pairs of mesons and/or baryons. For example:
There are many states that can be described from hadron-hadron interaction. Some well-known examples are the scalar mesons obtained from pseudoscalar-pseudoscalar interaction in S-wave and coupled channels: the a0(980) from KK̄ and πη in isospin 1, the f0(980) from KK̄ and ππ in isospin 0, and the f0(500) (σ meson) from ππ scattering in isospin 0. In the strange sector, from vector-pseudoscalar interaction one can describe the f1(1285) as a K*K̄ + c.c. molecule. In the charm-strange sector there is the D*s0(2317), which can be described as a DK bound state. Similarly, one of the most famous examples in charm sector is the X(3872) which can be explained as a DD̄* + c.c. molecule. These are just a few cases from meson-meson interaction. On the other hand, in meson-baryon interaction the best example would be the Λ(1405), which is widely accepted [1] as a quasi-bound state between the K̄N and πΣ thresholds, generated mostly from the K̄N scattering.
The pre-print doesn't acknowledge that this is not a consensus interpretation, however, or explain what supports this interpretation relative to the alternatives. 
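
To see why the molecular reading is so tempting in at least one of these cases, consider the X(3872): its mass sits almost exactly at the D0 D̄*0 threshold, so very little binding energy is needed. The quick check below uses approximate PDG masses (my numbers, not the pre-print's):

```python
# Why the "hadron molecule" reading is tempting for the X(3872): its mass sits
# almost exactly at the D0 D*0-bar threshold. Masses below are approximate PDG
# values in MeV (rounded; not taken from the pre-print itself).

m_D0 = 1864.84        # D0 meson
m_Dstar0 = 2006.85    # D*(2007)0 meson
m_X3872 = 3871.65     # X(3872), a.k.a. chi_c1(3872)

threshold = m_D0 + m_Dstar0
binding_energy = threshold - m_X3872
print(f"D0 D*0-bar threshold: {threshold:.2f} MeV")
print(f"X(3872) binding energy: {binding_energy:.2f} MeV")  # well under 1 MeV
```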

In other hadron physics news, the BFKL equation, named after the initials of the authors who proposed it in the 1970s to explain how the strong force interactions of particles change at high energies in a subtle but important way, has now largely been confirmed and refined.
One striking feature of particles which are strongly interacting (like the proton) is that if two of them are approaching each other, the chance of them actually colliding increases as the energy of the particles increases. This behaviour was well-known experimentally, and was modelled in a precursor to the Standard Model called “Regge theory”. Amongst other things, the BFKL approach offered, for the first time, a chance of understanding this behaviour from first principles using the Standard Model. . . .
The scattering probability for electrons and protons is generally expressed in terms of mathematical objects called structure functions, and the BFKL predictions said that one particular structure function should rise very rapidly as the fraction of the proton’s momentum involved in the collision got smaller. 
We measured that structure function, and it did rise. But there were problems to sort out before declaring BFKL vindicated. The structure function did not rise as quickly as might have been expected by BFKL. It was also possible to explain the rise using different calculations – not featuring their sums. Most importantly, none of these calculations, by BFKL or others, was very precise, and nor were the data. We were in a grey area. 
Over the years, many more data have come in, and better calculations have been made, by a generation of theorists and experimentalists wrestling with some formidable challenges. The qualitative impact of the BFKL sums is not now expected to be as dramatic as the initial calculations indicated, but it is still there, and still important.
A global analysis published on the arXiv this year by physicists from Amsterdam, Edinburgh, Genoa, Oxford and Rome pulls lots of this work together and makes the qualitative statements about the BFKL sums quantitative. Including these sums (in their newer and more precise form) gives a significantly better description of the data than is the case if they are omitted. 
What this means is that we have pushed our understanding of the strong force into a new, previously unobserved region, and verified a qualitatively new emergent behaviour. 
The formidable mathematics behind these calculations connects a deceptively simple underlying theory with a ubiquitous and counter-intuitive observational fact: scattering probabilities rise at high energies. This has implications for our understanding of many things, from the collisions at the Large Hadron Collider at CERN to the propagation and detection of high energy particles in cataclysmic cosmological events. It may even be important in understanding possible new strongly-interacting theories that may still be waiting to be discovered beyond the Standard Model.
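
For a rough quantitative sense of what "rises very rapidly" means here, the small-x behavior of the proton structure function is conventionally parametrized as a power law. The form below is the standard textbook parametrization and leading-order BFKL estimate, not the specific fit used in the analysis discussed above:

```latex
% Conventional small-x parametrization of the proton structure function F_2
% (a standard textbook form, not the specific fit used in the analysis above):
F_2(x, Q^2) \propto x^{-\lambda(Q^2)} \qquad (x \to 0)
% Leading-order BFKL estimate of the exponent, for \alpha_s \approx 0.2:
\lambda_{\text{BFKL}} \approx \frac{12 \ln 2}{\pi}\, \alpha_s \approx 0.5
% The rise actually measured at HERA corresponds to \lambda \sim 0.2\text{--}0.3,
% i.e. it "did not rise as quickly as might have been expected by BFKL."
```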

How Big Was The Founding Population Of The Americas?


A current best estimate, based upon genetic data, places the size of the founding population of the Americas in the range of 229 to 300 with a best fit of about 284 people.

The source scientific journal article for the material in the link is as follows:
In spite of many genetic studies that contributed for a deep knowledge about the peopling of the Americas, no consensus has emerged about important parameters such as the effective size of the Native Americans founder population. Previous estimates based on genomic datasets may have been biased by the use of admixed individuals from Latino populations, while other recent studies using samples from Native American individuals relied on approximated analytical approaches. 
In this study we use resequencing data for nine independent regions in a set of Native American and Siberian individuals and a full-likelihood approach based on isolation-with-migration scenarios accounting for recent flow between Asian and Native American populations. Our results suggest that, in agreement with previous studies, the effective size of the Native American population was small, most likely in the order of a few hundred individuals, with point estimates close to 250 individuals, even though credible intervals include a number as large as ~4,000 individuals. 
Recognizing the size of the genetic bottleneck during the peopling of the Americas is important for determining the extent of genetic markers needed to characterize Native American populations in genome-wide studies and to evaluate the adaptive potential of genetic variants in this population.
Nelson J.R. Fagundes, et al., "How strong was the bottleneck associated to the peopling of the Americas? New insights from multilocus sequence data", 41(1) Genetics and Molecular Biology (2018).

This number is the "effective population size" of the founding population of the Americas, which is generally significantly smaller than the actual adult census size of the same population and requires additional adjustment for adults of non-reproductive age. Depending upon the circumstances, effective population size can range from somewhat more than half of the total census size of the population to less than 1% of it.
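
As a rough illustration of what that implies for head counts (the Ne/N ratios below are generic values from the population genetics literature, not anything estimated in the paper):

```python
# Rough translation of effective population size (Ne) into total census size (N),
# using generic Ne/N ratios from the population genetics literature rather than
# anything estimated in the paper itself. With the best fit of Ne ~ 284 noted above:

ne_best_fit = 284

# Assumed Ne/N ratios spanning the "somewhat more than half" to "well under 1%" range
for ratio in (0.5, 0.3, 0.1, 0.01):
    census = ne_best_fit / ratio
    print(f"Ne/N = {ratio:>4}: census size ~ {census:,.0f} people")
```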

The review of the literature in the paper recounts many previous estimates of the same quantity, after which the authors argue that their approach is better than their predecessors' approaches, despite the quite small dataset that was analyzed.
The first quantitative approach to infer the effective population size of the founder Native American population was developed by Hey (2005), who did a meta-analysis of nine sequence loci, used a likelihood-based inference and assumed a isolation with migration (IM) population model to suggest an extreme population bottleneck with an effective population size of ~70 individuals. Since this pioneer work, other groups tried to replicate this result using multilocus autosomal data, with partial success. Kitchen et al. (2008) re-analyzed Hey’s dataset, adding mtDNA genomic data under different priors for migration rates and suggested an effective population size ranging from 1,000 to 5,400 individuals. Ray et al. (2010), using a dataset of 401 STRs, estimated an effective founder population size between 42 and 140 individuals (with a median of 87 individuals). Between these two extremes, Fagundes et al. (2007), based on the re-sequencing of 50 short loci, estimated an effective founder size of ~450 individuals (with a 95% credible interval (CI) ranging from 71 to 1,280 individuals). Recent autosomal data generated from admixed Latino populations also provided very different figures. Gutenkunst et al. (2009), based on a very large dataset of more than 13,000 SNPs, suggested a value of 800 effective individuals, with a confidence interval between 140 and 1,600 individuals; while Wall et al. (2011), using resequencing data, estimated a bottleneck effective population size not larger than 150 individuals. Gravel et al. (2013) proposed intermediate values of about 514 effective individuals, ranging between 316 and 2,264 individuals.