Wednesday, November 30, 2016

Academics Aren't Paid (Directly) For Their Publications

As 4Gravitons (who is a graduate student or post-grad at the Perimeter Institute in Waterloo, Canada, one of the premier theoretical physics shops in the world) recently pointed out in his blog:
In fact, academics don’t get paid by databases, journals, or anyone else that publishes or hosts our work. In the case of journals, we’re often the ones who pay publication fees. Those who write textbooks get royalties, but that’s about it on that front.
I grew up with a father who was a professor and a mother who was a university administrator who helped professors get grants for research and comply with human subjects requirements, so I've known this for as long as I knew it was something to know about. But lots of people don't realize this fact.

Now, this doesn't mean that professors don't receive economic benefit from publishing. The business model of academia works like this:

1. You need to do research, ideally publishable in some form, to earn a PhD, and a PhD, or good progress towards earning one in the very near future, is the basic prerequisite for being hired as a professor.

2.  Professors are initially hired for one to three year fixed terms as lecturers or as tenure track "Assistant Professors".

3.  An Assistant Professor is evaluated for tenure (usually after three years, but practice varies and sometimes there are multiple stints as an Assistant Professor at the same institution or successive ones). If you get tenure, you are usually simultaneously promoted to "Associate Professor" and get a raise. If you don't get tenure, you may be given another shot, but usually, you are terminated.

4.  An Associate Professor with tenure can then be evaluated for promotion to full "Professor" with greater prestige and higher pay.

In all of the main career steps in an academic's life: getting a PhD, getting hired as a tenure track professor, earning tenure, getting promoted to Associate Professor, and getting promoted to full Professor, the dominant consideration is what research you have published in peer reviewed scholarly journals and how significant that research is (e.g. measured by citations in other scholarly work). There are other factors, but that is the dominant one. Hence the phrase, "publish or perish".

Your publications are also the primary consideration in how prestigious and high paying a post you will be hired at (often after landing a first post elsewhere) and of your prestige in your field and the academic profession in general.

A professor at a research university has a teaching load expected to use about 25%-67% of his or her time, with the most esteemed professors having the lightest teaching loads. In the balance of your time you are expected, but not required (once you have tenure) to do research, most of which should be potentially publishable in peer reviewed journals.

Subsidies for universities and colleges from state governments that make this possible are the main way that state governments finance basic research.

So professors have strong incentives to publish, but are not directly rewarded for the publications themselves (many of which have arguably weak claims to intellectual property protection due to exceptions for factual compilations and scientific principles).

This is a good thing, because, in the end, their incentive is to produce more papers, not necessarily for those papers to have lots of readers, and even very respectably cited papers are often read by a very small number of readers and purchased by few customers other than academic libraries.

Emergent Gravity

Sabine Hossenfelder's latest post at Backreaction on "Emergent Gravity" is one of her best posts yet at explaining concepts in fundamental physics to the educated layman.

Notably, she emphasizes, as many other explanations of this approach do not, the similarities between thermodynamics (which we know is emergent via statistical mechanics from the mechanics of atoms and molecules) and gravity.
Emergent gravity has been in the news lately because of a new paper by Erik Verlinde
. . . 
Almost all such attempts to have gravity emerge from some underlying “stuff” run into trouble because the “stuff” defines a preferred frame which shouldn’t exist in general relativity. They violate Lorentz-invariance, which we know observationally is fulfilled to very high precision. 
An exception to this is entropic gravity, an idea pioneered by Ted Jacobson 20 years ago. Jacobson pointed out that there are very close relations between gravity and thermodynamics, and this research direction has since gained a lot of momentum. 
The relation between general relativity and thermodynamics in itself doesn’t make gravity emergent, it’s merely a reformulation of gravity. But thermodynamics itself is an emergent theory – it describes the behavior of very large numbers of some kind of small things. Hence, that gravity looks a lot like thermodynamics makes one think that maybe it’s emergent from the interaction of a lot of small things. . . . as long as you’re not looking at very short distances, it might not matter much exactly what gravity emerges from. Like thermodynamics was developed before it could be derived from statistical mechanics, we might be able to develop emergent gravity before we know what to derive it from.
This is only interesting, however, if the gravity that “emerges” is only approximately identical to general relativity, and differs from it in specific ways. For example, if gravity is emergent, then the cosmological constant and/or dark matter might emerge with it, whereas in our current formulation, these have to be added as sources for general relativity. 
So, in summary “emergent gravity” is a rather vague umbrella term that encompasses a large number of models in which gravity isn’t a fundamental interaction. The specific theory of emergent gravity which has recently made headlines is better known as “entropic gravity” and is, I would say, the currently most promising candidate for emergent gravity. It’s believed to be related to, or maybe even be part of string theory, but if there are such links they aren’t presently well understood.
She references the following article for a more technical description of many of the leading theories.
We give a critical overview of various attempts to describe gravity as an emergent phenomenon, starting from examples of condensed matter physics, to arrive to more sophisticated pregeometric models. The common line of thought is to view the graviton as a composite particle/collective mode. However, we will describe many different ways in which this idea is realized in practice.

Lorenzo Sindoni, Emergent Models for Gravity: an Overview of Microscopic Models (May 12, 2012).

The notion that you might be able to derive gravity from first principles through a clever macro-level understanding of particle physics is very exciting indeed. It would be miraculous enough for dark matter and dark energy to emerge naturally from a quantum gravity theory. But it would be even more amazing if quantum gravity itself could be derived from, and emerge naturally from, fundamental particle physics.

Certainly, that hasn't been established yet, but it does seem like a very plausible possibility.

Friday, November 25, 2016

Glueballs Still Elusive

Despite combing through 260 million events that should be able to produce a type of glueball resonance that can't be confused with quarkonium, the researchers have once again, after about four decades of fruitless searching, come up empty. The quality of this particular search at Belle, and the relentless failure of searches over the decades to find any trace of glueballs despite increasingly sophisticated efforts to find them, has me wondering: 

Does some as-yet-unarticulated missing principle of quantum chromodynamics (QCD) forbid glueballs?

The particularly confounding aspect of this is that glueballs are relatively easy to describe mathematically, since they implicate just one of the Standard Model physical constants, the strong force coupling constant. Unlike other hadrons, no physical constants related to quark masses, the weak force coupling constant, or the CKM matrix need to be known to describe them. 

The masses they are predicted to have are not very different from those of all sorts of other known hadrons (this search focused on glueballs predicted to have masses from 2.8 to 4.59 GeV at the two sigma extremes of the predicted masses, in the same general mass range as D mesons and B mesons), and they have well defined, distinctive quantum numbers. These are essentially completely defined from theory.

A search with 260 million events shouldn't be able to miss them if they are produced at any meaningful rate in the studied processes, but the collaborators found nothing. Branching fractions as high as roughly 1 per 5,000 decays of the Upsilon mesons whose decays were studied, which were promising candidates for decaying to glueballs at a detectable branching fraction, have been ruled out.
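To see why this null result bites, here is a minimal back-of-the-envelope sketch (my own illustration, not from the paper) of the naive number of glueball decays that samples of this size would contain at a branching fraction of 1 per 5,000, using the event counts from the abstract quoted below. Real sensitivity is lower because of detection efficiency and the branching fractions of the daughter particles.

```python
# Naive yield estimate at a branching fraction of 1 in 5,000. This ignores
# detection efficiency and daughter branching fractions, both of which reduce
# the observable yield substantially, so it is only an order-of-magnitude check.

n_upsilon_1s = 102e6   # Upsilon(1S) events in the Belle sample (from the abstract)
n_upsilon_2s = 158e6   # Upsilon(2S) events in the Belle sample (from the abstract)
bf_limit = 1 / 5000    # branching fraction scale quoted in the text

for label, n in [("Upsilon(1S)", n_upsilon_1s), ("Upsilon(2S)", n_upsilon_2s)]:
    print(f"{label}: ~{n * bf_limit:,.0f} glueball decays at BF = 1/5000")
# Upsilon(1S): ~20,400 decays; Upsilon(2S): ~31,600 decays
```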

What do we not know about QCD that causes this? If they do exist, why can't we find traces of any of them?

The fact that these "oddballs" cannot mix with quarkonium states is particularly notable, because the usual excuse for not being able to see glueball resonances doesn't apply here.

In QCD, the usual rule is that everything that is permitted is mandatory, so the failure to detect a decay that isn't prohibited by any of the rules of QCD is a big signal that we're missing something, although it is notable that no QCD theoretical estimate of the branching fraction was provided in this study, as some resonances are simply very rare.
The existence of bound states of gluons (so-called “glueballs”), with a rich spectroscopy and a complex phenomenology, is one of the early predictions of the non-abelian nature of strong interactions described by quantum chromodynamics (QCD). However, despite many years of experimental efforts, none of these gluonic states have been established unambiguously. Possible reasons for this include the mixing between glueballs and conventional mesons, the lack of solid information on the glueball production mechanism, and the lack of knowledge about glueball decay properties. Of these difficulties, from the experimental point of view, the most outstanding obstacle is the isolation of glueballs from various quarkonium states.
Fortunately, there is a class of glueballs with three gluons and quantum numbers incompatible with quark-antiquark bound states, called oddballs, that are free of this conundrum. The quantum numbers of such glueballs include J^PC = 0^--, 0^+-, 1^-+, 2^+-, 3^-+, and so on. Among oddballs, special attention should be paid to the 0^-- state (G_0^--), since it is relatively light and can be produced in the decays of vector quarkonium or quarkoniumlike states.

Two 0^-- oddballs are predicted using QCD sum rules with masses of (3.81 ± 0.12) GeV/c^2 and (4.33 ± 0.13) GeV/c^2, while the lowest-lying state calculated using distinct bottom-up holographic models of QCD [3] has a mass of 2.80 GeV/c^2. Although the masses have been calculated, the width and hadronic couplings to any final states remain unknown.

Possible G_0^-- production modes from bottomonium decays are suggested in Ref. [2], including Υ(1S, 2S) → χ_c1 + G_0^--, Υ(1S, 2S) → f_1(1285) + G_0^--, χ_b1 → J/ψ + G_0^--, and χ_b1 → ω + G_0^--. In this paper, we search for 0^-- glueballs in the production modes proposed above and define G(2800), G(3810), and G(4330) as the glueballs with masses fixed at 2.800, 3.810, and 4.330 GeV/c^2, respectively. All the parent particles in the above processes are copiously produced in the Belle experiment, and may decay to the oddballs with modest rates.
Full pdf here.

The abstract and paper are as follows:
We report the first search for the J^PC = 0^-- glueball in Υ(1S) and Υ(2S) decays with data samples of (102 ± 2) million and (158 ± 4) million events, respectively, collected with the Belle detector. No significant signals are observed in any of the proposed production modes, and the 90% credibility level upper limits on their branching fractions in Υ(1S) and Υ(2S) decays are obtained. The inclusive branching fractions of the Υ(1S) and Υ(2S) decays into final states with a χ_c1 are measured to be B(Υ(1S) → χ_c1 + anything) = (1.90 ± 0.43(stat.) ± 0.14(syst.)) × 10^-4 with an improved precision over prior measurements and B(Υ(2S) → χ_c1 + anything) = (2.24 ± 0.44(stat.) ± 0.20(syst.)) × 10^-4 for the first time.
Belle Collaboration, "Search for the 0−− Glueball in Υ(1S) and Υ(2S) decays" (November 22, 2016).

UPDATE November 29, 2016:

A preprint of a back-to-the-drawing-board paper has been posted. It notes these results while estimating a best fit oddball mass about two GeV heavier than the masses used by the Belle Collaboration, an estimate that is still consistent, within its large two sigma error bars, with the heavier of the Belle Collaboration values.
We present the new results for the exotic glueball state 0^-- in the framework of the QCD sum rules. It is shown that previously used three-gluon current does not couple to any glueball bound state. We suggest considering a new current which couples to this exotic state. The resulting values for mass and decay constant of the 0^-- glueball state are M_G = 6.3 +0.8/−1.1 GeV and F_G = 67 ± 6 keV, respectively.
Alexandr Pimikov, Hee-Jung Lee, Nikolai Kochelev, Pengming Zhang, "Revision of exotic 0−− glueball" (November 26, 2016).

Their predicted mass at two sigma error bars is 4.1 GeV to 7.9 GeV, which isn't too impressive for a purely theoretical calculation that is basically a function of just one experimentally measured Standard Model constant (the strong force coupling constant), which is known to a precision of about 1%. The mass estimates used in the Belle search had a mere 3% uncertainty.

The pseudo-scalar bottom eta meson, which is a form of bottomonium, has a mass of 9.398 +/- 0.0032 GeV. The measured mass of one of the parent mesons, the Υ(1S), which is called an upsilon meson and is also a form of bottomonium, is 9.46030 +/- 0.00026 GeV. The measured mass of the Υ(2S), an excited upsilon meson and another form of bottomonium, isn't well established in sources I've found off the bat, but would be expected to be heavier than 9.46 GeV. Measured masses of two kinds of Υ mesons with the right quantum numbers, whose exact excitations have not been determined, are 10.81 GeV and 11.02 GeV. So, the parent would not be barred by mass-energy conservation from decaying into an oddball of this type even if it is on the heavy side of the estimated range.
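A minimal sketch of this comparison in code (my own arithmetic, not from either paper); it mirrors the simple parent-mass comparison in the paragraph above, while a full kinematic check would also subtract the mass of the companion particle produced alongside the glueball in each decay mode.

```python
# Compare the revised two sigma oddball mass range to the bottomonium parent
# masses quoted above. Parent masses are the values given in the text.
central, sig_up, sig_down = 6.3, 0.8, 1.1                  # GeV, Pimikov et al.
low, high = central - 2 * sig_down, central + 2 * sig_up   # 4.1 to 7.9 GeV

parents = {                      # masses in GeV, as quoted in the text above
    "eta_b": 9.398,
    "Upsilon(1S)": 9.46030,
    "higher Upsilon state": 10.81,
    "highest Upsilon state": 11.02,
}

print(f"Revised oddball mass range (two sigma): {low:.1f} to {high:.1f} GeV")
for name, mass in parents.items():
    print(f"{name} at {mass} GeV is heavier than {high:.1f} GeV: {mass > high}")
```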

The introduction to this paper notes that:
The glueballs are composite particles that contain gluons and no valence quarks. The glueballs carry very important information about the gluonic sector of QCD and their study is one of the fundamental tasks for the strong interaction. While the glueballs are expected to exist in QCD theoretically, there was no clear experimental evidence and so the glueballs remain yet undiscovered (see reviews [1, 2]). This is the reason why the investigation of the possible glueball's candidates are included in the programs of the running and projected experiments such as Belle (Japan), BaBar (SLAC, USA), BESIII (Beijing, China), RHIC (Brookhaven, USA), LHC (CERN), GlueX (JLAB, USA), NICA (Dubna, Russia), HIAF (China) and FAIR (GSI, Germany). 
One of the main problems of the glueball spectroscopy is the possible large mixing of the glueballs with ordinary meson states, which leads to the difficulties in disentangling the glueballs in the experiment. In this connection, the discovery of the exotic 0−− glueball, which can not be mixed with the qq¯ states, is one of the fundamental tasks of the glueball spectroscopy. Therefore, it is very important to investigate the properties of this glueball within the QCD’s based approach. One of such approaches is the QCD Sum Rules (SR). The first study of the 0−− glueball by the QCD SR method has been performed recently in [3] where the authors introduced a very specific interpolating current for this three-gluon state. Unfortunately, they only considered SR for the mass of the glueball and did not check the SR for the decay constant. Below we show that their current has pathology, which leads to the negative sign of the imaginary part of the corresponding correlator and, as the result, SR become inconsistent. Considering the fact the study of the glueball is a very hot topic nowadays and the prediction of the value of the exotic glueball mass is crucial for the experimental observation, the revision of the exotic glueball properties within QCD SR is required. 
In this Letter, a new interpolating current, which couples to the 0−− exotic glueball state, is suggested. We calculate the Operator Product Expansion (OPE) for the correlator with this current up to dimension-8 and show that there is a good stability of SR for both mass and decay constant of this state.
The paper then reaches the result stated in the abstract and goes on to conclude that:
Our final result is: M_G = 6.3 +0.8/−1.1 GeV,  F_G = 67 ± 6 keV. (11)
The SR analysis in full QCD (N_f = 3, and nonzero quark condensate ⟨J²⟩) leads to a reduction of the glueball mass by 0.2 GeV. The mass of the exotic glueball in Eq. (11) is not far away from the recent unquenched lattice result M_G = 5.166 ± 1.0 GeV [12] obtained with a rather large pion mass m_π = 360 MeV. 
Here we would like to note that there are three sources of uncertainties in the above analysis for the mass and decay constant: the variation of the gluon condensate, the stability of the SR with respect to the Borel parameter M² dependence in terms of the criteria δ_k^min, and a roughly estimated SR uncertainty coming from the OPE truncation. The latter uncertainty for the decay constant comes from the definition of the fiducial interval, Eq. (9), in the standard assumption that the contribution from missing terms is of the order of the last included nonperturbative term squared: (1/3)² ~ 10%. The same error for the mass can be expected to be suppressed since the related errors for R_(k+1)^(SR) and R_k^(SR) are correlated. The best threshold value is s_0^bf = 52.4 +12.6%/−16.2% GeV² when only the uncertainty of the gluon condensate is included. Note that the fiducial interval for the central value of the gluon condensate is M² ∈ [3.7, 7.3] GeV². We also mention that here we present the results from the k = 0 case for the SR (see Eqs. (8,10)). The mass estimations for higher values of k = 1, 2, 3 are in agreement with the considered k = 0 case within the error bars. 
Summarizing, we present the revision of the QCD SR result for the exotic three-gluon glueball state with quantum numbers J^PC = 0^--. A new interpolating current for this glueball has been constructed. By using this current, we have analyzed the QCD SR consisting of contributions up to the operators of dimension-8 and obtained the estimation of the mass and decay constant of the exotic glueball. 
After the paper was completed we were informed about the negative result of the search for the low mass exotic 0^-- glueball by the Belle Collaboration.
UPDATE (December 4, 2016): Marco Frasca has some interesting comments on the subject here.

Monday, November 21, 2016

Turkey History

Like pumpkins, squash and gourds, turkeys were domesticated in the Americas in the pre-Columbian era, possibly near Oaxaca, Mexico, where I went on my honeymoon (a long, long time ago).
The turkeys we'll be sitting down to eat on Thursday have a history that goes way back. Archaeologists have unearthed a clutch of domesticated turkey eggs used as a ritual offering 1,500 years ago in Oaxaca, Mexico -- some of the earliest evidence of turkey domestication. . . .

"The fact that we see a full clutch of unhatched turkey eggs, along with other juvenile and adult turkey bones nearby, tells us that these birds were domesticated," says Feinman. "It helps to confirm historical information about the use of turkeys in the area." 
The eggs, according to Feinman, were an offering of ritual significance to the Zapotec people. The Zapotec people still live in Oaxaca today, and domesticated turkeys remain important to them. "Turkeys are raised to eat, given as gifts, and used in rituals," says Feinman. "The turkeys are used in the preparation of food for birthdays, baptisms, weddings, and religious festivals." 
The new information about when turkeys were domesticated helps amplify the bigger picture of animal domestication in Mesoamerica. "There were very few domesticated animals in Oaxaca and Mesoamerica in general compared with Eurasia," explains Feinman. "Eurasia had lots of different meat sources, but in Oaxaca 1,500 years ago, the only assuredly domestic meat sources were turkeys and dogs. And while people in Oaxaca today rely largely on meat from animals brought over by the Spanish (like chicken, beef, and pork), turkeys have much greater antiquity in the region and still have great ritual as well as economic significance today." 
The turkeys that are so important to the Zapotec today are similar birds to the ones that play a role in the American tradition of Thanksgiving. "These are not unlike the kinds of turkeys that would have been around at the first Thanksgiving, and similar to the birds that we eat today," says Feinman.
From here. Based upon the following journal article:
Highlights

• Mitla Fortress yields clear evidence for turkey domestication in southern Mexico. 
• Domesticated turkeys were present in the Valley of Oaxaca by the mid Classic period (ca. CE 400–600). 
• Turkeys were raised for subsistence, ritual offerings, and marketable goods. 
• Remains include juvenile and adult birds (hens and toms), whole eggs, and numerous eggshell fragments, with at least one egg-laying hen present. 
• SEM images of the eggshell reveal both unhatched and hatched eggs from a range of incubation stages. 
Abstract 
Recent excavations of two domestic residences at the Mitla Fortress, dating to the Classic to Early Postclassic period (ca. CE 300–1200), have uncovered the remains of juvenile and adult turkeys (both hens and toms), several whole eggs, and numerous eggshell fragments in domestic refuse and ritual offering contexts. Holistically, this is the clearest and most comprehensive evidence to date for turkey domestication in the Central Valleys of Oaxaca, Mexico. Juvenile turkeys range in age, from recently hatched poults to young juvenile birds. Medullary bone, which only forms in female birds before and during the egg-laying cycle, indicates the presence of at least one egg-laying hen. Scanning electron microscope (SEM) images of the eggshell reveals both unhatched and hatched eggs from a range of incubation stages, from unfertilized or newly fertilized eggs to eggs nearing the termination of embryogenesis to hatched poults. We present these new data and explore turkey husbandry, consumption, and use by two residential households at the Mitla Fortress.
Heather A. Lapham, Gary M. Feinman, Linda M. Nicholas. "Turkey husbandry and use in Oaxaca, Mexico: A contextual study of turkey remains and SEM analysis of eggshell from the Mitla Fortress." Journal of Archaeological Science: Reports (July 1, 2016)

Pumpkin History

Happy Thanksgiving (with a hat tip to CNN)!

Pre-Columbian hunters and gatherers may have proto-farmed pumpkins, squash and gourds, which were domesticated in the Americas, after the wild plants' survival was threatened by the megafauna extinction caused by humans' arrival in the Americas. 


Significance 
Squashes, pumpkins, and gourds belonging to the genus Cucurbita were domesticated on several occasions throughout the Americas, beginning around 10,000 years ago. The wild forms of these species are unpalatably bitter to humans and other extant mammals, but their seeds are present in mastodon dung deposits, demonstrating that they may have been dispersed by large-bodied herbivores undeterred by their bitterness. However, Cucurbita may have been poorly adapted to a landscape lacking these large dispersal partners. Our study proposes a link between the disappearance of megafaunal mammals from the landscape, the decline of wild Cucurbita populations, and, ultimately, the evolution of domesticated Cucurbita alongside human cultivators. 
Abstract 
The genus Cucurbita (squashes, pumpkins, gourds) contains numerous domesticated lineages with ancient New World origins. It was broadly distributed in the past but has declined to the point that several of the crops’ progenitor species are scarce or unknown in the wild. We hypothesize that Holocene ecological shifts and megafaunal extinctions severely impacted wild Cucurbita, whereas their domestic counterparts adapted to changing conditions via symbiosis with human cultivators. 
First, we used high-throughput sequencing to analyze complete plastid genomes of 91 total Cucurbita samples, comprising ancient (n = 19), modern wild (n= 30), and modern domestic (n = 42) taxa. This analysis demonstrates independent domestication in eastern North America, evidence of a previously unknown pathway to domestication in northeastern Mexico, and broad archaeological distributions of taxa currently unknown in the wild. Further, sequence similarity between distant wild populations suggests recent fragmentation. Collectively, these results point to wild-type declines coinciding with widespread domestication. 
Second, we hypothesize that the disappearance of large herbivores struck a critical ecological blow against wild Cucurbita, and we take initial steps to consider this hypothesis through cross-mammal analyses of bitter taste receptor gene repertoires. Directly, megafauna consumed Cucurbita fruits and dispersed their seeds; wild Cucurbita were likely left without mutualistic dispersal partners in the Holocene because they are unpalatable to smaller surviving mammals with more bitter taste receptor genes. Indirectly, megafauna maintained mosaic-like landscapes ideal for Cucurbita, and vegetative changes following the megafaunal extinctions likely crowded out their disturbed-ground niche. Thus, anthropogenic landscapes provided favorable growth habitats and willing dispersal partners in the wake of ecological upheaval.

Pumpkins, squashes and gourds were domesticated by pre-Columbian Native Americans many thousands of years ago from five different wild species, three of which still exist today and two of which have no known wild ancestors that still exist. The genetic evidence shows no sign of crossing between subspecies of Cucurbita after domestication. The family tree of these plant sub-species is as follows:


Genetic relationships among 104 accessions of Cucurbita pepo were assessed from polymorphisms in 134 SSR (microsatellite) and four SCAR loci, yielding a total of 418 alleles, distributed among all 20 linkage groups. Genetic distance values were calculated, a dendrogram constructed, and principal coordinate analyses conducted. The results showed 100 of the accessions as distributed among three clusters representing each of the recognized subspecies, pepo, texana, and fraterna. The remaining four accessions, all having very small, round, striped fruits, assumed central positions between the two cultivated subspecies, pepo and texana, suggesting that they are relicts of undescribed wild ancestors of the two domesticated subspecies. In both, subsp. texana and subsp. pepo, accessions belonging to the same cultivar-group (fruit shape) associated with one another. Within subsp. pepo, accessions grown for their seeds or that are generalists, used for both seed and fruit consumption, assumed central positions. Specialized accessions, grown exclusively for consumption of their young fruits, or their mature fruit flesh, or seed oil extraction, tended to assume outlying positions, and the different specializations radiated outward from the center in different directions. Accessions of the longest-fruited cultivar-group, Cocozelle, radiated bidirectionally, indicating independent selection events for long fruits in subsp. pepo probably driven by a common desire to consume the young fruits. Among the accessions tested, there was no evidence for crossing between subspecies after domestication.

The conclusion of the 2011 paper notes that:
Before domestication, Cucurbita pepo evolved into three lineages, subsp. fraterna, which grows wild in northeastern Mexico, subsp. texana, which grows wild in the southeastern United States, and subsp. pepo, which has not been discovered in the wild but may have originated in southern North America. Ornamental cultigens producing the smallest, round, striped fruits are located centrally between subsp. texana and subsp. pepo, suggesting that they are relicts of unknown wild progenitors. No cultigens closely related to subsp. fraterna have been identified. After domestication and long before the arrival of Europeans, subsp. texana evolved into several edible-fruited cultivar-groups characterized by fruit shapes deviating from roundness. In subsp. pepo, the evolution of long-fruited cultivar-groups is more recent and occurred primarily in Europe. Although there is no obvious genetic barrier to crossing among the subspecies, none of the cultivar-groups and none of the accessions examined appears to derive from such crossing.
The body of the 2011 paper notes that:
Cucurbita pepo subsp. texana is divided into distinct sub-clusters​, four of which are based on the edible-fruited cultivar-groups that were defined by fruit shape and named Acorn, Crookneck, Scallop, and Straightneck (Paris 1986). . . . the Scallop Group (T-SC) in the most central position within Cucurbita pepo subsp. texana and, of the edible-fruited cultivar-groups, closest to the wild (T-GT) gourds. Thus, it appears likely that it was the first edible-fruited cultivar-group to have evolved and diversified under the guidance of the pre-Columbian, Native Americans of what is now the United States. Findings of rind fragments of different fruit forms, including lobed, furrowed, and warted, at sites 2,500–3,000 years old in Kentucky, are suggestive of incipient differentiation into the Scallop (T-SC), Acorn (T-AC), and Crookneck (T-CN) Groups (Watson and Yarnell 1966, 1969; Cowan 1997). These three cultivar-groups can be thought of as products of pre-Columbian genetic drift and conservation in isolation that occurred in eastern North America. The Straightneck Group (T-SN) appears to be a much more recent development, judging from the low average GD value among straightneck cultivars (Table 2), consistent with its absence from the historical record prior to the late nineteenth century (Paris 2000a). Earlier results suggested that the Straightneck Group was derived from a cross between a cultivar of the Crookneck Group and a cultivar of the Acorn Group (Paris et al. 2003), but the present results favor a direct derivation from the Crookneck Group. . . .  
The five accessions of the Zucchini Group (ZU) are very closely related, having the lowest within-group GD value. The zucchinis form a distinct sub-cluster, as already observed using other markers. Another sub-cluster is formed by the three accessions representing a market type within the Pumpkin Group (PU), the oil-seed pumpkins, P-PU-GLE, P-PU-STY, and P-PU-WIE. Both, the zucchinis and the oil-seed pumpkins have a short history. The Zucchini Group apparently originated in the environs of Milan in the late nineteenth century (Paris 2000a). The oil-seed pumpkins derive from a recessive, hull-less mutation that occurred in Styria, Austria in the 1880s (Teppner 2000). The Pumpkin Group, the Vegetable Marrow Group (P-VM), and the Cocozelle Group (P-CO), which have considerably longer histories (Paris 2000a), have high within-group GD values and relative high heterozygosities . . . . .  
Even though domestication of subsp. pepo is thought to precede that of subsp. texana by approximately 5,000 years (Smith 2006), selection for deviation from fruit roundness occurred over 2,000 years earlier in subsp. texana, as indicated by rind fragments found in Kentucky.

Genetic Evidence Of Natural Selection In Human Ancestors

What genes were selected for in the course of the evolution of our primate ancestors? 

An examination of multiple primate genomes sheds some light on this question. It looks like genes involved in immune response, sensory perception, metabolism and energy production were under particularly strong selective pressure in our primate ancestors.
Gene set enrichment approaches have been increasingly successful in finding signals of recent polygenic selection in the human genome. 
In this study, we aim at detecting biological pathways affected by positive selection in more ancient human evolutionary history, that is in four branches of the primate tree that lead to modern humans. 
We tested all available protein coding gene trees of the Primates clade for signals of adaptation in the four branches mentioned above, using the likelihood-based branch site test of positive selection. The results of these locus-specific tests were then used as input for a gene set enrichment test, where whole pathways are globally scored for a signal of positive selection, instead of focusing only on outlier "significant" genes. 
We identified several pathways enriched for signals of positive selection, which are mainly involved in immune response, sensory perception, metabolism, and energy production. These pathway-level results were highly significant, at odds with an absence of any functional enrichment when only focusing on top scoring genes. 
Interestingly, several gene sets are found significant at multiple levels in the phylogeny, but in such cases different genes are responsible for the selection signal in the different branches, suggesting that the same function has been optimized in different ways at different times in primate evolution.
Josephine Daub, Sebastien Moretti, Iakov Igorevich Davydov, Laurent Excoffier, Marc Robinson-Rechavi, "Detection of pathways affected by positive selection in primate lineages ancestral to humans" (Pre-Print November 21, 2016). doi: http://dx.doi.org/10.1101/044941

Earlier versions of the paper were released in March and October of this year.
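As a rough illustration of the pathway-level scoring idea described in the abstract (a minimal sketch in the spirit of gene set enrichment, not the authors' actual pipeline), assuming per-gene selection scores are already in hand; the gene names and scores below are made up, and real analyses correct for gene length, tree size, and other confounders:

```python
# Score a pathway by the mean of its genes' selection scores and compare it to
# a permutation null built from random gene sets of the same size.
import random
from statistics import mean

# Hypothetical per-gene selection scores (e.g., -log p from a branch-site test).
gene_scores = {f"gene{i}": random.random() for i in range(1000)}
pathway = [f"gene{i}" for i in range(0, 50)]   # hypothetical pathway gene set

observed = mean(gene_scores[g] for g in pathway)

# Null distribution: mean score of random gene sets of the same size.
all_genes = list(gene_scores)
null = [mean(gene_scores[g] for g in random.sample(all_genes, len(pathway)))
        for _ in range(10000)]

p_value = (1 + sum(m >= observed for m in null)) / (1 + len(null))
print(f"observed pathway mean = {observed:.3f}, permutation p = {p_value:.4f}")
```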

More On Gravitational Self-Interactions

Deur's research program is arguably the most promising in quantum gravity today.

He demonstrates how modeling the self-interaction of the graviton, in the case of a scalar graviton, can reproduce all dark matter phenomena, predict a previously unobserved correlation between the shape of an elliptical galaxy and its apparent dark matter content, and explain at least some dark energy phenomena, without (in principle at least) introducing any new fundamental physical constants to general relativity, or any new particles beyond the Standard Model other than the graviton.

His model also plausibly suggests that the key insights secured from the self-interacting scalar graviton regime and by analogy to QCD might not be materially altered by generalizing these results to the full tensor graviton.

The elegance with which his research explains a huge range of phenomena that seem to call for beyond the Standard Model (and general relativity) physics, with minimal tweaks to existing knowledge, and integrates quantum gravity into a Standard Model-like framework, in the face of extreme computational barriers to addressing these questions in a brute force numerical manner, is remarkable.
We study two self-interacting scalar field theories in their strong regime. 
We numerically investigate them in the static limit using path integrals on a lattice. We first recall the formalism and then recover known static potentials to validate the method and verify that calculations are independent of the choice of the simulation's arbitrary parameters, such as the space discretization size. The calculations in the strong field regime yield linear potentials for both theories. 
We discuss how these theories can represent the Strong Interaction and General Relativity in their static and classical limits.  
In the case of Strong Interaction, the model suggests an origin for the emergence of the confinement scale from the approximately conformal Lagrangian. The model also underlines the role of quantum effects in the appearance of the long-range linear quark-quark potential. 
For General Relativity, the results have important implications on the nature of Dark Matter. In particular, non-perturbative effects naturally provide flat rotation curves for disk galaxies, without need for non-baryonic matter, and explain as well other observations involving Dark Matter such as cluster dynamics or the dark mass of elliptical galaxies.
A. Deur, "Self-interacting scalar fields in their strong regime" (November 17, 2016) (Hat tip to Viljami from the comments).

From the body text:
The (gφ∂^µφ∂_µφ + g²φ²∂^µφ∂_µφ) theory calculations in the strong field regime yield a static potential which varies approximately linearly with distance, see Figs. 5 and 6. This can be pictured as a collapse of the three dimensional system into one dimension. As discussed in Section V D and shown numerically, typical galaxy masses are enough to trigger the onset of the strong regime for GR.
Hence, for two massive bodies, such as two galaxies or two galaxy clusters, this would result in a string containing a large gravity field that links the two bodies –as suggested by the map of the large structures of the universe. That this yields quantitatively the observed dark mass of galaxy clusters and naturally explains the Bullet Cluster observation [18] was discussed in [19]. For a homogeneous disk, the potential becomes logarithmic. 
Furthermore, if the disk density falls exponentially with the radius, as it is the case for disk galaxies, it is trivial to show that a logarithmic potential yields flat rotation curves: a body subjected to such a potential and following a circular orbit (as stars do in disk galaxies to good approximation) follows the equilibrium equation:

v(r) = √( G′ M(r) ),   (14)


with v the tangential speed and M(r) the disk mass integrated up to r, the orbit radius. G′ is an effective coupling constant of dimension GeV⁻¹, similar to the effective coupling σ (string tension) in QCD. Disk galaxy density profiles typically fall exponentially: ρ(r) = M₀ e^(−r/r₀)/(2π r₀²), where M₀ is the total galactic mass and r₀ is a characteristic length particular to a galaxy. Such a profile leads to, after integrating ρ up to r:

v(r) = √( G′ M₀ (1 − (r/r₀ + 1) e^(−r/r₀)) ),   (15)

At small r, the speed rises approximately as v(r) ≈ √(G′M₀) (r/r₀) and flattens at large r: v ≈ √(G′M₀).
This is what is observed for disk galaxies and the present approach yields rotation curves agreeing quantitatively with observations [19]. For a uniform and homogeneous spherical distribution of matter, the system remains three-dimensional and the static potential stays proportional to 1/r. The dependence of the potential with the system’s symmetry suggested a search for a correlation between the shape of galaxies and their dark masses [19]. Evidence for such correlation has been found [20].
VI. SUMMARY AND CONCLUSION 
We have numerically studied non-linearities in scalar field theory.
Limiting ourselves to static systems allowed us to greatly simplify and speed-up the numerical calculations while providing a possible description of the strong regimes of the Strong Interaction and of General Relativity in the static case. The overall validity of the method is verified by recovering analytically known potentials. We further verified the validity of the simplifications in the case of General Relativity by recovering the post-Newtonian formalism. 
Lattice gauge calculations of QCD are well advanced. What justifies developing the present approximation is that it provides fast calculations that can be run on any personal computer, and it may help to isolate the important ingredients leading to confinement. This method is able to provide: 1) the expected field function, 2) a mechanism –straightforwardly applicable to QCD– for the emergence of a mass scale out of a conformal Lagrangian, 3) a running of the field effective mass in qualitative agreement with that seen in QCD, and 4) a potential agreeing with the phenomenological Cornell linear potential up to 0.8 fm, the relevant range for hadronic physics. Quantum effects, which cause couplings to run, are necessary for producing the linear potential and must be supplemented to the approach. A non-running coupling, even with a large value, would only yield a short range Yukawa potential. 
An important benefit of this method is that it may apply to theories, such as General Relativity, that are too CPU-demanding to be easily computed on a lattice. That the method recovers the essential features of QCD supports its application to GR. Since GR is a classical theory, no running coupling needs to be supplemented.
In QCD, the strong regime arises for distances greater than 2 × 10−16 m, while it should arise for gravity at galactic scales: Two massive bodies, such as galaxies, or at larger scale two galaxy clusters, would then be linked by a QCD-like string/flux tube. This would explain the universe’s large scale stringy structure observed by weak gravitational lensing. It also agrees quantitatively with cluster dynamics [19]. Furthermore, in the case of massive disks of exponentially falling density, such as disk galaxies, the logarithmic potential resulting from the strong field non-linearities trivially yields flat rotation curves. Those agree quantitatively with observations [19]. Finally, for a uniform and homogeneous spherical distribution of matter, the non-linearity effects should balance out [19]. Evidences for the consequent correlation expected between galactic ellipticity and galactic dark mass have been found [20].
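As a quick numerical check of the qualitative behavior of Eq. (15) quoted above, here is a minimal sketch. The values G′M₀ = 1 and r₀ = 1 are arbitrary illustrative units of my own choosing, not fitted galaxy parameters from Deur's papers.

```python
# Rotation curve from Eq. (15) for an exponential disk: roughly linear rise in
# v(r) at small r and a flat rotation curve at large r.
import math

def v(r, gprime_m0=1.0, r0=1.0):
    """Tangential speed from Eq. (15); G'M0 and r0 are placeholder units."""
    x = r / r0
    return math.sqrt(gprime_m0 * (1.0 - (x + 1.0) * math.exp(-x)))

for r in [0.1, 0.5, 1, 2, 5, 10, 20]:
    print(f"r = {r:5.1f}   v = {v(r):.3f}")
# v rises from ~0 near r = 0 and approaches sqrt(G'M0) = 1 at large r.
```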
Alexandre Deur has had some key insights into how to explore quantum gravity. 

The first has been to exploit parallels arising from the fact that both the strong force theory of  QCD and gravity involve self-interacting carrier bosons. 

The second has been to simplify the analysis of gravity by starting from a scalar graviton rather than a spin-2 tensor graviton. This produces identical results in the static case. Even in the dynamic case, the deviations from the static case due to tensor components of linear momentum, angular momentum, pressure and electromagnetic flux are often negligible.

Also, the adjustments due to using a self-interacting tensor graviton (full GR) relative to a self-interacting scalar graviton (simplified GR) are unlikely to cancel out the differences between a self-interacting scalar graviton (simplified GR) and a non-self interacting scalar graviton (Newtonian gravity plus propagation at the speed of light and couplings to energy as well as mass).

His description of the impact in GR as pertaining to the "strong field regime" is somewhat misleading, because the effects in question, while involving large masses that generate gravitational fields much stronger than those generated by small masses, are generally only observable when the gravitational field is weak. In stronger gravitational fields (such as those in the vicinity of black holes, or even those involving solar system gravitational fields) the first order effects of gravitons pulling on other objects overwhelm any visible second order effects due to graviton self-interactions (particularly in the circularly symmetrical case).

Thursday, November 17, 2016

Modified Gravity Can Explain Cluster Data

The galaxy cluster system Abell 1689 has been well studied and yields good lensing and X-ray gas data. Modified gravity (MOG) is applied to the cluster Abell 1689 and the acceleration data is well fitted without assuming dark matter. Newtonian dynamics and Modified Newtonian dynamics (MOND) are shown not to fit the acceleration data, while a dark matter model based on the Navarro-Frenk-White (NFW) mass profile is shown not to fit the acceleration data below ~ 200 kpc.
J. W. Moffat and M. H. Zhoolideh Haghighi, "Modified gravity (MOG) can fit the acceleration data for the cluster Abell 1689" (16 Nov 2016).

The introduction observes that:
MOG has passed successful tests in explaining rotation velocity data of spiral and dwarf galaxies (Moffat & Rahvar (2013)), (Zhoolideh Haghighi & Rahvar (2016)), globular clusters (Moffat & Toth (2008b)) and clusters of galaxies (Moffat & Rahvar (2014)). Recently, it was claimed (Nieuwenhuizen (2016)) that no modified gravity theory can fit the Abell 1689 acceleration data without including dark matter or heavy (sterile) neutrinos. The cluster A1689 is important, for good lensing and gas data are available and we have data from 3kpc to 3Mpc. We will show that MOND (Milgrom (1983)) does not fit the A1689 acceleration data, nor does the dark matter model based on an NFW mass profile. However, MOG does fit the A1689 acceleration data without dark matter.
The conclusion of the paper notes:
The fully covariant and Lorentz invariant MOG theory fits galaxy dynamics data and cluster data. It also fits the merging clusters Bullet Cluster and the Train Wreck Cluster (Abell 520) without dark matter (Brownstein & Moffat (2007); Israel & Moffat (2016)). A MOG application to cosmology without dark matter can explain structure growth and the CMB data (Moffat & Toth (2013)). The fitting of the cluster A1689 data adds an important success for MOG as an alternative gravity theory without dark matter.
I will leave a detailed explanation of MOG theory and any analysis of this paper's conclusions for another day.
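For context on the dark matter comparison model mentioned in the paper's introduction, here is a minimal sketch of the Newtonian acceleration implied by an NFW mass profile, the model the paper says fails below ~200 kpc. The density normalization and scale radius below are placeholder values of my own choosing for illustration, not the Abell 1689 fit parameters.

```python
# Enclosed mass and Newtonian acceleration for an NFW dark matter halo,
# rho(r) = rho_s / [(r/r_s) * (1 + r/r_s)^2].
import math

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def nfw_enclosed_mass(r_kpc, rho_s=1.0e7, r_s=300.0):
    """Mass (Msun) inside radius r; rho_s and r_s are illustrative placeholders."""
    x = r_kpc / r_s
    return 4.0 * math.pi * rho_s * r_s**3 * (math.log(1.0 + x) - x / (1.0 + x))

def nfw_acceleration(r_kpc, **kw):
    """Newtonian acceleration g = G * M(<r) / r^2 in (km/s)^2 per kpc."""
    return G * nfw_enclosed_mass(r_kpc, **kw) / r_kpc**2

for r in [50, 100, 200, 500, 1000, 3000]:
    print(f"r = {r:5d} kpc   g = {nfw_acceleration(r):.3e} (km/s)^2/kpc")
```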

Tuesday, November 15, 2016

Metabolism and Mass Are Deeply Related And Powerfully Drive Evolution

A new preprint describes in detail how metabolism and body mass are intimately related to each other and to other key aspects of how they have evolved. Natural selection has been a unifying force explaining this process from the scale of viruses all of the way up to elephants and whales. 
I show that the natural selection of metabolism and mass is selecting for the major life history and allometric transitions that define lifeforms from viruses, over prokaryotes and larger unicells, to multicellular animals with sexual reproduction. 
The proposed selection is driven by a mass specific metabolism that is selected as the pace of the resource handling that generates net energy for self-replication. This implies that an initial selection of mass is given by a dependence of mass specific metabolism on mass in replicators that are close to a lower size limit. 
A maximum dependence that is sublinear is shown to select for virus-like replicators with no intrinsic metabolism, no cell, and practically no mass. A maximum superlinear dependence is instead selecting for prokaryote-like self-replicating cells with asexual reproduction and incomplete metabolic pathways. 
These self-replicating cells have selection for increased net energy, and this generates a gradual unfolding of a population dynamic feed-back selection from interactive competition. The incomplete feed-back is shown to select for larger unicells with more developed metabolic pathways, and the completely developed feed-back to select for multicellular animals with sexual reproduction. 
This model unifies natural selection from viruses to multicellular animals, and it provides a parsimonious explanation where allometries and major life history transitions evolve from the natural selection of metabolism and mass.
Lars Witting, "The natural selection of metabolism and mass selects lifeforms from viruses to multicellular animals" (November 15, 2016). doi: http://dx.doi.org/10.1101/087650

Monday, November 14, 2016

Population Waves In Greenland

Paleo-Eskimos arrived in Greenland around 2500 BCE and had a diet including a substantial component of bowhead whales, based upon ancient DNA analysis of the contents of middens from 2000 BCE.
Previously it was thought the Thule culture was the first to hunt and eat whales extensively, 800 to 600 years ago. Evidence of hunting large mammals prior to this was largely missing because of the lack of bones and weapons for hunting. 
However, samples show whale was very much part of the diets of humans before 1200 CE. Most notably, findings revealed bowhead whales and other large mammals were being exploited by the Saqqaq culture 4,000 years ago. 
At one of the sites, the bowhead whale was the most abundant species identified, making up almost half of the DNA analysed. At another site, it was the second or third most utilised species. 
The team believes these prehistoric Greenlanders would have transported large carcasses from the shore to the settlement as a result of their size – a bowhead whale can reach up to 60ft and weight between 75 and 100 tonnes. 
"The underrepresentation of whale bones in archaeological sites is a well-known phenomenon, typically ascribed to difficulties in transporting large carcasses from shore to the settlement in combination with the higher value of blubber or meat compared with bones," they wrote. 
"In the Arctic, several studies have suggested that the fossil record may underestimate the importance of whales to ancient Arctic cultures, however, the lack of suitable methods to detect remains of tissue like blubber and meat in sediment have prevented further investigations on this matter. As such, our findings represent the first tangible evidence that bone counts alone may underestimate large whales in Arctic midden remains." 
Concluding, they added: "These findings expand our current knowledge of the Paleo-Inuit and illustrates that the Saqqaq people had a wider diet-breadth than was previously thought and were able to exploit most of the mammals available to them."
Vikings arrived around 1000 CE, but had vanished by the 15th century, possibly as a consequence of a failure to adapt to climate changes during the Little Ice Age. But, the demise of the four or five century long occupation was more complex than that:
Over the last decade, however, new excavations across the North Atlantic have forced archaeologists to revise some of these long-held views. An international research collective called the North Atlantic Biocultural Organisation (NABO) has accumulated precise new data on ancient settlement patterns, diet, and landscape. The findings suggest that the Greenland Norse focused less on livestock and more on trade, especially in walrus ivory, and that for food they relied more on the sea than on their pastures. There's no doubt that climate stressed the colony, but the emerging narrative is not of an agricultural society short on food, but a hunting society short on labor and susceptible to catastrophes at sea and social unrest.
Read the whole thing, which is dense with paleo-climate data and a rich description of Viking Greenlanders' history.

A Short History Of Nuclear Binding Energy And The Nuclear Force

In the Standard Model of particle physics, the nuclear binding energy that binds protons and neutrons into atomic nuclei arises as a spillover from the strong force that binds quarks together into hadrons via an exchange of gluons according to the rules of quantum chromodynamics (QCD). It is mediated mostly via pions, rho mesons, and omega mesons exchanged between protons and neutrons in the nucleus of an atom. 

But, for almost all practical purposes, what matters is the nuclear binding energy in an atom that arises from these interactions and not the details of the process that give rise to nuclear binding energy. Nuclear binding energy is the most important thing you need to know in order to do engineering and make predictions related to nuclear fission and nuclear fusion.


The nuclear binding energy of atoms in real life is summarized in the chart above. 

Weizsäcker's formula

For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula (also called Weizsäcker's formula, or the Bethe–Weizsäcker formula, or the Bethe–Weizsäcker mass formula) for the binding energy (BE) is:

BE = a·A − b·A^(2/3) − c·Z(Z−1)/A^(1/3) − d·(N−Z)²/A ± δ

where the coefficients a, b, c, d and the pairing term δ are determined empirically by fitting to measured nuclear masses; dividing by A gives the binding energy per nucleon.
The first term, a·A, is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The term b·A^(2/3) is a surface tension effect and is proportional to the number of nucleons that are situated on the nuclear surface; it is largest (relative to the total) for light nuclei. The term c·Z(Z−1)/A^(1/3) is the Coulomb electrostatic repulsion; this becomes more important as Z increases. The symmetry correction term d·(N−Z)²/A takes into account the fact that in the absence of other effects the most stable arrangement has equal numbers of protons and neutrons; this is because the n-p interaction in a nucleus is stronger than either the n-n or p-p interaction. The pairing term δ is purely empirical; it is + for even-even nuclei and − for odd-odd nuclei.

According to Wikipedia the formula: "gives a good approximation for atomic masses and several other effects, but does not explain the appearance of magic numbers of protons and neutrons, and the extra binding-energy and measure of stability that are associated with these numbers of nucleons. . . . The semi-empirical mass formula provides a good fit to heavier nuclei, and a poor fit to very light nuclei, especially 4He. This is because the formula does not consider the internal shell structure of the nucleus. For light nuclei, it is usually better to use a model that takes this structure into account."
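As a concreteness check on the formula above, a minimal sketch in code. The coefficient values are commonly quoted textbook fits and are my assumption, since the source's own coefficient values did not survive in the text; likewise, the pairing term is taken to scale as 1/√A, one common convention among several.

```python
# Semi-empirical (Bethe-Weizsacker) binding energy, using the term structure
# described above. Coefficient values are assumed typical fits; other fits
# quote somewhat different numbers.
import math

a, b, c, d = 15.8, 18.3, 0.714, 23.2   # MeV (assumed typical fitted values)
pairing = 12.0                          # MeV, scale of the pairing term (assumed)

def binding_energy(Z, N):
    """Total binding energy in MeV for a nucleus with Z protons and N neutrons."""
    A = Z + N
    if Z % 2 == 0 and N % 2 == 0:
        delta = +pairing / math.sqrt(A)   # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -pairing / math.sqrt(A)   # odd-odd: less binding
    else:
        delta = 0.0
    return (a * A
            - b * A ** (2 / 3)
            - c * Z * (Z - 1) / A ** (1 / 3)
            - d * (N - Z) ** 2 / A
            + delta)

# Iron-56 (Z = 26, N = 30): the measured value is about 8.8 MeV per nucleon.
be = binding_energy(26, 30)
print(f"Fe-56: BE = {be:.1f} MeV, BE/A = {be / 56:.2f} MeV per nucleon")
```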
Magic Numbers

What are the magic numbers (again per Wikipedia)?
In nuclear physics, a magic number is a number of nucleons (either protons or neutrons, separately) such that they are arranged into complete shells within the atomic nucleus. The seven most widely recognized magic numbers as of 2007 are 2, 8, 20, 28, 50, 82, and 126. Atomic nuclei consisting of such a magic number of nucleons have a higher average binding energy per nucleon than one would expect based upon predictions such as the semi-empirical mass formula and are hence more stable against nuclear decay. 
The unusual stability of isotopes having magic numbers means that transuranium elements can be created with extremely large nuclei and yet not be subject to the extremely rapid radioactive decay normally associated with high atomic numbers. 
Large isotopes with magic numbers of nucleons are said to exist in an island of stability. Unlike the magic numbers 2–126, which are realized in spherical nuclei, theoretical calculations predict that nuclei in the island of stability are deformed. Before this was realized, higher magic numbers, such as 184, 258, 350, and 462, were predicted based on simple calculations that assumed spherical shapes: these are generated by the formula 2[C(n,1) + C(n,2) + C(n,3)] = (n³ + 5n)/3, where C(n,k) is the binomial coefficient. It is now believed that the sequence of spherical magic numbers cannot be extended in this way. Further predicted magic numbers are 114, 122, 124, and 164 for protons as well as 184, 196, 236, and 318 for neutrons. . . . 
Nuclei which have neutron number and proton (atomic) numbers each equal to one of the magic numbers are called "double magic", and are especially stable against decay. Examples of double magic isotopes include helium-4, oxygen-16, calcium-40, calcium-48, nickel-48, nickel-78, and lead-208. 
Double-magic effects may allow existence of stable isotopes which otherwise would not have been expected. An example is calcium-40, with 20 neutrons and 20 protons, which is the heaviest stable isotope made of the same number of protons and neutrons. Both calcium-48 and nickel-48 are double magic because calcium-48 has 20 protons and 28 neutrons while nickel-48 has 28 protons and 20 neutrons. Calcium-48 is very neutron-rich for such a light element, but like calcium-40, it is made stable by being double magic. Nickel-48, discovered in 1999, is the most proton-rich isotope known beyond helium-3. At the other extreme, nickel-78 is also doubly magical, with 28 protons and 50 neutrons, a ratio observed only in much heavier elements, apart from tritium with one proton and two neutrons (Ni-78: 28/50 = 0.56; U-238: 92/146 = 0.63). 
Magic number shell effects are seen in ordinary abundances of elements: helium-4 is among the most abundant (and stable) nuclei in the universe and lead-208 is the heaviest stable nuclide. 
Magic effects can keep unstable nuclides from decaying as rapidly as would otherwise be expected. For example, the nuclides tin-100 and tin-132 are examples of doubly magic isotopes of tin that are unstable, and represent endpoints beyond which stability drops off rapidly.
The nuclear shell model forms the basis of the "magic number" determination.
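A minimal sketch of the double-magic check described above; the magic-number list and the example nuclides all come from the quoted passage.

```python
# Check which of the example nuclides are double magic, i.e. have both a magic
# proton number and a magic neutron number.
MAGIC = {2, 8, 20, 28, 50, 82, 126}

examples = {          # name: (protons Z, neutrons N)
    "helium-4": (2, 2),
    "oxygen-16": (8, 8),
    "calcium-40": (20, 20),
    "calcium-48": (20, 28),
    "nickel-48": (28, 20),
    "nickel-78": (28, 50),
    "lead-208": (82, 126),
}

for name, (Z, N) in examples.items():
    print(f"{name}: Z = {Z}, N = {N}, double magic = {Z in MAGIC and N in MAGIC}")
```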

The Nuclear Force a.k.a. Residual Strong Force 


Nuclear binding energy arises from the nuclear force, a.k.a. the residual strong force, a.k.a. the strong nuclear force, which should not be confused with the gluon-mediated strong interaction of QCD from which it residually derives. In the meson-exchange picture, this force is carried by pions (pseudoscalar mesons made of up and down quarks), charged rho mesons (made of an up quark and an antidown quark, or an antiup quark and a down quark), the neutral rho meson, and omega mesons (vector mesons that combine the same quark-antiquark components as the neutral rho meson in a different way). Per Wikipedia:
The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometer (fm, or 1.0 × 10−15 metres) between their centers, but rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. By comparison, the size of an atom, measured in angstroms (Å, or 1.0 × 10−10 m), is five orders of magnitude larger. The nuclear force is not simple, however, since it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons.

A chart from Wikipedia showing the strength of the nuclear force as a function of distance (it ignores the more complex aspects of the nuclear force).
The Yukawa potential [of Yukawa per his 1934 theory shown above] (also called a screened Coulomb potential) is a potential of the form

V(r) = −g² · e^(−m·r) / r (in natural units),

where g is a magnitude scaling constant, i.e., the amplitude of the potential, m is the Yukawa particle mass, and r is the radial distance to the particle. The potential is monotone increasing, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance between particles, r, hence it models a central force. . . .
To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's famous formula E = mc2), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect". 
The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence. The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum, which under the action of central forces is conserved. 
The symmetry resulting in the strong force, proposed by Werner Heisenberg, is that protons and neutrons are identical in every respect, other than their charge. This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum number. The strong force is invariant under SU(2) transformations, just as are particles with intrinsic spin. Isospin and intrinsic spin are related under this SU(2) symmetry group. There are only strong attractions when the total isospin is 0, which is confirmed by experiment. 
Our understanding of the nuclear force is obtained by scattering experiments and the binding energy of light nuclei. 
The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons. The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture.  
The nuclear force is distinct from what historically was known as the weak nuclear force. The weak interaction is one of the four fundamental interactions, and plays a role in such processes as beta decay. The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa.
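To put a number on the mass defect mentioned above for the simplest bound nucleus: the deuteron's measured mass (about 1875.61 MeV/c²) is roughly 2.22 MeV/c² less than the sum of the proton mass (about 938.27 MeV/c²) and the neutron mass (about 939.57 MeV/c²), and that missing 2.22 MeV is precisely the deuteron's binding energy, the energy released when a proton and a neutron bind together.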
Another presentation of these concepts can be found here.
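As a rough numerical illustration of the distance dependence described in the quoted passage, the sketch below evaluates a single attractive Yukawa term with its range set by the pion mass. This is an illustrative simplification I am assuming, with an arbitrary overall strength; a realistic nucleon–nucleon potential also needs the repulsive core, tensor terms, and heavier-meson exchanges discussed above.

```python
import math

HBAR_C = 197.327              # MeV * fm
M_PION = 139.57               # MeV, charged pion mass (neutral pion ~134.98 MeV)
RANGE  = HBAR_C / M_PION      # pion Compton wavelength, ~1.41 fm

def yukawa(r_fm: float, g2: float = 1.0) -> float:
    """Attractive Yukawa potential -g^2 * exp(-r/RANGE) / r.
    g2 is an arbitrary overall strength (set to 1), since only the shape
    with distance is being illustrated here, not the absolute depth."""
    return -g2 * math.exp(-r_fm / RANGE) / r_fm

for r in (0.7, 1.0, 1.5, 2.5, 5.0):
    print(f"r = {r:>3} fm  V/V(1 fm) = {yukawa(r) / yukawa(1.0):6.3f}")
# The magnitude at 2.5 fm is only ~14% of its value at 1 fm, and ~1% at 5 fm:
# the rapid falloff described in the quoted passage. The repulsive core below
# ~0.7 fm and the tensor/spin dependence are not captured by this single term.
```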


Feynman diagram of a strong proton–neutron interaction mediated by a neutral pion. Time proceeds from left to right.
Historical Timing
The semi-empirical formula for nuclear binding energy was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today.
Magic number shell effects were first noted in 1933, although the idea was largely dropped until it was rediscovered in 1948.
In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. Although, in light of quantum chromodynamics (QCD), meson theory is no longer perceived as fundamental, the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential. 
Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force, but had a tensor character. Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics.
Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, such as the Woods–Saxon potential (1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968). In recent years, experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD.
By comparison, the muon was discovered in 1936 (its discovery came as a surprise and had not been predicted) and confirmed in 1937, although it was originally believed to be the pion, the meson predicted by Hideki Yukawa in 1934; the pion itself was not discovered until 1947.

Kaons, the first particles discovered with the property of "strangeness" (i.e. a strange quark component), were discovered in 1947 and elucidated further through 1955, although the way that these mesons fit into the larger picture was not ascertained until the quark model was proposed in 1964.

The neutrino was predicted in 1930 and first experimentally detected in 1956. The muon neutrino was detected in 1962 and the tau neutrino was detected in 2000 (with each predicted not long after the charged counterpart was detected).

More of the history of nuclear mass measurements and evaluation can be found in this 2006 paper.