Friday, September 21, 2018

Dzudzuana Ice Age foragers: a different type of Caucasus hunter-gatherer (Lazaridis et al. 2018)

The preprint is available at bioRxiv (full citation and DOI below). Below is the abstract; emphasis is mine:

The earliest ancient DNA data of modern humans from Europe dates to ~40 thousand years ago, but that from the Caucasus and the Near East to only ~14 thousand years ago, from populations who lived long after the Last Glacial Maximum (LGM) ~26.5-19 thousand years ago. To address this imbalance and to better understand the relationship of Europeans and Near Easterners, we report genome-wide data from two ~26 thousand year old individuals from Dzudzuana Cave in Georgia in the Caucasus from around the beginning of the LGM. Surprisingly, the Dzudzuana population was more closely related to early agriculturalists from western Anatolia ~8 thousand years ago than to the hunter-gatherers of the Caucasus from the same region of western Georgia of ~13-10 thousand years ago. Most of the Dzudzuana population’s ancestry was deeply related to the post-glacial western European hunter-gatherers of the ‘Villabruna cluster’, but it also had ancestry from a lineage that had separated from the great majority of non-African populations before they separated from each other, proving that such ‘Basal Eurasians’ were present in West Eurasia twice as early as previously recorded. We document major population turnover in the Near East after the time of Dzudzuana, showing that the highly differentiated Holocene populations of the region were formed by ‘Ancient North Eurasian’ admixture into the Caucasus and Iran and North African admixture into the Natufians of the Levant. We finally show that the Dzudzuana population contributed the majority of the ancestry of post-Ice Age people in the Near East, North Africa, and even parts of Europe, thereby becoming the largest single contributor of ancestry of all present-day West Eurasians.

Lazaridis et al., Paleolithic DNA from the Caucasus reveals core of West Eurasian ancestry, bioRxiv, posted September 21, 2018, doi: https://doi.org/10.1101/423079


Simulations uncover why supernova explosions of some white dwarfs produce so much...

Figure 1: An artist’s conception of a single-degenerate Type Ia supernova scenario. Due to the stronger gravitational force from the white dwarf on the left, the outer material of the bigger, slightly evolved main-sequence star on the right is torn away and flows onto the white dwarf, eventually increasing the white dwarf’s mass toward the Chandrasekhar mass. This carbon-oxygen white dwarf will later explode as a Type Ia supernova. (Credit: Kavli IPMU)

Figure 2: Colour plot of the temperature distribution of the benchmark Type Ia supernova model about 1 second after the explosion. The result was produced with the deflagration model including a deflagration-detonation transition. (Credit: Leung et al.)

Figure 3: Distributions of representative elements in the ejecta, as a function of ejecta velocity, in a typical Type Ia supernova after all major nuclear reactions have ended. Colours represent the sites where the corresponding elements are produced. The arrow indicates the motion of the ejecta. (Credit: Leung et al.)

Figure 4: 57Ni mass plotted against 56Ni mass for the models presented in this work. Observed data from the Type Ia supernova SN 2012cg are also included. The data points along the line stand for white dwarf models with masses from 1.30 to 1.38 solar masses. (Credit: Leung et al.)

Figure 5: X-ray, optical & infrared composite image of 3C 397 (credit: X-ray: NASA/CXC/Univ of Manitoba/S.Safi-Harb et al, Optical: DSS, Infrared: NASA/JPL-Caltech)

Figure 6: Mass ratios Mn/Fe plotted against Ni/Fe for the models presented in this work. Observed data from the Type Ia supernova remnant 3C 397 are also included. The data points along the line stand for white dwarf models with masses from 1.30 to 1.38 solar masses. (Credit: Leung et al.)

Researchers have found that white dwarf stars with masses close to the maximum stable mass (the Chandrasekhar mass) are likely to produce large amounts of manganese, iron, and nickel when they accrete matter from an orbiting companion star and explode as Type Ia supernovae (figure 1).

A Type Ia supernova is a thermonuclear explosion (figure 2) of a carbon-oxygen white dwarf in a binary system, in which the white dwarf and a companion star orbit one another. In the Universe, Type Ia supernovae are the main production sites for iron-peak elements, including manganese, iron, and nickel, and for some intermediate-mass elements, including silicon and sulfur (figure 3).

However, researchers today cannot agree on what kind of binary system triggers a white dwarf to explode. Moreover, recent extensive observations have revealed a large diversity in the nucleosynthesis products (the new atomic nuclei created from existing nuclei in the star by nuclear fusion) of Type Ia supernovae and their remnants, in particular in the amounts of manganese, stable nickel, and the radioactive isotopes nickel-56 and nickel-57 (figure 4).

To uncover the origin of this diversity, Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) Project Researcher Shing-Chi Leung and Senior Scientist Ken’ichi Nomoto carried out simulations using the most accurate scheme to date for the multi-dimensional hydrodynamics of Type Ia supernova models. They examined how the chemical abundance patterns and nucleosynthesis yields depend on the properties of the white dwarfs and their progenitors.

“The most important and unique part of this study is that this is so far the largest parameter survey in the parameter space for the Type Ia supernova yield using the Chandrasekhar mass white dwarf,” said Leung.

A particularly interesting case was the supernova remnant 3C 397 (figure 5), which lies in the galactic disk about 5.5 kpc from the Galactic center. Its abundance ratios of stable manganese/iron and nickel/iron were found to be two and four times those of the Sun, respectively. Leung and Nomoto found that the abundance ratios among manganese, iron and nickel are sensitive to the white dwarf’s mass and metallicity (how abundant it is in elements heavier than hydrogen and helium). The measured values of 3C 397 can be explained if the white dwarf had a mass as high as the Chandrasekhar mass and a high metallicity (figure 6).

The results suggest that remnant 3C 397 could not be the product of an explosion of a relatively low-mass (sub-Chandrasekhar mass) white dwarf. Moreover, the white dwarf must have had a metallicity higher than the Sun’s, in contrast to the neighboring stars, which typically have lower metallicity.
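The comparison described above can be sketched as a toy grid search: express the observed Mn/Fe and Ni/Fe ratios relative to solar and pick the model whose predicted ratios come closest. The observed values (roughly 2x and 4x solar for 3C 397) come from the article; the model-grid numbers below are hypothetical placeholders, not the authors' computed yields.

```python
import math

# Hypothetical model grid: (mass in Msun, metallicity in Zsun) -> (Mn/Fe, Ni/Fe),
# both relative to solar. These values are illustrative placeholders only.
model_grid = {
    (1.30, 1.0): (1.2, 1.8),
    (1.38, 1.0): (1.6, 2.5),
    (1.38, 2.0): (2.1, 3.9),  # near-Chandrasekhar mass, super-solar metallicity
}

# 3C 397: Mn/Fe ~ 2x solar, Ni/Fe ~ 4x solar (from the article)
observed = (2.0, 4.0)

def distance(pred, obs):
    # Log-space distance, so a factor of 2 too high and too low count equally.
    return math.hypot(math.log(pred[0] / obs[0]), math.log(pred[1] / obs[1]))

best = min(model_grid, key=lambda k: distance(model_grid[k], observed))
print(best)  # -> (1.38, 2.0): the near-Chandrasekhar, metal-rich model fits best
```

With these placeholder numbers the metric selects the high-mass, high-metallicity model, mirroring the qualitative conclusion of the study.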

The finding provides important clues to the controversial question of whether a white dwarf’s mass is close to the Chandrasekhar mass, or sub-Chandrasekhar, when it explodes as a Type Ia supernova.

The results will be useful in future studies of chemical evolution of galaxies for a wide range of metallicities, and encourage researchers to include super-solar metallicity models as a complete set of stellar models.

Leung says the next step of this study would involve testing their model against more observational data and extending it to another subclass of Type Ia supernovae.

These results were published in the July 10 issue of The Astrophysical Journal.

Paper details

Journal: The Astrophysical Journal
Title: Explosive Nucleosynthesis in Near-Chandrasekhar Mass White Dwarf Models for Type Ia supernovae: Dependence on Model Parameters
Authors: Shing-Chi Leung (1), Ken’ichi Nomoto (1)

Author affiliations:

1. Kavli Institute for the Physics and Mathematics of the Universe, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583, Japan

DOI: 10.3847/1538-4357/aac2df (Published 13 July, 2018)

Research contact

Shing Chi Leung
Project Researcher
Kavli Institute for the Physics and Mathematics of the Universe
University of Tokyo
E-mail: shingchi.leun@ipmu.jp

Ken’ichi Nomoto
Principal Investigator and Project Professor
Kavli Institute for the Physics and Mathematics of the Universe
University of Tokyo
TEL: +81-04-7136-6567
E-mail: nomoto@astron.s.u-tokyo.ac.jp

Media contact

Motoko Kakubayashi
Press officer
Kavli Institute for the Physics and Mathematics of the Universe
University of Tokyo
E-mail: press@ipmu.jp

Source: Kavli IPMU


Unprecedented ice loss in Russian ice cap

In the last few years, the Vavilov Ice Cap in the Russian High Arctic has dramatically accelerated, sliding as much as 82 feet a day in 2015, according to a new multi-national, multi-institute study led by CIRES Fellow Mike Willis, an assistant professor of Geology at CU Boulder. That dwarfs the ice’s previous average speed of about 2 inches per day and has challenged scientists’ assumptions about the stability of the cold ice caps dotting Earth’s high latitudes.

[Credit: Whyjay Zheng/Cornell University using Landsat imagery by NASA/USGS]

“In a warming climate, glacier acceleration is becoming more and more common, but the rate of ice loss at Vavilov is extreme and unexpected,” said Mike Willis, CIRES Fellow and lead author of the paper published this week in Earth and Planetary Science Letters.

Glaciers and ice caps like Vavilov cover nearly 300,000 square miles of Earth’s surface and hold about a foot of potential sea-level rise. Scientists have never seen such acceleration in this kind of ice cap before, and the authors of the new paper wrote that their finding raises the possibility that other, currently stable ice caps may be more vulnerable than expected.

For the new assessment, researchers played the part of forensic ice detectives, piecing together the ice cap’s deterioration by spying on the advancing ice with remote sensing technology from a constellation of satellites operated by DigitalGlobe Inc, headquartered in Westminster, Colorado. The project also relied on support from the National Science Foundation and the National Geospatial-Intelligence Agency, which funded the development of high-resolution topographic maps of the Arctic.

By satellite, they watched ice on the cap creep slowly forward for several years before it began to accelerate in 2010 and then surged rapidly forward in 2015. The initial very slow advance is thought to have been caused by a shift in the direction of precipitation that occurred about 500 years ago: before that time, snow and rain came from the southeast; afterward, they came from the southwest. As the western part of the ice cap advanced into the ocean, the ice surged forward.

“Cold” ice caps, like Vavilov, occur in polar “deserts” with very little precipitation, and they’re normally frozen to their beds, flowing only due to bending of the ice under the force of gravity. With beds above sea level, they are normally insulated from the kinds of changes that have hit glaciers in less frigid regions: melting from below by warm sea water, for example, or sliding faster when warm surface meltwater drains to the bed of the ice.

Researchers suspect the ice cap began to dramatically advance when the bottom of the ice cap became wetter and the front of the glacier advanced onto very slippery marine sediments. The ice began to speed up, and friction caused some of the ice underneath the glacier to melt, which supplied more water to the bottom of the ice, reducing friction, which caused the ice to speed up, which in turn, again produced more water. Some of this water might have combined with clay underneath the glacier, reducing the friction beneath the glacier even further and allowing the truly extraordinary sliding speeds to occur.
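The runaway loop described above (faster sliding generates frictional meltwater, which lowers basal friction, which further increases sliding speed) can be caricatured in a few lines. This is a deliberately simple toy, not the authors' model; all parameter values are illustrative assumptions.

```python
# Toy positive-feedback loop for basal lubrication. Units are arbitrary.
def simulate(speed0=0.05, friction0=1.0, years=10,
             melt_per_speed=0.5, friction_drop=0.3, floor=0.01):
    speed, friction = speed0, friction0
    history = [speed]
    for _ in range(years):
        meltwater = melt_per_speed * speed               # frictional heating makes water
        friction = max(floor, friction - friction_drop * meltwater)  # water lubricates the bed
        speed = speed0 / friction                        # lower friction -> faster sliding
        history.append(speed)
    return history

h = simulate()
# Friction only decreases, so speed grows monotonically until friction
# hits its floor: the "tipping point" of a near-frictionless bed.
assert all(later >= earlier for earlier, later in zip(h, h[1:]))
```

The design point is simply that the loop is self-reinforcing: each pass through it leaves the bed more lubricated than before, which is why a small initial perturbation can end in extraordinary sliding speeds.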

By 2015, the sediments and rock at the bed beneath the ice had become so slippery that they could no longer resist the flow of the ice. It took just two years for the base of the ice cap to reach that tipping point, transforming into a well-lubricated, highly mobile, near-frictionless zone. The glacier continues to slide today at accelerated speeds of 5-10 meters per day.

The Vavilov Ice Cap thinned by a total of a few meters, advanced about 2 km, and lost about 1.2 km3 in total volume into the ocean in the 30 years before the speedup. In the one year between 2015 and 2016, the ice advanced about 4 kilometers and thinned by about 100 meters (~0.3 m per day). The ice cap lost about 4.5 km3 of ice, enough to cover Manhattan with about 250 feet of water, or the entire state of Washington with an inch. And it’s unlikely the ice cap will ever be able to recover ice mass in today’s warming climate, the paper states.

Many scientists have assumed that polar ice caps that sit above sea level will only respond slowly to a warming climate — but the authors of this study urge that this assumption be questioned. The rapid collapse of the Vavilov Ice Cap has significant ramifications for glaciers in other polar regions, especially those fringing Antarctica and Greenland.

“We’ve never seen anything like this before; this study has raised as many questions as it has answered,” said Willis. “And we’re now working on modeling the whole situation to get a better handle on the physics involved.”

Author: Katie Weeman | Source: University of Colorado at Boulder [September 19, 2018]



Searching for errors in the quantum world

There is likely no other scientific theory that is as well supported as quantum mechanics. For nearly 100 years now, it has repeatedly been confirmed with highly precise experiments, yet physicists still aren’t entirely happy. Although quantum mechanics describes events at the microscopic level very accurately, it comes up against its limits with larger objects – especially objects for which the force of gravity plays a role. Quantum mechanics can’t describe the behaviour of planets, for instance, which remains the domain of the general theory of relativity. This theory, in turn, can’t correctly describe small-scale processes. Many physicists therefore dream of combining quantum mechanics with the theory of relativity to form a coherent worldview.

What does a physicist see when he examines a quantum object? The same as someone
observing the physicist – or just the opposite? [Credit: Philip Burli/Visualeyes International]

Toward larger objects

But how is it possible to combine two theories that, despite both describing the physical processes in their domains very accurately, differ so greatly? One possibility is to conduct quantum physics experiments with increasingly larger objects in the hope that discrepancies will eventually appear that point to possible solutions. But physicists must work within tight constraints. The famous double-slit experiment, for instance, which can be used to show that solid particles simultaneously behave like waves, can’t be performed with everyday objects.

Thought experiments, on the other hand, can be used to transcend the boundaries of the macroscopic world. That’s exactly what Renato Renner, Professor of Theoretical Physics, and his former doctoral student Daniela Frauchiger have now done in a publication that appears today in the journal Nature Communications. Roughly speaking, in their thought experiment the two consider a hypothetical physicist examining a quantum mechanical object, and then use quantum mechanics to calculate what that physicist will observe. According to our current worldview, this indirect observation should yield the same result as direct observation, yet the pair’s calculations show that precisely this is not the case: the prediction of what the physicist will observe is exactly the opposite of what would be measured directly, creating a paradoxical situation.

No simple solutions

Although the thought experiment is only now being officially published in a scientific journal, it has already become a topic of discussion among experts. As the publication process was repeatedly delayed, various other publications are already addressing the findings – itself a paradoxical situation, Renner notes.

The most common initial reaction of his colleagues in the field is to question the calculations, Renner says, but so far, no one has managed to disprove them. One reviewer conceded that he had meanwhile made five attempts to find an error in the calculations – without success. Other colleagues presented concrete explanations as to how the paradox can be resolved. Upon closer inspection, though, they always turned out to be ad hoc solutions that don’t actually fix the problem.

Perplexing conclusions

Renner finds it remarkable that the issue evidently polarises people. He was surprised to note that some of his colleagues reacted very emotionally to his findings – probably because the two obvious conclusions from Renner’s and Frauchiger’s findings are equally perplexing. One explanation is that quantum mechanics is apparently not, as was previously thought, universally applicable, and thus can’t be applied to large objects. But how is it possible for a theory to be inconsistent when it has repeatedly been so clearly confirmed by experiments? The other explanation is that it is evidently not only politics that suffers from a lack of clear facts, but also physics, and that there are other possibilities besides what we deem to be true.

Renner has difficulties with both of these interpretations. He rather believes that the paradox will be resolved in some other way: “When we look back at history, at moments like this, the solution often came from an unexpected direction,” he explains. The general theory of relativity, for instance, which solved contradictions in Newtonian physics, is based on the realisation that the concept of time as it was commonly understood back then was wrong.

“Our job now is to examine whether our thought experiment assumes things that shouldn’t be assumed in that form,” Renner says, “and who knows, perhaps we will even have to revise our concept of space and time once again.” For Renner, that would definitely be an appealing option: “It’s only when we fundamentally rethink existing theories that we gain deeper insights into how nature really works.”

Source: ETH Zurich [September 18, 2018]



Study links natural climate oscillations in north Atlantic to Greenland ice sheet melt

Scientists have known for years that warming global climate is melting the Greenland Ice Sheet, the second largest ice sheet in the world. A new study from the Woods Hole Oceanographic Institution (WHOI), however, shows that the rate of melting might be temporarily increased or decreased by two existing climate patterns: the North Atlantic Oscillation (NAO), and the Atlantic Multidecadal Oscillation (AMO).

Scientists stand on the edge of a crevasse formed by meltwater flowing across the top of the Greenland Ice Sheet
during a WHOI-led expedition in 2007 [Credit: Sarah Das, Woods Hole Oceanographic Institution]

Both patterns can have a major impact on regional climate. The NAO, which is measured as the atmospheric pressure difference between the Azores and Iceland, can affect the position and strength of the westerly storm track. The study, published in Geophysical Research Letters, found that when the NAO stays in its negative phase (meaning that air pressure is high over Greenland) it can trigger extreme ice melt in Greenland during the summer season. Likewise, the AMO, which alters sea surface temperatures in the North Atlantic, can cause major melting events when it is in its warm phase, raising the temperature of the region as a whole.

If global climate change continues at its current rate, the Greenland ice sheet may eventually melt entirely — but whether it meets this fate sooner rather than later could be determined by these two oscillations, says Caroline Ummenhofer, a climate scientist at WHOI and co-author on the study. Depending on how the AMO and NAO interact, excess melting could happen two decades earlier than expected this century, or two decades later.

“We know the Greenland ice sheet is melting in part because of warming climate, but that’s not a linear process,” Ummenhofer said. “There are periods where it will accelerate, and periods where it won’t.”

Scientists like Ummenhofer see a pressing need to understand how natural variability can play a role in speeding up or slowing down the melting process. “The consequences go beyond just the Greenland Ice Sheet — predicting climate on the scale of the next few decades will also be useful for resource management, city planners and other people who will need to adapt to those changes,” she added.

Actually forecasting environmental conditions on a decadal scale isn’t easy. The NAO can switch between positive and negative phases over the course of a few weeks, but the AMO can take more than 50 years to go through a full cycle. Since scientists first started tracking climate in the late 19th century, only a handful of AMO cycles have been recorded, making it extremely difficult to identify reliable patterns. To complicate things even more, the WHOI scientists needed to tease out how much of the melting effect is caused by human-related climate change, and how much can be attributed to the AMO and NAO.

To do so, the team relied on data from the Community Earth System Model’s Large Ensemble, a massive set of climate model simulations at the National Center for Atmospheric Research. From that starting point, the researchers looked at 40 different iterations of the model covering 180 years over the 20th and 21st century, with each one using slightly different starting conditions.

Although the simulations all included identical human factors, such as the rise of greenhouse gases over two centuries, they used different conditions at the start — a particularly cold winter, for example, or a powerful Atlantic storm season — that led to distinct variability in the results. The team could then compare those results to each other and statistically remove the effects caused by climate change, letting them isolate the effects of the AMO and NAO.
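The statistical step described above can be sketched directly: because the forced (human-driven) signal is shared by all ensemble members, subtracting the ensemble mean at each time step leaves only the internal variability, the AMO/NAO-like wiggles. The synthetic data below stands in for the CESM Large Ensemble output; 40 members and 180 years match the numbers in the article, but the trend and noise values are made up.

```python
import random

random.seed(0)
n_members, n_years = 40, 180
forced = [0.01 * t for t in range(n_years)]   # shared warming trend (illustrative)

# Each member = forced signal + its own internal "weather" noise.
ensemble = [[forced[t] + random.gauss(0, 0.2) for t in range(n_years)]
            for _ in range(n_members)]

# Ensemble mean at each time step approximates the forced response alone.
ens_mean = [sum(m[t] for m in ensemble) / n_members for t in range(n_years)]

# Subtracting it from each member isolates the internal variability.
internal = [[m[t] - ens_mean[t] for t in range(n_years)] for m in ensemble]

# Sanity check: residuals average to ~zero at every time step by construction.
assert all(abs(sum(m[t] for m in internal)) < 1e-9 for t in range(n_years))
```

In the real analysis the residual fields are then compared against AMO and NAO indices; this sketch shows only the separation of forced signal from internal noise.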

“Using a large ensemble of model output gave more statistical robustness to our findings,” said Lily Hahn, the paper’s lead author. “It provided many more data points than a single model run or observations alone. That’s very helpful when you’re trying to investigate something as complex as atmosphere-ocean-ice interactions.”

Source: Woods Hole Oceanographic Institution [September 18, 2018]



Global trade in exotic pets threatens endangered parrots through the spread of a virus

Beak and feather disease virus (BFDV) in wild parrot populations has been detected in eight new countries, raising concerns for threatened species.

This is a juvenile CITES Endangered Mauritius ‘Echo’ Parakeet (Psittacula eques) displaying
severe symptoms of PBFD including feather dystrophy [Credit: Deborah Fogell]

The new countries where BFDV was found (Bangladesh, Pakistan, Japan, Nigeria, Seychelles, Vietnam, Senegal and The Gambia) were identified in a study led by Deborah Fogell at the University of Kent’s Durrell Institute of Conservation and Ecology (DICE), in collaboration with The World Parrot Trust, Zoological Society of London, Mauritian Wildlife Foundation, Seychelles Island Foundation and Vinh University.

The study highlights the need for greater awareness of the risks of the spread of infectious disease associated with the international trade in live parrots.

Parrots are among the most threatened bird groups and are susceptible to a number of infectious diseases. They are also among the most frequently traded birds listed by the Convention on International Trade in Endangered Species (CITES), and the trade has already driven the cross-border movement of over 19 million parrots since 1975.

This movement has aided the establishment of numerous parrot populations outside of their native distributions, most notably the highly invasive Rose-ringed parakeet which is now known to have breeding populations in over 35 countries across five continents.

BFDV, believed to have originated in Australasia, is a well-known cause of infectious disease in captive parrots. Affected birds can develop feather abnormalities, claw and beak deformities and the disease may lead to eventual death, particularly in juveniles.

The first detection of BFDV in wild parrots native to southern and Southeast Asia and western Africa in this study highlights the need for further research in these regions and may have implications for the conservation of vulnerable species that also exist there.

This study indicates that there are very close relationships between genetic sequences from wild populations across globally distinct regions and that there have been multiple introduction events to western Africa.

Deborah Fogell said: ‘The successful establishment of invasive species like Rose-ringed parakeets can be devastating to small island populations or threatened species. Not only through competition for resources, but by exposing them to a virus like BFDV which may pose an important additional threat to species that are already suffering the pressures of low genetic diversity and habitat loss’.

The study is published in Conservation Biology.

Author: Sandy Fleming | Source: University of Kent [September 18, 2018]



DNA tests of illegal ivory link multiple ivory shipments to same dealers

The international trade in elephant ivory has been illegal since 1989, yet African elephant numbers continue to decline. In 2016, the International Union for Conservation of Nature cited ivory poaching as a primary reason for a staggering loss of about 111,000 elephants between 2005 and 2015 – leaving their total numbers at an estimated 415,000.

Tusks from an ivory seizure in 2015 in Singapore after they have been sorted into pairs by the process developed 
by Wasser and his team [Credit: Center for Conservation Biology/University of Washington]

In a paper published in the journal Science Advances, an international team led by scientists at the University of Washington reports that DNA test results of large ivory seizures made by law enforcement have linked multiple ivory shipments over the three-year period when this trafficking reached its peak to the same network of dealers operating out of a handful of African ports. The researchers linked these ivory shipments together after developing a rigorous sorting and DNA testing regimen for tusks in different ivory shipments. This method allowed the scientists to identify tusk pairs that had been separated and shipped in different consignments to different destinations around the world — yet had been shipped out of the same port, nearly always within 10 months of each other, with high overlap in the geographic origins of tusks in the matching shipments.

“Our prior work on DNA testing of illegal ivory shipments showed that the major elephant ‘poaching hotspots’ in Africa were relatively few in number,” said lead and corresponding author Samuel Wasser, director of the UW Center for Conservation Biology and a professor of biology. “Now, we’ve shown that the number and location of the major networks smuggling these large shipments of ivory out of Africa are also relatively few.”

By using DNA testing to match tusk pairs smuggled in separate consignments, Wasser and his team are able to link multiple
 ivory shipments to the same smugglers. Each map indicates separate shipments, with the location, date and weight of the 
seizure shown in the lower left. The blue circles show the geographic origins of the tusks based on genetic analysis; 
origins of a small number of poached elephant corpses matched to tusks are shown in open red circles. Linked pairs 
are shown by double-headed arrows, with thickness indicating the number of pairs. Shipments labelled “1” 
or “2” are linked by other lines of evidence [Credit: Wasser et al. 2018/Science Advances]

Using this protocol, the team identified what appear to be the three largest ivory smuggling cartels in Africa, operating out of Mombasa, Kenya; Entebbe, Uganda; and Lomé, Togo. Out of 38 large ivory shipments analyzed, the team was able to link 11 of these shipments together by identifying tusk pairs that had been separated after poaching, yet shipped out of the same port during the 2011-2014 period when trafficking was at its peak.

Large shipments currently dominate the illegal ivory trade. About 70 percent of ivory seizures between 1996 and 2011 were in large consignments of at least half a metric ton, or about 0.55 U.S. tons, according to a 2013 study in PLOS ONE. Linking multiple large ivory shipments to the same smuggling networks will help build evidence against the cartels that are responsible for the bulk of illegal ivory trade and shipment, Wasser said. These efforts could add multiple counts of trafficking charges against the leaders of smuggling operations, who most often are tried for single, high-profile and occasionally controversial events; the recent acquittal of Feisal Mohamed Ali in Kenya being a case in point.

African elephants examine a bone from a fellow elephant [Credit: Karl Ammann]

“We reveal connections between what would otherwise be isolated ivory seizures — linking seizures not just to specific criminal networks operating in these ports, but to poaching and transport networks that funnel the tusks hundreds of miles to these cartels,” said Wasser. “It is an investigative tool to help officials track these networks and collect evidence for criminal cases.”

Wasser and his team had previously developed DNA testing of large ivory shipments to identify what populations of African elephants were most targeted by poachers. For this endeavor, they created a “genetic reference map” of elephant populations across Africa, using DNA samples extracted primarily from elephant dung. Then, the team sampled ivory from elephant tusks seized by law enforcement officials and extracted DNA from them. The researchers matched key regions in the ivory DNA samples to the genetic reference map, which let them identify the region that the elephant had come from, often to within about 300 kilometers, or about 186 miles. In a 2015 paper published in the journal Science, they announced that the bulk of seized tusks came from two “poaching hotspots” on the continent based on these DNA analyses.

Wasser (left) and his team sort tusks from a seizure in Singapore in 2015 and use saws to cut away 
ivory samples for subsequent DNA extraction and genetic analysis [Credit: Kate Brooks]

While conducting those analyses, Wasser and his team developed a protocol to representatively subsample hundreds of tusks as efficiently as possible.

“We have neither the time nor the money to collect samples and extract DNA from every tusk in a shipment,” said Wasser. “We needed to find a way to sample only a fraction of the tusks in a shipment, but that method also needed to let us get a glimpse at the diversity of poached elephants within that shipment.”

In each large ivory seizure, they would identify pairs by sorting tusks by the diameter of the base, color, and gum line, which indicates where the lip rested on the tusk. This allowed the researchers to extract DNA from only one tusk in each pair. Using this sorting approach, Wasser and his team noticed that many tusks in large shipments were orphans — the partner tusk was not present. But by comparing DNA samples from tusks among 38 large ivory consignments confiscated from 2011 to 2014, they matched up 26 pairs of tusks among 11 shipments, even though they were testing, on average, only about one-third of the tusks in each seizure.
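The pair-sorting step can be sketched as a simple grouping operation. This is an illustrative toy version: the field names, binning tolerances, and sample measurements are invented for the example, not taken from the study.

```python
from collections import defaultdict

# Group seized tusks by physical features (base diameter, color, gum line)
# so that only one tusk per likely pair needs DNA extraction.
def find_likely_pairs(tusks):
    """tusks: list of dicts with 'id', 'base_cm', 'color', 'gumline_cm'.
    Returns (pairs, orphans), where a pair is two tusks whose features
    match after binning measurements to the nearest centimeter."""
    groups = defaultdict(list)
    for t in tusks:
        key = (round(t["base_cm"]), t["color"], round(t["gumline_cm"]))
        groups[key].append(t["id"])
    pairs, orphans = [], []
    for ids in groups.values():
        while len(ids) >= 2:
            pairs.append((ids.pop(), ids.pop()))
        orphans.extend(ids)  # tusks with no matching partner in this seizure
    return pairs, orphans

seizure = [
    {"id": "T1", "base_cm": 12.2, "color": "cream", "gumline_cm": 30.1},
    {"id": "T2", "base_cm": 12.4, "color": "cream", "gumline_cm": 29.8},
    {"id": "T3", "base_cm": 15.0, "color": "brown", "gumline_cm": 41.0},
]
pairs, orphans = find_likely_pairs(seizure)
print(pairs, orphans)  # T1/T2 pair up; T3 is an orphan
```

Orphan tusks like T3 are the ones whose DNA can later be matched against other seizures to link shipments to the same network.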

“There is so much information in an ivory seizure — so much more than what a traditional investigation can uncover,” said Wasser. “Not only can we identify the geographic origins of the poached elephants and the number of populations represented in a seizure, but we can use the same genetic tools to link different seizures to the same underlying criminal network.”

Author: James Urton | Source: University of Washington [September 20, 2018]



Scientists predict extinction risk for hard-to-track species

Species are going extinct all over the world: Scientists believe that Earth is losing between 200 and 2,000 species every year. That number is squishy, partly because there are so many species for which they lack good data—particularly those living in the oceans, which are difficult to track but still critically important to ecosystems and livelihoods. Even the most comprehensive evaluation of extinction risk—the international Red List of Threatened Species—has only spotty data for many species around the globe.

Scientists predict extinction risk for hard-to-track species
This map shows global extinction risk for modern marine bivalves. Warm colors are high risk, cold colors are low risk.
Clockwise from top left: Pitar rudis, Spondylus gaederopus, Tridacna squamosa, Spondylus tenellus, Gloripallium
pallium, Nemocardium enigmaticum, Hysteroconcha lupanaria, Euvola hancocki, Strigilla carnaria,
Adamussium colbecki [Credit: Katie Collins]

A new study published in the Proceedings of the Royal Society B: Biological Sciences offers a tool to predict extinctions for hard-to-count species. The researchers' method takes advantage of the fact that while some species are hard to monitor while alive, many of them leave extensive fossil records.

“Today’s extinction work tends to focus on animals like us—mammals and other vertebrates that live on land,” said Katie Collins, a postdoctoral researcher at the University of Chicago and first author on the paper. “But most things that live on the planet don’t have a backbone, and a huge part of the world’s biodiversity lives in the sea, where our picture is really incomplete. It’s much harder to get a handle on extinction there.”

Collins and David Jablonski, the William R. Kenan Jr. Distinguished Service Professor of Geophysical Sciences, along with a team of researchers at UChicago, the Smithsonian and the University of California, San Diego, wanted a way to estimate extinction risk for species that lack directly measured data.

A key part of the puzzle came when they realized that the fossil record could help. Jablonski has been building a database of fossils of marine bivalves—creatures like scallops, mussels and oysters—for many years. That database allowed them to review the history of extinctions and develop a set of predictors about which species are most likely to go extinct.

“Fossils give you a bird’s-eye view of lineages. You can see the first and last occurrences for different species, and it can also tell you how often these lineages split into new species, how often they go extinct, and where they were when they did,” Collins said.

How likely a species is to go extinct depends on a lot of factors, but there are a few key predictors. One is range size. If the species can only live in a small part of the Gulf of Mexico, a single oil spill could wipe out the entire population. Temperature tolerance matters too: If it can survive under a wider range of temperatures and conditions, its chances are better. “Some widespread species can also handle wide temperature changes, but a huge number of warm-water species are actually tracking a narrow band of temperatures,” Jablonski said. “That means that a geographically widespread species can still get clobbered if the temperature changes beyond its ability to cope.”

They built these predictors into a metric they called PERIL, or Paleontological Extinction Risk In Lineages. The first step was to test it, by winding the clock back two and a half million years, to the end of the Pliocene epoch. They had the tool “predict” the fates of species living in two widely separated ecosystems: off the coasts of California and New Zealand. The result: “It does a very good job of predicting who’s going to live and who’s going to die out,” Collins said.
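As a rough illustration of how predictors like geographic range and thermal tolerance might feed into a single risk index, here is a toy score. The weighting, scaling constants, and example values below are assumptions for illustration only; the published PERIL metric is calibrated against the fossil record, not hand-weighted like this.

```python
# A toy PERIL-style score: combine geographic range size and thermal
# breadth into a relative extinction-risk index in [0, 1].
def risk_score(range_km2: float, thermal_breadth_c: float,
               max_range_km2: float = 1e8, max_breadth_c: float = 30.0) -> float:
    """Smaller range and narrower temperature tolerance both push
    the score toward 1 (higher risk). Scaling caps are assumptions."""
    range_risk = 1.0 - min(range_km2 / max_range_km2, 1.0)
    thermal_risk = 1.0 - min(thermal_breadth_c / max_breadth_c, 1.0)
    return 0.5 * range_risk + 0.5 * thermal_risk  # equal weights (assumption)

# A narrow-range, warm-water specialist vs. a widespread generalist
specialist = risk_score(range_km2=5e4, thermal_breadth_c=4.0)
generalist = risk_score(range_km2=6e7, thermal_breadth_c=25.0)
print(f"specialist: {specialist:.2f}, generalist: {generalist:.2f}")
```

The point of the sketch is the qualitative behavior Jablonski describes: a species tracking a narrow temperature band scores as vulnerable even if its range is moderately large.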

From there, they applied the metric to the present day, mapping out the oceans on a scale from high to low risk of extinction.

“This gives us a new, global picture of extinction risk in this economically important marine group, far beyond what’s available from the Red List,” said Jablonski. For example, Collins said, “There’s nearly 6,000 species of bivalves in the ocean; the Red List has only been able to assess 29 of them, and of those, 15 are marked ‘data-deficient.'”

A couple of hot zones jumped out immediately, the scientists said: The coast of Southeast Asia is precarious; so are areas in the Antarctic, the Caribbean and New Zealand. “There are some scary situations where key foodstock species live in very fragile areas,” Jablonski said.

Though they tried it with marine bivalves, the process could be repeated with any group of living things with a reasonable fossil record, the scientists said.

“Our goal is to produce a method that can be used alongside the Red List, and provide assistance for conservationists dividing up limited resources—where to get the biggest bang for your conservational buck, so to speak,” Jablonski said. “The PERIL metric is a new tool for pinpointing species and places that would benefit most from protection and management.”

Author: Louise Lerner | Source: University of Chicago [September 20, 2018]



Irregular Galaxy NGC 55

2018 September 21

Irregular Galaxy NGC 55
Image Credit & Copyright: Martin Pugh

Explanation: Irregular galaxy NGC 55 is thought to be similar to the Large Magellanic Cloud (LMC). But while the LMC is about 180,000 light-years away and is a well known satellite of our own Milky Way Galaxy, NGC 55 is more like 6 million light-years distant and is a member of the Sculptor Galaxy Group. Classified as an irregular galaxy, in deep exposures the LMC itself resembles a barred disk galaxy. Spanning about 50,000 light-years, NGC 55 is seen nearly edge-on though, presenting a flattened, narrow profile in contrast with our face-on view of the LMC. Just as large star forming regions create emission nebulae in the LMC, NGC 55 is also seen to be producing new stars. This highly detailed galaxy portrait highlights a bright core crossed with dust clouds, telltale pinkish star forming regions, and young blue star clusters in NGC 55.

∞ Source: apod.nasa.gov/apod/ap180921.html

NASA’s New Planet Hunter Reveals a Sky Full of Stars


NASA’s newest planet-hunting satellite — the Transiting Exoplanet Survey Satellite, or TESS for short — has just released its first science image using all of its cameras to capture a huge swath of the sky! TESS is NASA’s next step in the search for planets outside our solar system, called exoplanets.


This spectacular image, the first released using all four of TESS’ cameras, shows the satellite’s full field of view. It captures parts of a dozen constellations, from Capricornus (the Sea Goat) to Pictor (the Painter’s Easel) — though it might be hard to find familiar constellations among all these stars! The image even includes the Large and Small Magellanic Clouds, our galaxy’s two largest companion galaxies.

The science community calls this image “first light,” but don’t let that fool you — TESS has been seeing light since it launched in April. A first light image like this is released to show off the first science-quality image taken after a mission starts collecting science data, highlighting a spacecraft’s capabilities.


TESS has been busy since it launched from NASA’s Kennedy Space Center in Cape Canaveral, Florida. First TESS needed to get into position, which required a push from the Moon. After nearly a month in space, the satellite passed about 5,000 miles from the Moon, whose gravity gave it the boost it needed to get into a special orbit that will keep it stable and maximize its view of the sky.


During those first few weeks, we also got a sneak peek of the sky through one of TESS’s four cameras. This test image captured over 200,000 stars in just two seconds! The spacecraft was pointed toward the constellation Centaurus when it snapped this picture. The bright star Beta Centauri is visible at the lower left edge, and the edge of the Coalsack Nebula is in the right upper corner.


After settling into orbit, scientists ran a number of checks on TESS, including testing its ability to collect a set of stable images over a prolonged period of time. TESS not only proved its ability to perform this task, it also got a surprise! A comet named C/2018 N1 passed through TESS’s cameras for about 17 hours in July.

The images show a treasure trove of cosmic curiosities. There are some stars whose brightness changes over time and asteroids visible as small moving white dots. You can even see an arc of stray light from Mars, which is located outside the image, moving across the screen.


Now that TESS has settled into orbit and has been thoroughly tested, it’s digging into its main mission of finding planets around other stars. How will it spot something as tiny and faint as a planet trillions of miles away? The trick is to look at the star!

So far, most of the exoplanets we’ve found were detected by looking for tiny dips in the brightness of their host stars. These dips are caused by the planet passing between us and its star – an event called a transit. Over its first two years, TESS will stare at 200,000 of the nearest and brightest stars in the sky, looking for transits to identify stars with planets.
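The transit method can be illustrated with a toy light-curve experiment: inject a fake planet signal into synthetic stellar brightness data, fold the data at the planet’s period, and look for the dip. This is not TESS’s actual pipeline (which uses far more sophisticated detrending and box least squares searches), and every number here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 27.0, 0.01)              # ~27 days of observations (days)
flux = 1.0 + rng.normal(0, 2e-4, t.size)    # flat star + photometric noise

# Inject a transiting planet: 3.5-day period, 0.1% depth, ~2.4-hour duration
period, depth, duration = 3.5, 1e-3, 0.1
in_transit = (t % period) < duration
flux[in_transit] -= depth                   # planet blocks ~0.1% of the light

# Fold the light curve at the trial period and bin it to reveal the dip
phase = t % period
bins = np.linspace(0, period, 50)
idx = np.digitize(phase, bins)
binned = np.array([flux[idx == i].mean() for i in range(1, len(bins))])

dip = binned.min()
print(f"Deepest binned dip: {(1 - dip) * 1e3:.2f} parts per thousand")
```

Folding aligns every transit on top of each other, so a dip far too shallow to see in any single measurement stands out clearly against the averaged-down noise — the same principle that lets TESS find planets around its 200,000 target stars.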


TESS will be building on the legacy of NASA’s Kepler spacecraft, which also used transits to find exoplanets. TESS’s target stars are about 10 times closer than Kepler’s, so they’ll tend to be brighter. Because they’re closer and brighter, TESS’s target stars will be ideal candidates for follow-up studies with current and future observatories.


TESS is challenging over 200,000 of our stellar neighbors to a staring contest! Who knows what amazing new planets we’ll find?


The TESS mission is led by MIT and came together with the help of many different partners. You can keep up with the latest from the TESS mission by following mission updates.


Be sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com

Anatolian Hunter-Gatherer: not very different from later Neolithic Anatolian farmers...

Over at bioRxiv at this LINK. I’ll update this blog post later today. Here’s the abstract:

Anatolia was home to some of the earliest farming communities. It has been long debated whether a migration of farming groups introduced agriculture to central Anatolia. Here, we report the first genome-wide data from a 15,000 year-old Anatolian hunter-gatherer and from seven Anatolian and Levantine early farmers. We find high genetic continuity between the hunter-gatherer and early farmers of Anatolia and detect two distinct incoming ancestries: an early Iranian/Caucasus related one and a later one linked to the ancient Levant. Finally, we observe a genetic link between southern Europe and the Near East predating 15,000 years ago that extends to central Europe during the post-last-glacial maximum period. Our results suggest a limited role of human migration in the emergence of agriculture in central Anatolia.

Feldman et al., Late Pleistocene human genome suggests a local origin for the first farmers of central Anatolia, bioRxiv, posted September 20, 2018, doi: https://doi.org/10.1101/422295

NASA-funded ELFIN To Study How Electrons Get Lost

NASA logo.

September 20, 2018

Three hundred and ten miles above our planet’s surface, near-Earth space is abuzz with action. Here begin the Van Allen Belts, a pair of concentric rings of fast-moving particles and intense radiation that extends more than 30,000 miles farther into space. For the most part these particles are confined to this special region, spiraling along Earth’s magnetic field lines. But sometimes they come too close and crash into our atmosphere — creating the eye-catching diffuse red aurora, but also potentially interfering with critical communications and GPS satellites that we depend on every day.

Image above: An artist’s depiction of the Van Allen Belts, showing Earth’s magnetic field lines and the trajectories of charged particles trapped by them. The twin ELFIN spacecraft are shown following their inclined polar orbit, traced in yellow. Image Credits: UCLA EPSS/NASA SVS.

A new CubeSat mission called the Electron Losses and Fields Investigation, or ELFIN, will study one of the processes that allow energetic electrons to escape the Van Allen Belts and fall into Earth’s atmosphere. ELFIN was launched from Vandenberg Air Force Base in California on Sept. 15, 2018.

When magnetic storms form in near-Earth space, they create waves that jiggle Earth’s magnetic field lines, kicking electrons out of the Van Allen Belts and down into our atmosphere. ELFIN aims to be the first to simultaneously observe this electron precipitation while also verifying the causal mechanism, measuring the magnetic waves and the resulting “lost” electrons.

UCLA sends student-built satellite into space

Video above: During the last five years nearly 250 students have spent thousands of hours designing and building ELFIN, more formally the Electron Losses and Fields Investigation CubeSat. Video Credits: UCLA.

Funded by NASA, the National Science Foundation and industry partners, ELFIN is a CubeSat mission. CubeSats are small, lightweight satellites built from standardized 10-by-10-by-10 centimeter units that are comparatively quick to develop and cost a fraction of what larger satellite missions do. ELFIN uses two identical 3U (three-unit) CubeSats — both about the size of a loaf of bread. By using two satellites instead of one, ELFIN will be able to measure how the precipitating electrons vary across space and time. Designed, built and tested by a team of 250 UCLA students over five years, ELFIN will be the first satellite developed, managed and operated entirely by UCLA. A key advantage of CubeSats is that they offer an inexpensive means to engage students in all phases of satellite development, operation and exploitation through real-world, hands-on research and development experience.

Image above: The twin ELFIN CubeSats. Image Credits: UCLA EPSS.

Small satellites, including CubeSats, are playing an increasingly larger role in exploration, technology demonstration, scientific research and educational investigations at NASA. These miniature satellites provide a low-cost platform for NASA missions, including planetary space exploration; Earth observations; fundamental Earth and space science; and developing precursor science instruments like cutting-edge laser communications, satellite-to-satellite communications and autonomous movement capabilities.

Related article: 

NASA, ULA Launch Mission to Track Earth’s Changing Ice

Small Satellite Missions: http://www.nasa.gov/mission_pages/smallsats

CubeSats: http://www.nasa.gov/cubesats/

Images (mentioned), Video (mentioned), Text, Credits: NASA/Rob Garner/Goddard Space Flight Center, by Miles Hatfield.

Greetings, Orbiter.chArchive link

Launch Slips One Day as Station Boosts Orbit and Life Science Continues

ISS – Expedition 56 Mission patch.

September 20, 2018

The launch of a Japanese resupply ship to the International Space Station was postponed until Saturday. Meanwhile, the Expedition 56 crew moved ahead with critical space research and orbital lab maintenance.

Inclement weather at the Tanegashima Space Center in Japan led managers at JAXA (Japan Aerospace Exploration Agency) to postpone the launch of its HTV-7 resupply ship by one day. The HTV-7 is now due to launch atop the H-IIB rocket Saturday at 1:52 p.m. EDT loaded with over five tons of cargo, including new science experiments and science hardware. Its arrival at the station is now planned for Thursday at 7:54 a.m.

Image above: Japan’s HTV-3 resupply ship launches aboard an H-IIB rocket from the Tanegashima Space Center in southern Japan on July 20, 2012, during Expedition 32. Image Credit: JAXA.

The station’s Zvezda service module fired its engines today, slightly boosting the space lab’s orbit. The reboost sets up a crew swap taking place next month when Expedition 57 begins. Three Expedition 56 crew members will depart on Oct. 4 and return to Earth inside the Soyuz MS-08 spacecraft. A new pair of Expedition 57 crew members will arrive aboard the Soyuz MS-10 crew ship to replace them on Oct. 11.

Astronauts Ricky Arnold and Serena Auñón-Chancellor conducted a variety of biomedical research today sponsored by scientists from around the world. The duo partnered up for ultrasound scans inside Europe’s Columbus lab module as doctors on the ground monitored in real-time. Arnold also worked throughout the day processing blood and urine samples inside the Human Research Facility’s centrifuge.

International Space Station (ISS). Image Credit: NASA

The biological sample work is supporting a pair of ongoing experiments observing the physiological changes to humans in space. The Repository study analyzes blood and urine samples collected from astronauts before, during and after a space mission. The Biochemical Profile study also researches these samples for markers of astronaut health.

Commander Drew Feustel and Flight Engineer Alexander Gerst worked throughout the orbital lab on housekeeping tasks. Feustel was in the Unity module installing computer network gear on an EXPRESS rack that can support multiple science experiments. Gerst relocated smoke detectors in the Tranquility module, then moved on to computer maintenance in the Destiny lab module.

Small Satellite Demonstrates Possible Solution for ‘Space Junk’. Image Credit: NASA

The International Space Station serves as humanity’s orbital research platform, hosting a variety of experiments and research projects as it circles the planet.

On June 20, 2018, the space station deployed the NanoRacks-Remove Debris satellite into space from outside the Japanese Kibo laboratory module. This technology demonstration was designed to explore using a 3D camera to map the location and speed of orbital debris or “space junk.”

The NanoRacks-Remove Debris satellite successfully deployed a net to capture a nanosatellite that simulates debris. Collisions in space could have serious consequences for the space station and satellites, but research has shown that removing the largest debris significantly reduces the chance of collisions.

Related links:

Expedition 56: https://www.nasa.gov/mission_pages/station/expeditions/expedition56/index.html

Expedition 57: https://www.nasa.gov/mission_pages/station/expeditions/expedition57/index.html

Science hardware: https://www.nasa.gov/centers/marshall/news/news/2017/nasa-international-partners-ready-new-research-facility-for-space-station.html

Human Research Facility: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Facility.html?#id=67

Repository: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=954

Biochemical Profile: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=980

EXPRESS: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Facility.html?#id=598

NanoRacks-Remove Debris: https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html#id=7350

Small Satellite Missions: http://www.nasa.gov/mission_pages/smallsats

CubeSats: http://www.nasa.gov/cubesats/

NASA TV: https://www.nasa.gov/nasatv

Space Station Research and Technology: https://www.nasa.gov/mission_pages/station/research/index.html

International Space Station (ISS): https://www.nasa.gov/mission_pages/station/main/index.html

Images (mentioned), Text, Credits: NASA/Mark Garcia/Yvette Smith.

Best regards, Orbiter.chArchive link

Closest planet ever discovered outside solar system could be habitable with a dayside...

In August of 2016, astronomers from the European Southern Observatory (ESO) confirmed the existence of an Earth-like planet around Proxima Centauri – the closest star to our solar system. In addition, they confirmed that this planet (Proxima b) orbited within its star’s habitable zone. Since then, multiple studies have been conducted to determine if Proxima b could in fact be habitable.

Closest planet ever discovered outside solar system could be habitable with a dayside ocean
Artist’s conception of the surface of Proxima Centauri b. The Alpha Centauri binary system can be seen
in the background, to the upper right of Proxima [Credit: ESO/M. Kornmesser]

Unfortunately, most of this research has not been very encouraging. For instance, many studies have indicated that Proxima b’s sun experiences too much flare activity for the planet to sustain an atmosphere and liquid water on its surface. However, in a new NASA-led study, a team of scientists has investigated various climate scenarios that indicate that Proxima b could still have enough water to support life.

The study, titled “Habitable Climate Scenarios for Proxima Centauri b with a Dynamic Ocean,” recently appeared in the scientific journal Astrobiology. The study was led by Anthony D. Del Genio of NASA’s Goddard Institute for Space Studies (GISS) and included members from the NASA Goddard Space Flight Center (GSFC), Columbia University, and Trinnovim LLC – an IT company that provides institutional and mission support for the GSFC.

To break it down, planets like Proxima b – which orbit M-type (red dwarf) stars – face a lot of challenges when it comes to habitability. For one, its close orbit to its star would have likely led to a runaway greenhouse effect early in its history. It would also be subject to intense radiation (X-ray and extreme ultraviolet fluxes) and solar wind – which would lead to catastrophic atmospheric and water loss.

However, there is a lot we don’t know about Proxima b’s evolutionary history, and there are scenarios in which habitability could be a possibility. As Anthony D. Del Genio told Universe Today via email:

Closest planet ever discovered outside solar system could be habitable with a dayside ocean
Artist’s impression of Proxima b, which was discovered using the Radial Velocity method
[Credit: ESO/M. Kornmesser]

“First and foremost, we don’t know whether Prox b even has an atmosphere, and if it does, whether it has any water. Without those, life as we know it cannot exist. It could be that Prox b formed initially with no atmosphere, or that it formed with an atmosphere but in a stellar system that was water-poor. Or it could have formed with a modest atmosphere and lots of water. Or it could have formed with a very thick atmosphere. We just don’t know yet.

“Second, Proxima Centauri is an M star, or ‘red dwarf.’ These stars are much smaller and cooler than our sun, so a planet has to be very close to such a star for it to receive enough starlight to have a habitable climate. The problem with that is that M stars tend to be very active, throughout their lifetimes.”

“Third, in their early lives, M stars are very bright and hot, meaning that if Prox b started out habitable, it might have heated up and lost its water early on, before life had a chance to take hold.”
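A quick inverse-square calculation shows why a planet must huddle so close to a red dwarf to stay warm. The luminosity, orbital distance, and albedo below are approximate literature values for the Proxima system, not figures from this article, and the equilibrium temperature ignores any greenhouse warming from an atmosphere.

```python
# Instellation and equilibrium temperature for Proxima b (illustrative).
L_STAR = 0.0017    # Proxima Centauri luminosity, solar units (approx.)
A_ORBIT = 0.0485   # Proxima b semi-major axis, AU (approx.)
ALBEDO = 0.3       # assumed Earth-like Bond albedo
T_EQ_EARTH = 255.0 # Earth's equilibrium temperature (K) at albedo 0.3

# Flux at the planet relative to Earth's insolation (inverse-square law)
s_rel = L_STAR / A_ORBIT**2

# Equilibrium temperature scales as the fourth root of absorbed flux
t_eq = T_EQ_EARTH * (s_rel * (1 - ALBEDO) / (1 - 0.3)) ** 0.25

print(f"Relative instellation: {s_rel:.2f} x Earth")
print(f"Equilibrium temperature: {t_eq:.0f} K")
```

Despite orbiting at only about 5% of the Earth–Sun distance, Proxima b receives roughly 70% of Earth’s starlight from its dim star — close enough for habitability, but also close enough to be raked by the flares described below.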

Flare activity is an especially big concern when it comes to Proxima Centauri, which is variable and unstable even by red dwarf standards. In fact, in recent years, two particularly powerful flares have been spotted coming from the system. The second was so powerful that it could be seen with the naked eye, which indicates that any planet that orbits Proxima Centauri would have its atmosphere stripped away over time.

Closest planet ever discovered outside solar system could be habitable with a dayside ocean
Artist’s impression of a flaring red dwarf star, orbited by an exoplanet
[Credit: NASA, ESA, and G. Bacon (STScI)]

However, as they indicate in their study, there are many possible scenarios in which Proxima b could still support life. What’s more, there is a range of uncertainty when it comes to the things that are hostile to life that could provide Proxima b with some wiggle-room. According to Del Genio, these include the possibility that Proxima b formed farther away from its star and gradually migrated inward, which would mean it was not subject to early harsh conditions.

Second, it might have formed with ten times the water that Earth did; so even if Proxima Centauri’s harsh radiation stripped away 90% of its water, it would still have enough to host an ocean. It also could have formed with a thick hydrogen envelope, which could have been stripped away, leaving behind a “habitable core” of an atmosphere.

“We just don’t know,” said Del Genio. “Thus, to provide reference points for future observers, we imagine that it does have an atmosphere and water, and we ask, given the star it orbits and the distance from that star, how easy or difficult is it to imagine an atmosphere and ocean that together could produce habitable conditions at the surface (defined as warm enough to sustain liquid water but not so warm as to evaporate it all).”

To address these possibilities, Del Genio and his colleagues conducted a series of 3-D simulations using the Resolving Orbital and Climate Keys of Earth and Extraterrestrial Environments with Dynamics (ROCKE-3D) software. As a planetary adaptation of the NASA GISS Model E2 Earth global climate modeling software, ROCKE-3D has been used to simulate past and future periods in Earth’s history and a potentially habitable ancient Venus.

Closest planet ever discovered outside solar system could be habitable with a dayside ocean
This infographic compares the orbit of the planet around Proxima Centauri (Proxima b)
with the same region of the Solar System [Credit: Pale Red Dot]

Using this software, the team modeled a range of different types of potential atmospheres for Prox b, including an Earth-like atmosphere (dominated by nitrogen with small amounts of CO2 to warm the planet) and a more Mars-like atmosphere (pure CO2). They also considered whether its atmosphere would be thinner or thicker than Earth’s, its oceans more or less salty (as well as deeper or more shallow), and whether or not the ocean covered the entire planet.

Last, but not least, they considered whether the planet is tidally locked to its star or, like Mercury, sits in a 3:2 spin-orbit resonance – where the planet rotates three times on its axis for every two orbits it makes. As Del Genio explained:

“For each configuration that we imagine, we run a 3-D global climate model that is adapted from the Earth climate model that we use to project 21st Century warming due to the addition of greenhouse gases to the atmosphere by humans. The key feature of our climate model for this purpose is that we include a ‘dynamic’ ocean, i.e., an ocean that has currents that move warm water to cooler places. Previous studies of Prox b had used a ‘static’ ocean that warms and cools but does not move.”

From this, Del Genio and his colleagues found that every case they could think of produced a planet that had at least some surface liquid water. They also found that in the case of a tidally-locked planet, heat transport between the sun-facing side and dark side could also allow the whole planet to be habitable.

Closest planet ever discovered outside solar system could be habitable with a dayside ocean
Artist’s depiction of a watery exoplanet orbiting a distant red dwarf star. New research
indicates that Proxima b could be especially watery [Credit: CfA]

“So if it has an atmosphere and has water, Prox b has a pretty good chance to be habitable,” said Del Genio. “We also found that the ocean currents carried warm water from the dayside to the nightside, keeping parts of the nightside habitable even though they never see any light. And if the ocean is very salty, almost the entire planet could be covered by liquid, but with temperatures below the usual freezing point almost everywhere.”

For those who have been treated to a steady diet of bad news about Proxima b lately, this latest research is quite encouraging. Even though observations have shown that Proxima Centauri is variable and has produced some significant flares, there are still many scenarios in which Proxima b could still be habitable. Whether or not this is the case, however, will depend upon future observations. As Del Genio put it:

“Unfortunately, as viewed from Earth, Prox b does not seem to transit, which makes it harder to detect an atmosphere and tell what is in it. However, in the fairly near future, astronomers will be able to monitor the heat emitted to space by Prox b as it moves in its orbit. Our results show that it should be possible to distinguish a planet with an atmosphere from one without, and a thin cold atmosphere from a thick warm atmosphere.”

These findings could also extend to other rocky planets that orbit M-type (red dwarf) stars, which is even more encouraging. Given that these stars account for over 70% of the stars in the Milky Way galaxy alone, the likelihood that they support potentially habitable planets increases the odds of finding extraterrestrial life significantly.

Closest planet ever discovered outside solar system could be habitable with a dayside ocean
Artist’s impression of a habitable exoplanet orbiting a red dwarf star. The habitability of the planets
of red dwarf stars is conjectural [Credit: ESO/M. Kornmesser]

In the coming years, next-generation instruments are expected to play a major role in the detection and characterization of exoplanets. These include the James Webb Space Telescope (JWST), the Wide-Field Infrared Survey Telescope (WFIRST), and ground-based instruments like the Extremely Large Telescope (ELT) and the Giant Magellan Telescope (GMT). And you can bet some of their time will be dedicated to studying the closest exoplanet to Earth!

Author: Matt Williams | Source: Universe Today [September 17, 2018]




https://t.co/hvL60wwELQ — XissUFOtoday Space (@xufospace) August 3, 2021 A thirsty hedgehog enjoys fresh water after several days in the…