Saturday, March 2, 2019

NASA, SpaceX Launch First Flight Test of Space System Designed for Crew

SpaceX – Crew Dragon Demo-1 Mission patch.

March 2, 2019

Image above: Crowd gathers to watch as NASA and SpaceX make history by launching the first commercially-built and operated American crew spacecraft and rocket to the International Space Station. The SpaceX Crew Dragon spacecraft lifted off at 2:49 a.m. EST Saturday on the company’s Falcon 9 rocket at NASA’s Kennedy Space Center in Florida. Image Credit: NASA.

For the first time in history, a commercially-built and operated American crew spacecraft and rocket, which launched from American soil, is on its way to the International Space Station. The SpaceX Crew Dragon spacecraft lifted off at 2:49 a.m. EST Saturday on the company’s Falcon 9 rocket from Launch Complex 39A at NASA’s Kennedy Space Center in Florida.

SpaceX Demo-1: Falcon 9 launches Crew Dragon. Image Credit: NASA

“Today’s successful launch marks a new chapter in American excellence, getting us closer to once again flying American astronauts on American rockets from American soil,” said NASA Administrator Jim Bridenstine. “I proudly congratulate the SpaceX and NASA teams for this major milestone in our nation’s space history. This first launch of a space system designed for humans, and built and operated by a commercial company through a public-private partnership, is a revolutionary step on our path to get humans to the Moon, Mars and beyond.”

Known as Demo-1, SpaceX’s inaugural flight with NASA’s Commercial Crew Program is an important uncrewed mission designed to test the end-to-end capabilities of the new system. It brings the nation one step closer to the return of human launches to the space station from the United States for the first time since 2011 – the last space shuttle mission. Teams still have work to do after this flight to prepare the spacecraft to fly astronauts. The best way to advance the system design was to fly this spacecraft and uncover any other areas or integrated flight changes that might be required.

SpaceX Demo-1: Falcon 9 launches Crew Dragon & Falcon 9 first stage landing

The program demonstrates NASA’s commitment to investing in commercial companies through public-private partnerships and builds on the success of American companies, including SpaceX, already delivering cargo to the space station. Demo-1 is a critical step for NASA and SpaceX to demonstrate the ability to safely fly missions with NASA astronauts to the orbital laboratory.

“First a note of appreciation to the SpaceX team. It has been 17 years to get to this point, 2002 to now, and an incredible amount of hard work and sacrifice from a lot of people that got us to this point…I’d also like to express great appreciation for NASA,” said Elon Musk, CEO and lead designer at SpaceX. “SpaceX would not be here without NASA, without the incredible work that was done before SpaceX even started and without the support after SpaceX did start.”

The public-private partnership combines commercial companies’ unique, innovative approaches to human spaceflight and NASA’s decades-long experience in design, development and operations of a crew space system.

“We are watching history being made with the launch of the SpaceX Demo-1 mission,” said Steve Stich, launch manager and deputy manager of NASA’s Commercial Crew Program. “SpaceX and NASA teams have been working together for years, and now we are side-by-side in control rooms across the country for launch, in-orbit operations and, eventually, splashdown of the Crew Dragon right here off Florida’s coast.”

SpaceX controlled the launch of the Falcon 9 rocket from Kennedy’s Launch Control Center Firing Room 4, the former space shuttle control room, which SpaceX has leased as its primary launch control center. As Crew Dragon ascended into space, SpaceX commanded the Crew Dragon spacecraft from its mission control center in Hawthorne, California. NASA teams will monitor space station operations throughout the flight from Mission Control Center at the agency’s Johnson Space Center in Houston.

The SpaceX Crew Dragon spacecraft is on its way to the space station, with docking to the orbiting laboratory scheduled for 6:05 a.m. EST Sunday, March 3. Live coverage of the rendezvous and docking will air on NASA Television and the agency’s website beginning at 3:30 a.m. Coverage will resume at 8:30 a.m. with the hatch opening, followed at 10:45 a.m. with a crew welcoming ceremony.

Teams in the space station mission center at Johnson will monitor station crew members’ opening of the spacecraft hatch, entering Crew Dragon and unpacking the capsule.

Mission Objectives

All of the launch pad and vehicle hardware, the launch day operations, and the control teams and ground crews were exercised in preparation for the next flight, which will carry crew aboard. The mission and its testing continue once the Falcon 9 lifts off the pad.

During the spacecraft’s approach, in-orbit demonstrations will include rendezvous activities from a distance of up to 2.5 miles (4 kilometers), known as far field, and activities within one mile (1.6 kilometers), known as near field. As the spacecraft approaches the space station, it will demonstrate its automated control and maneuvering capabilities by reversing course and backing away from the station before the final docking sequence.

The docking phase, as well as the return and recovery of Crew Dragon, includes many first-time events that cannot be fully modeled on the ground and, thus, are critical to understanding the design and the system’s ability to support crew flights. Previous cargo Dragon vehicles have been attached to the space station after capture by the station’s robotic arm. The Crew Dragon will approach to dock using new sensor systems, new propulsion systems and the new international docking mechanism to attach to the station’s Harmony module forward port, fitted with a new international docking adapter. Astronauts installed the adapter during a spacewalk in August 2016, following its delivery to the station in the trunk of a SpaceX Dragon spacecraft on its ninth commercial resupply services mission.

For Demo-1, Crew Dragon is carrying more than 400 pounds of crew supplies and equipment to the space station and will return some critical research samples to Earth. A lifelike test device named Ripley also will travel on the Crew Dragon, outfitted with sensors to provide data on potential effects on humans traveling in Crew Dragon.

Artist’s view of  SpaceX Crew Dragon. Image Credit: SpaceX

For operational missions, Crew Dragon will be able to launch as many as four crew members and carry more than 220 pounds of cargo, enabling larger station crews, more time dedicated to research in the unique microgravity environment, and the return of more science to Earth.

The Crew Dragon is designed to stay docked to the station for up to 210 days, although the Crew Dragon used for this flight test will not have that capability. This spacecraft will remain docked to the space station for only five days, departing Friday, March 8. After undocking from the station, Crew Dragon will begin its descent to Earth. Live coverage of the undocking will air on NASA Television and the agency’s website beginning at 2 a.m., with deorbit and landing coverage resuming at 7:30 a.m.

Additional spacecraft mission objectives include a safe departure from the station, followed by a deorbit burn and parachute deployment to slow the spacecraft before splashdown in the Atlantic Ocean, off the Florida Space Coast. SpaceX’s recovery ship, Go Searcher, will retrieve Crew Dragon and transport it back to port. Teams will be closely monitoring the parachute system and entry control system operation, which have been changed from cargo Dragons to provide higher reliability for crew flights.

NASA and SpaceX will use data from Demo-1, along with planned upgrades and additional qualification testing, to further prepare for Demo-2, the crewed flight test that will carry NASA astronauts Bob Behnken and Doug Hurley to the International Space Station. NASA will validate the performance of SpaceX’s systems before putting crew on board for the Demo-2 flight, currently targeted for July.

NASA’s Commercial Crew Program is working with Boeing and SpaceX to design, build, test and operate safe, reliable and cost-effective human transportation systems to low-Earth orbit. Both companies are focused on test missions, including abort system demonstrations and crew flight tests, ahead of regularly flying crew missions to the space station. Both companies’ crewed flights will be the first times in history NASA has sent astronauts to space on systems owned, built, tested and operated by private companies.

Learn more about NASA’s Commercial Crew program at: https://www.nasa.gov/commercialcrew

Commercial Space: http://www.nasa.gov/exploration/commercial/index.html

International Space Station (ISS): https://www.nasa.gov/mission_pages/station/main/index.html

NASA Television: https://www.nasa.gov/live/

SpaceX: https://www.spacex.com/

Images (mentioned), Video, Text, Credits: NASA/Katherine Brown/Josh Finch/Stephanie Schierholz/KSC/Stephanie Martin/Marie Lewis/JSC/Kyle Herring/Dan Huot/NASA TV/SciNews.

Best regards, Orbiter.ch

Analysing Autism

The foundations of autism are thought to take hold during early brain development, but precisely when and where is hard to pinpoint. To narrow it down, researchers watched as stem cells – precursor cells that can develop into any cell type – derived from patients developed into brain cells (pictured, with specific cell types stained in red and green to help track progress). The cells grew both faster and in different patterns than those from healthy people at a very early stage of development. The team identified key characteristics of the stem cells, and were even able to trigger this autism-related mis-development in normal cells, demonstrating the initial steps of mastering the process. Autism in its many forms, from severe to very manageable, is relatively common, but its causes and treatments remain unclear. These findings might eventually help identify it earlier in life, when preventative interventions could potentially divert the disorder.

Written by Anthony Lewis

You can also follow BPoD on Instagram, Twitter and Facebook

Maykop: a multi-ethnic layer cake?

Let’s speculate about the linguistic affinities of the currently available ancient populations from the Caucasus and surrounds. I put together a series of outgroup f3-stats to help things along. They’re available for download here.

Georgian 0.258224
Abkhasian 0.257899
Latvian 0.257376
Swedish 0.257301
Turkish_Trabzon 0.256996
Basque_Spanish 0.256589
Chechen 0.256514
Icelandic 0.256418
Norwegian 0.256325
Lezgin 0.256272
Irish 0.256227
Tabasaran 0.256092
Italian_Bergamo 0.25605
English_Cornwall 0.256032
Polish_East 0.255991
Scottish 0.255955
Adygei 0.255913

Latvian 0.261845
Russian_North 0.26145
Estonian 0.260355
Finnish 0.260211
Lithuanian 0.260072
Udmurd 0.259804
Ingrian 0.259663
Surui 0.259637
Vepsa 0.259608
Karelian 0.259532
Karitiana 0.259482
Russian_West 0.259397
Russian_Central 0.259274
Wichi 0.259106
Saami 0.258982
Komi 0.258945
Icelandic 0.258854
Swedish 0.258814
Mordovian 0.258604
Irish 0.25859

Eyeballing the stats might be enough to get a general impression about what they mean, but to understand them properly it’s necessary to get technical with something like PAST3 (see here). That’s because f3-stats pick up shared genetic drift from all drift paths, and don’t especially focus on more recently shared ancestry. This can often lead to confusing outcomes.

But below are a few examples of linear models based on my f3-stats. Note that many Indo-European speakers, especially from Northern Europe, are foremost attracted to ancient samples from the Pontic-Caspian steppe. On the other hand, non-Indo-European speakers, from such far flung locations as the Caucasus and Iberia, show relatively stronger affinity to ancient samples from Anatolia and the Caucasus. Moreover, Uralic speakers show elevated affinity to ancient hunter-gatherer samples from Eastern Europe and Siberia. Makes sense, right?
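
For readers unfamiliar with the statistic itself: an outgroup f3-stat measures the genetic drift shared by two populations relative to an outgroup. Here is a minimal numpy sketch of the basic estimator, using invented allele frequencies and omitting the finite-sample corrections that real tools apply:

```python
import numpy as np

def outgroup_f3(p_out, p_x, p_y):
    """Outgroup f3-statistic f3(Outgroup; X, Y): the average, over SNPs, of
    the product of allele-frequency differences from the outgroup. Larger
    values mean X and Y share more genetic drift."""
    p_out, p_x, p_y = (np.asarray(p) for p in (p_out, p_x, p_y))
    return float(np.mean((p_out - p_x) * (p_out - p_y)))

# Toy allele frequencies at five SNPs: X and Y have drifted together away
# from the outgroup, so their shared-drift signal is positive.
p_out = [0.5, 0.4, 0.6, 0.5, 0.3]
p_x   = [0.8, 0.1, 0.9, 0.2, 0.6]
p_y   = [0.7, 0.2, 0.8, 0.3, 0.5]
print(f"f3 = {outgroup_f3(p_out, p_x, p_y):.4f}")
```

Because every population in a single column above is compared against the same test population and outgroup, the values are directly comparable down the column, which is what makes the rankings above meaningful.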

Based on these and other data, I’d say that Maykop and the culturally related Steppe Maykop were something of a multi-ethnic polity, with many near and far related languages spoken by its people, including perhaps Kartvelian, Northwest Caucasian, Yeniseian and Indo-European. But it seems to me that Proto-Indo-European was spoken by steppe foragers turned pastoralists just outside of the Maykop zone. And I’m quite sure that after the Maykop collapse various early Indo-European groups pushed across the Caucasus and deep into the Near East. Just take a look at the f3-stats and linear model for Hajji_Firuz_BA to see what I mean.
See also…
The Steppe Maykop enigma
Steppe Maykop: a buffer zone?
On Maykop ancestry in Yamnaya


2019 March 2

NGC 6302: The Butterfly Nebula
Image Credit: NASA, ESA, Hubble, HLA; Reprocessing & Copyright: Robert Eder

Explanation: The bright clusters and nebulae of planet Earth’s night sky are often named for flowers or insects. Though its wingspan covers over 3 light-years, NGC 6302 is no exception. With an estimated surface temperature of about 250,000 degrees C, the dying central star of this particular planetary nebula has become exceptionally hot, shining brightly in ultraviolet light but hidden from direct view by a dense torus of dust. This sharp close-up was recorded by the Hubble Space Telescope in 2009. The Hubble image data is reprocessed here, showing off the remarkable details of the complex planetary nebula. Cutting across a bright cavity of ionized gas, the dust torus surrounding the central star is near the center of this view, almost edge-on to the line-of-sight. Molecular hydrogen has been detected in the hot star’s dusty cosmic shroud. NGC 6302 lies about 4,000 light-years away in the arachnologically correct constellation of the Scorpion (Scorpius).
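
As a back-of-envelope illustration of why such a hot star “shines brightly in ultraviolet light”, Wien’s displacement law puts its peak blackbody emission deep in the extreme ultraviolet. This sketch treats the star as a blackbody and the quoted 250,000 degrees C as roughly 250,000 K:

```python
# Wien's displacement law: a blackbody's peak emission wavelength scales
# inversely with its temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_nm(temp_k):
    """Peak blackbody emission wavelength in nanometres."""
    return WIEN_B / temp_k * 1e9

print(f"Sun, ~5800 K: {peak_wavelength_nm(5800):.0f} nm (visible light)")
print(f"NGC 6302 central star, ~250,000 K: {peak_wavelength_nm(2.5e5):.1f} nm (extreme UV)")
```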

∞ Source: apod.nasa.gov/apod/ap190302.html

Galaxy physics beyond the halo boundary

The VIVA Survey (VLA Imaging of Virgo in Atomic gas) is an imaging survey of 48 spiral or spiral-like galaxies and five irregular systems. These 53 systems cover a wide range in masses and are located throughout the cluster, from the dense core to the low-density outer parts. They span a range in star formation properties and include the best candidates for strong ram-pressure interactions. © VIVA team

Models of the large-scale structure of galaxies in the Universe suffer from serious limitations when artificial boundaries are imposed at the virial radius of the dark matter halo. As MPA scientists demonstrate, environmental effects vary smoothly across the traditionally adopted halo boundary and need to be taken into account even in low-density environments.

Our understanding of the formation and evolution of galaxies has improved significantly during the past few decades. On the theoretical side, numerical simulations have been key to elucidating how large-scale structure in the dark matter component of the Universe emerges in the form of a cosmic web of filaments, sheets, and clusters of galaxies surrounding large under-dense regions, or voids. Galaxies form at the intersections of filaments and sheets, where gas reaches high enough densities so that particle interactions cause it to cool, to lose pressure support, and to collapse to form stars.

Computer simulations that include the hydrodynamics of gas are computationally very expensive. A well-defined, computationally efficient way to apply the theory of galaxy formation to a very large volume of the Universe is to use semi-analytical models. Such a semi-analytical model applies a set of equations describing the main physical processes influencing galaxies to a set of halo merger trees extracted from the simulation. These halo merger trees document which collapsed structures, or halos, merge with each other as structures in the Universe build up under the influence of gravitational forces. Combining these with the semi-analytic models allows predictions for the observed properties of galaxies as a function of cosmic epoch.

After the gas has cooled and collapsed under gravity to form a centrifugally supported disk galaxy within a dark matter halo similar to our own Milky Way, environmental processes can influence the subsequent evolution of the system. Direct observations have shown that the properties of galaxies depend on the environment in which they reside. The morphology-density relation derives from the observed fact that galaxies in massive clusters are more likely to be non-star-forming (or quenched). In such dense environments, gravitational tidal effects on galaxies become strong and this strips dark matter and stars from their outer regions. In physics, a body moving through a fluid medium is subject to ram pressure, where the relative bulk motion of the fluid exerts a drag force on the body. In galaxy groups and clusters, ram-pressure forces become strong enough to overcome the gravitational binding energy of the gas, so that the gas is stripped out of the galaxy and star formation ceases.

Velocity distribution of particles in the background shell of a satellite galaxy in the Millennium Simulation, projected along each of the three spatial dimensions. There are about 50,000 particles in the shell. The colours show the fraction of particles with a certain velocity. Cyan circles denote the velocity of the galaxy; white circles denote the derived mean velocity of the local background after decontamination. The radii of the solid circles are equal to the velocity dispersion of the two modelled Gaussians, while dashed circles show twice that value. The fraction of contaminant particles in the shell is 0.22. © MPA

An important question that has largely been neglected up to now is the radius from the cluster or group centre at which these effects start to become important. Most analytic and semi-analytical models of galaxy formation, including our in-house model L-Galaxies, adopt the virial radius as the boundary beyond which environmental effects are no longer important. However, a number of observational studies have suggested that environmental processes may influence gas and star formation in galaxies well beyond this radius. For example, so-called galactic conformity effects, the observed large-scale correlation between star formation in neighbouring galaxies, extend out to distances of several megaparsec, which is much larger than the virial radius of even the most massive clusters. In addition, hydrodynamical simulations suggest that the shock-heated gas of a dark matter halo can extend beyond its virial radius.

Environmental properties of galaxies in the Millennium Simulation within three times the virial radius of a massive halo. Each circle shows a galaxy; in total, 582 galaxies are visible. The colour of each circle corresponds to the local background density (in units of the mean density of the universe, see colour bar at the bottom). The arrows illustrate the velocity of each galaxy (blue) and the bulk velocity of its local background (purple). The dashed black circle corresponds to the virial radius for the main halo. The size of circles is equal to the subhalo size for central galaxies and larger satellites; smaller satellite galaxies are simply shown as dots. While galaxies tend to be smaller in the denser inner region, there is no abrupt change in density or bulk velocity at any radius. © MPA

A realistic and accurate model of galaxy formation and evolution needs to contain prescriptions for environmental effects for all galaxies in a simulation. We have developed a new way to measure the properties of the Local Background Environment (LBE), including its density and bulk motion, for every galaxy in the Millennium Simulation. The local background properties are measured in a spherical shell around each galaxy. Care must be taken to deconvolve simulation particles that are truly part of the background from those which are gravitationally bound to the galaxy. As an example, Figure 2 shows the velocity distribution of the particles in the shell of one of the galaxies in the simulation. The distribution is formed by two Gaussians; therefore, the deconvolution of the particles into background and bound populations is possible using Gaussian mixture modelling techniques.
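
The deconvolution step can be sketched in a few lines of numpy. The velocities, fractions and dispersions below are invented for illustration, and this toy model is one-dimensional, whereas the actual analysis works with the full three-dimensional velocity distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for the velocity deconvolution: particles bound to the
# galaxy cluster around its velocity, while true background particles follow
# a broader, offset distribution. All numbers here are invented.
bound = rng.normal(0.0, 30.0, 2000)         # gravitationally bound particles
background = rng.normal(150.0, 80.0, 8000)  # local background particles
v = np.concatenate([bound, background])

def fit_two_gaussians(x, iters=300):
    """Minimal EM fit of a two-component 1-D Gaussian mixture."""
    mu = np.quantile(x, [0.1, 0.9])          # crude initial guesses
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each particle
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2.0 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and dispersions
        n = resp.sum(axis=0)
        w = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sigma

w, mu, sigma = fit_two_gaussians(v)
bg = int(np.argmax(mu))  # background component: offset from the galaxy velocity
print(f"background: fraction ~{w[bg]:.2f}, mean ~{mu[bg]:.0f}, dispersion ~{sigma[bg]:.0f}")
```

The fit recovers the background fraction, mean velocity and dispersion even though the two populations overlap, which is the essence of the decontamination described above.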

In our work, we have analysed the properties of the local background as a function of distance from the group/cluster centre. We show that there is no abrupt change in density or bulk velocity at any radius, indicating that environmental effects vary smoothly across the traditionally adopted halo boundary. Preliminary results indicate that gas stripping may also play an unexpectedly important role in some galaxies in low-density environments, particularly those accelerated by a previous, close encounter with another galaxy, or those falling through parts of the cosmic web. In future work, we will explore the implications of these findings for cosmological studies of galaxy surveys, as well as for models for the formation and evolution of galaxy properties.


Ayromlou, Mohammadreza
PhD student
Phone: 2248
Email: ayromlou@mpa-garching.mpg.de
Room: 144

Kauffmann, Guinevere
Phone: 2013
Email: gamk@mpa-garching.mpg.de
Room: 121

Original publication

1. Ayromlou et al.

A New Method to Quantify Environmental Effects and Ram-Pressure Stripping in Cosmological Simulations 

in preparation

More Information

VIVA survey

L-Galaxies Semi-analytical model of galaxy formation developed at MPA

Millennium Simulation – Simulation of the large-scale structure evolution of the Universe

LHC: pushing computing to the limits

CERN – European Organization for Nuclear Research logo.

1 March, 2019

The Large Hadron Collider produced unprecedented volumes of data during its two multi-year runs, and, with its current upgrades, more computing challenges are in store 

Image above: Racks of computers in CERN’s computing centre are just a fraction of the hardware needed to store and process the data from the LHC (Image: Anthony Grossir/CERN).

At the end of 2018, the Large Hadron Collider (LHC) completed its second multi-year run (“Run 2”) that saw the machine reach a proton–proton collision energy of 13 TeV, the highest ever reached by a particle accelerator. During this run, from 2015 to 2018, LHC experiments produced unprecedented volumes of data with the machine’s performance exceeding all expectations.

This meant exceptional use of computing, with many records broken in terms of data acquisition, data rates and data volumes. The CERN Advanced Storage system (CASTOR), which relies on a tape-based backend for permanent data archiving, reached 330 PB of data (equivalent to 330 million gigabytes) stored on tape, the equivalent of over 2,000 years of 24/7 HD video recording. In November 2018 alone, a record-breaking 15.8 PB of data were recorded on tape, a remarkable achievement given that it exceeds what was recorded during the entire first year of the LHC’s Run 1.
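
The “2,000 years of HD video” comparison is easy to reproduce. The HD data rate used below (~40 Mbit/s of high-quality HD video, i.e. roughly 18 GB per hour) is our own assumption; the article does not state one:

```python
# Back-of-envelope check of the "over 2000 years of 24/7 HD video" figure.
PB_TO_GB = 1_000_000
HOURS_PER_YEAR = 24 * 365.25

archive_gb = 330 * PB_TO_GB   # CASTOR tape archive: 330 PB
gb_per_hour = 18              # assumed HD recording rate (~40 Mbit/s)
years = archive_gb / gb_per_hour / HOURS_PER_YEAR
print(f"~{years:.0f} years of continuous HD recording")
```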

The distributed storage system for the LHC experiments exceeded 200 PB of raw storage with about 600 million files. This system (EOS) is disk-based and open-source, and was developed at CERN for the extreme LHC computing requirements. As well as this, 830 PB of data and 1.1 billion files were transferred all over the world by File Transfer Service. To face these computing challenges and to better support the CERN experiments during Run 2, the entire computing infrastructure, and notably the storage systems, went through major upgrades and consolidation over the past few years.

Graphic above: Data (in terabytes) recorded on tape at CERN month-by-month. This plot shows the amount of data recorded on tape generated by the LHC experiments, other experiments, various back-ups and users. In 2018, over 115 PB of data in total (including about 88 PB of LHC data) were recorded on tape, with a record peak of 15.8 PB in November (Image: Esma Mobs/CERN).

New IT research-and-development activities have already begun in preparation for the LHC’s Run 3 (foreseen for 2021 to 2023). “Our new software, named CERN Tape Archive (CTA), is the new tape storage system for the custodial copy of the physics data and a replacement for its predecessor, CASTOR. The main goal of CTA is to make more efficient use of the tape drives, to handle the higher data rate anticipated during Run 3 and Run 4 of the LHC,” explains German Cancio, who leads the Tape, Archive & Backups storage section in CERN’s IT department. CTA will be deployed during the ongoing second long shutdown of the LHC (LS2), replacing CASTOR. Compared to the last year of Run 2, data archival is expected to be two-times higher during Run 3 and five-times higher or more during Run 4 (foreseen for 2026 to 2029).
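
Scaling from 2018, the last full year of Run 2 (about 115 PB recorded on tape in total), the quoted multipliers imply roughly the following yearly archival volumes. This is our reading of the text, not an official CERN projection:

```python
# Rough yearly tape-archival volumes implied by the quoted multipliers.
run2_2018_pb = 115  # total recorded on tape at CERN in 2018, in petabytes
projected = {
    "Run 3 (2021-2023), ~2x": 2 * run2_2018_pb,
    "Run 4 (2026-2029), ~5x or more": 5 * run2_2018_pb,
}
for run, pb in projected.items():
    print(f"{run}: ~{pb} PB per year")
```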

The LHC’s computing will continue to evolve. Most of the data collected in CERN’s data centre is highly valuable and needs to be preserved and stored for future generations of physicists. CERN’s IT department will therefore be taking advantage of LS2, the current maintenance and upgrade of the accelerator complex, to perform the required consolidation of the computing infrastructure. They will be upgrading the storage infrastructure and software to face the likely scalability and performance challenges when the LHC restarts in 2021 for Run 3.


CERN, the European Organization for Nuclear Research, is one of the world’s largest and most respected centres for scientific research. Its business is fundamental physics, finding out what the Universe is made of and how it works. At CERN, the world’s largest and most complex scientific instruments are used to study the basic constituents of matter — the fundamental particles. By studying what happens when these particles collide, physicists learn about the laws of Nature.

The instruments used at CERN are particle accelerators and detectors. Accelerators boost beams of particles to high energies before they are made to collide with each other or with stationary targets. Detectors observe and record the results of these collisions.

Related links:

Large Hadron Collider (LHC): https://home.cern/science/accelerators/large-hadron-collider

CERN Advanced Storage system (CASTOR): http://castor.web.cern.ch/

This system (EOS): https://eos.web.cern.ch/

File Transfer Service: https://fts.web.cern.ch/

For more information about the European Organization for Nuclear Research (CERN), visit: https://home.cern/

Image (mentioned), Graphic (mentioned), Text, Credits: CERN/Esra Ozcesmeci.

Best regards, Orbiter.ch

More support for Planet Nine

Coinciding with the three-year anniversary of their announcement hypothesizing the existence of a ninth planet in the solar system, Caltech’s Mike Brown and Konstantin Batygin are publishing a pair of papers analyzing the evidence for Planet Nine’s existence. The papers offer new details about the suspected nature and location of the planet, which has been the subject of an intense international search ever since Batygin and Brown’s 2016 announcement.

This illustration depicts orbits of distant Kuiper Belt objects and Planet Nine. Orbits rendered in purple are primarily controlled by Planet Nine’s gravity and exhibit tight orbital clustering. Green orbits, on the other hand, are strongly coupled to Neptune and exhibit a broader orbital dispersion. Updated orbital calculations suggest that Planet Nine is an approximately 5-Earth-mass planet that resides on a mildly eccentric orbit with a period of about ten thousand years [Credit: James Tuttle Keane/Caltech]

The first, titled “Orbital Clustering in the Distant Solar System,” was published in The Astronomical Journal. The Planet Nine hypothesis is founded on evidence suggesting that the clustering of objects in the Kuiper Belt, a field of icy bodies that lies beyond Neptune, is influenced by the gravitational tugs of an unseen planet. It has been an open question as to whether that clustering is indeed occurring, or whether it is an artifact resulting from bias in how and where Kuiper Belt objects are observed.

To assess whether observational bias is behind the apparent clustering, Brown and Batygin developed a method to quantify the amount of bias in each individual observation, then calculated the probability that the clustering is spurious. That probability, they found, is around one in 500.
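
The logic of such a significance test can be sketched with a toy Monte Carlo: measure how tightly a set of orbital angles clusters, then ask how often uniformly random angles cluster at least as tightly. The “observed” angles below are invented, and this sketch omits the per-object observational-bias weighting that is the heart of Brown and Batygin’s actual calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

def resultant_length(cos_sum, sin_sum, n):
    """Mean resultant length: ~1 for tightly clustered angles, ~0 for uniform."""
    return np.hypot(cos_sum, sin_sum) / n

# Hypothetical 'observed' orbital angles (degrees) for a handful of distant
# objects, loosely clustered -- illustrative values only.
observed = np.deg2rad([280.0, 290.0, 300.0, 310.0, 315.0, 330.0])
n = len(observed)
r_obs = resultant_length(np.cos(observed).sum(), np.sin(observed).sum(), n)

# Null hypothesis: angles uniform on the circle. How often does chance alone
# produce clustering at least this tight?
trials = 100_000
sims = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n))
r_sim = resultant_length(np.cos(sims).sum(axis=1), np.sin(sims).sum(axis=1), n)
p_value = (r_sim >= r_obs).mean()
print(f"observed concentration {r_obs:.2f}; chance probability ~{p_value:.4f}")
```

Even with only six invented objects, tight clustering is rarely produced by chance; folding in the survey-bias weights per observation is what turns this idea into the one-in-500 figure quoted above.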

“Though this analysis does not say anything directly about whether Planet Nine is there, it does indicate that the hypothesis rests upon a solid foundation,” says Brown, the Richard and Barbara Rosenberg Professor of Planetary Astronomy.

The second paper is titled “The Planet Nine Hypothesis,” and is an invited review that will be published in the next issue of Physics Reports. The paper provides thousands of new computer models of the dynamical evolution of the distant solar system and offers updated insight into the nature of Planet Nine, including an estimate that it is smaller and closer to the sun than previously suspected. Based on the new models, Batygin and Brown, together with Fred Adams and Juliette Becker (BS ’14) of the University of Michigan, concluded that Planet Nine has a mass of about five times that of Earth and an orbital semimajor axis in the neighborhood of 400 astronomical units (AU), making it smaller and closer to the sun than previously suspected, and potentially brighter. Each astronomical unit is equivalent to the distance between the center of Earth and the center of the sun, or about 149.6 million kilometers.
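
The quoted orbit is easy to sanity-check with Kepler’s third law: in solar units, the period in years is the semimajor axis in AU raised to the 3/2 power (a ~5-Earth-mass planet adds negligibly to the total mass). A 400 AU orbit gives a period of order ten thousand years, consistent with the illustration caption:

```python
# Kepler's third law in solar units: P[years]^2 = a[AU]^3 for a body
# orbiting the Sun, so P = a ** 1.5.
def orbital_period_years(a_au):
    return a_au ** 1.5

print(f"a = 400 AU -> P ~ {orbital_period_years(400):,.0f} years")
```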

“At five Earth masses, Planet Nine is likely to be very reminiscent of a typical extrasolar super-Earth,” says Batygin, an assistant professor of planetary science and Van Nuys Page Scholar. Super-Earths are planets with a mass greater than Earth’s, but substantially less than that of a gas giant. “It is the solar system’s missing link of planet formation. Over the last decade, surveys of extrasolar planets have revealed that similar-sized planets are very common around other sun-like stars. Planet Nine is going to be the closest thing we will find to a window into the properties of a typical planet of our galaxy.”

Batygin and Brown presented the first evidence that there might be a giant planet tracing a bizarre, highly elongated orbit through the outer solar system on January 20, 2016. That June, Brown and Batygin followed up with more details, including observational constraints on the planet’s location along its orbit.

Over the next two years, they developed theoretical models of the planet that explained other known phenomena, such as why some Kuiper Belt objects have a perpendicular orbit with respect to the plane of the solar system. The resulting models increased their confidence in Planet Nine’s existence.

After the initial announcement, astronomers around the world, including Brown and Batygin, began searching for observational evidence of the new planet. Although Brown and Batygin have always accepted the possibility that Planet Nine might not exist, they say that the more they examine the orbital dynamics of the solar system, the stronger the evidence supporting it seems.

“My favorite characteristic of the Planet Nine hypothesis is that it is observationally testable,” Batygin says. “The prospect of one day seeing real images of Planet Nine is absolutely electrifying. Although finding Planet Nine astronomically is a great challenge, I’m very optimistic that we will image it within the next decade.”

Author: Robert Perkins | Source: California Institute of Technology [February 27, 2019]



Anemic galaxy reveals deficiencies in ultra-diffuse galaxy formation theory

A team of astronomers led by the University of California Observatories (UCO) have studied in great detail a galaxy so faint and in such pristine condition it has acted as a time capsule, sealed shortly after the dawn of our universe only to be opened by the newest technology at W. M. Keck Observatory.

The ultra-diffuse galaxy DGSAT I [Credit: Aaron Romanowsky/University of California Observatories/D. Martinez-Delgado/ARI]

Using the Keck Cosmic Web Imager (KCWI), the team discovered a bizarre, solitary ultra-diffuse galaxy (UDG). This transparent, ghost-like galaxy, named DGSAT I, contradicts the current theory on the formation of UDGs. All previously studied UDGs have been in galaxy clusters, which formed the basis for the theory that they were once “normal” galaxies that have since been blasted into a fluffy mess by violent events within their cluster.

“There seemed to be a relatively tidy picture of the origins of galaxies, from spirals to ellipticals, and from giants to dwarfs,” said lead author Ignacio Martín-Navarro, a postdoctoral scholar at UCO. “However, the recent discovery of UDGs raised new questions about how complete this picture is. All of the UDGs that have been studied in detail so far were within galaxy clusters: dense regions of violent interaction where the galaxies’ characteristics at birth have been scrambled up by a difficult adolescence.”

Because DGSAT I is a rare example of a UDG found away from a cluster, it can provide a clearer window into the past: there has not been much activity around it that could taint its composition and evolution. To find out what caused this galaxy to be so sparse in starlight, the team used KCWI to map its composition.

“The chemical composition of a galaxy provides a record of the ambient conditions when it was formed, like the way that trace elements in the human body can reveal a lifetime of eating habits and exposure to pollutants,” said co-author Aaron Romanowsky, a UCO astronomer and Associate Professor of the Physics and Astronomy Department at San José State University.

DGSAT I surprised researchers with its chemical makeup. Today’s galaxies typically have more heavy elements in them, like iron and magnesium, compared to ancient galaxies born just after the Big Bang. But KCWI revealed that DGSAT I appears to be anemic; the galaxy’s iron content is remarkably low, as if it formed from a nearly pristine gas cloud that was unpolluted by the supernova death of previous stars. And yet DGSAT I’s magnesium levels are normal, consistent with what astronomers expect to find in modern galaxies. This is strange because both of these elements are released in supernova events; you typically don’t find one without the other.

“We don’t understand this combination of pollutants, but one of our ideas is that extreme blasts of supernovae caused the galaxy to pulsate in size during its adolescence, in a way that retains magnesium preferentially to iron,” said Romanowsky.

UDGs are a relatively new class of galaxies, first discovered in 2015. They are as large as the Milky Way but have 100 to 1,000 times fewer stars than our own galaxy, making them barely visible and difficult to study.

KCWI is designed to overcome that obstacle with its extreme sensitivity and ability to capture high-resolution spectra of the faintest and farthest objects in our universe like UDGs.

“There is only one other instrument in the world with the capabilities of KCWI that allows us to measure the chemical composition of low surface brightness galaxies,” said co-author Jean Brodie, Professor of Astronomy and Astrophysics at UCO. “But that one is in the Southern hemisphere where we don’t have a good view of DGSAT I, which lies in the North.”

KCWI performs a type of observation called integral field spectroscopy, which captures data in 3-D instead of 2-D. Traditionally, there were two ways for astronomers to study celestial objects, either through imaging or through spectroscopy. This instrument breaks the barrier between the two. In a single observation, KCWI captures both the image, as well as the spectrum of each pixel in the image, which reveals the object’s physical properties such as composition, temperature, velocity, and more.
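The data structure behind integral field spectroscopy can be pictured as a 3-D array with two spatial axes and one wavelength axis, so every spatial pixel carries a full spectrum. A minimal sketch with synthetic arrays (not the actual KCWI pipeline or file format):

```python
import numpy as np

# Synthetic integral-field data cube, indexed as (wavelength, y, x).
# Real KCWI cubes are FITS files; this toy array only illustrates the idea.
n_wave, ny, nx = 100, 4, 5
rng = np.random.default_rng(0)
cube = rng.random((n_wave, ny, nx))

# Collapsing along the wavelength axis gives a 2-D image of the field.
image = cube.sum(axis=0)

# Slicing at one spatial pixel gives that pixel's full spectrum.
spectrum = cube[:, 2, 3]

print(image.shape, spectrum.shape)  # (4, 5) (100,)
```

From such a cube, per-pixel spectral features (line strengths, Doppler shifts) yield maps of composition, temperature, and velocity across the object.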

“It’s these kinds of observations that we built KCWI for; to continue pushing the frontier of getting the most information out of the faintest objects,” said John O’Meara, chief scientist at Keck Observatory. “We’re very excited to see how many more objects like DGSAT I we can study with Keck and continue to transform our understanding of how galaxies form and change with time.”

The researchers plan to use KCWI again, this time to conduct a deeper observation of another UDG similar to DGSAT I; they plan to tease out its composition in greater detail in hopes of unraveling more data that could help astronomers zero in on the origin of UDGs.

“Various ideas have been presented, from the mundane to the exotic. One intriguing possibility is that some of these ghostly galaxies are living fossils from the dawn of the universe when stars and galaxies emerged in a much different environment than today,” said Romanowsky. “Their birth is truly a fascinating mystery that our team is working on solving.”

The team’s results are published in the Monthly Notices of the Royal Astronomical Society.

Source: W. M. Keck Observatory [February 27, 2019]



New buzz around insect DNA analysis and biodiversity estimates

In the face of declining insect numbers across the globe, scientists continue to expand our knowledge of invertebrate organisms and their biodiversity. Insects are the most abundant animals on planet Earth – they outweigh all humanity by a factor of 17. Their abundance, variety, and ubiquity mean insects play a foundational role in food webs and ecosystems, from the bees that pollinate the flowers of food crops to the termites that recycle dead trees. And even as insect populations dwindle worldwide, new species are still being discovered.

The researchers cross a dry stream bed on the remote island of Hauturu, in search of samples [Credit: Andrew Dopheide]

Researchers on the remote forested island of Hauturu, New Zealand (also known as Little Barrier Island) have compiled a staggering inventory of invertebrate biodiversity using DNA sequencing, adding a significant number of invertebrates to GenBank – an open access database of all publicly available DNA sequences. The results are published this week in the Ecological Society of America’s journal Ecological Applications.

The number of invertebrate species that exist globally is uncertain, and it is difficult to characterize entire invertebrate communities using traditional methods that require the examination of individual specimens by an expert taxonomist.

This is where DNA sequencing comes in. The method is hailed as a tool for resolving the biodiversity of Earth’s underexplored ecosystems, allowing invertebrate specimens to be identified through more efficient molecular analysis.

Andrew Dopheide – a researcher at the University of Auckland – and colleagues employed a combination of old-school field biology with next-generation DNA sequencing to explore the use of combined datasets as a basis for estimating total invertebrate biodiversity on Hauturu island. They collected specimens from leaf litter samples, pitfall traps, and the soil itself.

“In a New Zealand context, we are not aware of any other ecosystem-wide DNA-based surveys of terrestrial invertebrate biodiversity on this scale,” explained Dopheide. “Additionally, there was no information about invertebrate biodiversity on Hauturu, despite this being one of New Zealand’s most pristine and important natural ecosystems.”

At the end of the study, they estimated that the above-ground community includes over 1,000 arthropod species (animals with an exoskeleton, a segmented body, and paired jointed appendages), of which 770 are insects and 344 are beetles.

The soil they sequenced yielded even richer samples. Soils are a promising substrate for DNA analyses of biodiversity because they contain diverse communities of organisms as well as biological debris including DNA molecules. Scientists know much less about soil communities than about above-ground communities.

From the soil samples, they estimated 6,856 arthropod species (excluding mites), of which almost 4,000 are insects.
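The paper’s exact estimator isn’t named here; a standard way to extrapolate total richness from samples like these is the Chao1 estimator, which uses the counts of species seen exactly once or twice to gauge how many species were missed entirely. A hypothetical sketch:

```python
def chao1(counts):
    """Chao1 lower-bound estimate of total species richness.

    counts: number of specimens (or sequence reads) observed per species.
    Many singletons relative to doubletons imply many species still unseen.
    """
    s_obs = sum(1 for c in counts if c > 0)   # species actually observed
    f1 = sum(1 for c in counts if c == 1)     # singletons
    f2 = sum(1 for c in counts if c == 2)     # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2      # bias-corrected form
    return s_obs + f1 * f1 / (2 * f2)

# Toy community: 9 observed species, 4 singletons, 2 doubletons.
observed = [10, 7, 5, 2, 2, 1, 1, 1, 1]
print(chao1(observed))  # 13.0 -> about 4 species likely went undetected
```

The same logic applies whether "counts" come from pitfall-trap specimens or from clustered DNA sequences.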

Beetles (order Coleoptera) were most abundant, followed by sawflies, wasps, bees and ants (order Hymenoptera), flies (Diptera), butterflies and moths (Lepidoptera), and various Amphipoda – a diverse order of small, shrimp-like crustaceans that mostly occur in the ocean, but also in freshwater and some terrestrial habitats.

In total, they added over 2500 new DNA sequences to GenBank, which houses data from more than 100,000 distinct organisms, and has become an important database for research in biological fields.

“We were surprised that so few of the invertebrates were already represented in GenBank,” said Dopheide, “which suggested that we had recovered mostly new or little-studied species despite using very traditional collection methods, and emphasized the lack of knowledge about these important organisms… It’s likely that many of the invertebrates without DNA sequences in GenBank are indeed new species, but we don’t know for sure.”

With insect populations dwindling worldwide, at least there are still new species being sequenced and documented. This work by Dopheide et al. has marked the trail and set the bar for combining old-school natural science with DNA sequencing to characterize the species that dominate the structure and function of ecosystems… while marveling at how many of them are beetles.

Source: Ecological Society of America [February 27, 2019]



How fungi influence global plant colonization

The symbiosis of plants and fungi has a great influence on the worldwide spread of plant species. In some cases, it even acts like a filter. This has been discovered by an international team of researchers with participation from the University of Göttingen. The results appeared in the journal Nature Ecology & Evolution.

It isn’t just differences in climate and geology, but also the availability of symbionts such as mycorrhizal fungi, that influence plant diversity at different locations, for example on the dry east coast of Tenerife [Credit: Holger Kreft]

In the colonisation of islands by plant species, it isn’t just factors like island size, isolation and geological development that play an important role, but also the interactions between species. The scientists found that the symbiosis of plant and fungus — the mycorrhiza — is of particular importance. The two organisms exchange nutrients via the plant’s fine root system: the fungus receives carbohydrates from the plant; the plant receives nutrients that the fungus has absorbed from the soil.

“For the first time, new data on the worldwide distribution of plant species in 1,100 island and mainland regions allows us to investigate the influence of this interaction on a global scale,” says Dr Patrick Weigelt from the University of Göttingen’s Department of Biodiversity, Macroecology and Biogeography, who worked on the study. The results: because plant and fungus depend on each other, both partners must reach an island for the symbiosis to establish, so mycorrhizal plant species are naturally less frequent on islands and the colonisation of remote islands is hindered.

The nearer to the equator, the more frequently the plant-fungus symbiosis occurs – for example in the species-rich tropical rainforest of the Amboró National Park in Bolivia [Credit: Patrick Weigelt]

The lack of this symbiotic relationship may act like a brake on the spread of the plants. This is not the case for plant species introduced by humans, as fungi and plants are often introduced together. Head of Department, Professor Holger Kreft, adds, “The proportion of plant species with mycorrhiza interactions also increases from the poles to the equator.” One of the most prominent biogeographic patterns, the increase in the number of species from the poles to the tropics, is closely related to this symbiosis.

Dr Camille Delavaux, lead author from the University of Kansas (US), explains, “We show that the plant symbiotic association with mycorrhizal fungi is an overlooked driver of global plant biogeographic patterns. This has important consequences for our understanding of contemporary island biogeography and human-mediated plant invasions.” The results show that complex relationships between different organisms are crucial for understanding global diversity patterns and preserving biological diversity. “The absence of an interaction partner can disrupt ecosystems and make them more susceptible to biological invasions,” Weigelt stresses.

Source: University of Göttingen [February 27, 2019]



Ice-free Arctic summers could happen on earlier side of predictions

The Arctic Ocean could become ice-free in the summer in the next 20 years due to a natural, long-term warming phase in the tropical Pacific that adds to human-caused warming, according to a new study.

Arctic sea ice likely reached its 2018 lowest extent on Sept. 19 and again on Sept. 23, according to NASA and the NASA-supported National Snow and Ice Data Center (NSIDC) [Credit: NASA Goddard/Katy Mersmann]

Computer models predict climate change will cause the Arctic to be nearly free of sea ice during the summer by the middle of this century, unless human greenhouse gas emissions are greatly reduced.

But a closer examination of long-term temperature cycles in the tropical Pacific points towards an ice-free Arctic in September, the month with the least sea ice, on the earlier side of forecasts, according to a new study in the AGU journal Geophysical Research Letters.

“The trajectory is towards becoming ice-free in the summer but there is uncertainty as to when that’s going to occur,” said James Screen, an associate professor in climate science at the University of Exeter in the U.K. and the lead author of the new study.

There are different climate models used by researchers to predict when the first ice-free Arctic September will occur. Most models project there will be fewer than 1 million square kilometers of sea ice around the middle of this century, but projections of when that will occur vary within 20-year windows due to natural climate fluctuations.

The climate model used in the new study predicts an ice-free Arctic summer sometime between 2030 and 2050, if greenhouse gases continue to rise.

By accounting for a long-term warming phase in the tropical Pacific, the new research shows an ice-free Arctic is more likely to occur on the earlier side of that window, closer to 2030 than 2050.

Long-term temperature changes

Ocean temperatures in the Pacific always vary from month-to-month and from year-to-year, but slowly evolving ocean processes cause long-term temperature shifts lasting between 10 and 30 years. These shifts in temperature, known as the Interdecadal Pacific Oscillation (IPO), translate into an approximately 0.5 degree Celsius (0.9 degree Fahrenheit) shift in ocean surface temperature in the tropics over the 10- to 30-year cycle.

Around five years ago, the Pacific began to switch from the cold to warm phase of the IPO. Screen and his co-author plotted predictions of when an ice-free Arctic would occur in model experiments where the IPO was shifting in the same direction as the real world. They compared these to predictions where the IPO was moving in the opposite direction, that is, switching from a warm to cold phase.

They found model predictions that were in sync with actual conditions showed an earlier ice-free Arctic, by seven years on average, than those predictions that were out of step with reality.

Screen says these results need to be interpreted as part of a bigger picture. Human-caused climate change is the main reason for sea ice loss, so the timing of the first ice-free summer will also depend considerably on whether greenhouse gas emissions continue to rise or are curtailed. But the new results do suggest that we are more likely to see an ice-free September on the earlier side of the 20-year window of predictions.
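The logic of the phase comparison can be sketched with a toy model (all numbers below are illustrative, not taken from the study): a steady forced decline in September sea-ice extent plus a small IPO-like oscillation, where the oscillation’s phase shifts the year the 1-million-km² threshold is first crossed.

```python
import math

def first_ice_free_year(phase):
    """Toy model of September Arctic sea-ice extent (million km^2):
    a linear anthropogenic decline plus a small IPO-like sinusoid.
    Returns the first year extent drops below the 1 million km^2
    threshold commonly used to define an 'ice-free' Arctic."""
    for year in range(2019, 2101):
        trend = 4.5 - 0.08 * (year - 2019)                        # forced decline
        ipo = 0.4 * math.sin(2 * math.pi * (year - 2019) / 25 + phase)  # ~25-yr cycle
        if trend + ipo < 1.0:
            return year
    return None

warm = first_ice_free_year(phase=0.0)      # oscillation aligned with a warm phase
cold = first_ice_free_year(phase=math.pi)  # opposite (cold-phase) alignment
print(warm, cold)  # the warm-phase alignment crosses the threshold earlier
```

The point of the sketch is only that, for the same underlying trend, the phase of a decadal oscillation can move the first threshold crossing by several years within the model's uncertainty window.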

“You can hedge your bets,” he said. “The shift in the IPO means there’s more chance of it being on the earlier end of that window than on the later end.”

Alexandra Jahn, an assistant professor in atmospheric and oceanic studies at the University of Colorado Boulder who was not involved in the new study, said the paper is very interesting and will likely inspire more research.

“The finding that we may be able to use the IPO phase to narrow the uncertainty range of over 20 years of when we may first see an ice-free Arctic Ocean in September is very promising,” she said.

Jennifer Kay, an assistant professor in environmental science at the University of Colorado Boulder who was also not involved in the new research, said the study “is an important advance in our understanding of regional Arctic sea ice loss, the chaotic nature of ice loss, and the connections between Arctic sea ice loss and extrapolar regions.”

Source: American Geophysical Union [February 27, 2019]



Biologists find the long and short of it when it comes to chromosomes

A team of biologists has uncovered a mechanism that determines faithful inheritance of short chromosomes during the reproductive process. The discovery, reported in the journal Nature Communications, elucidates a key aspect of inheritance — deviation from which can lead to infertility, miscarriages, or birth defects such as Down syndrome.

[Credit: Dr Microbe/Getty Images]

The research centers on how short chromosomes can secure a genetic exchange. Genetic exchanges are critical for chromosome inheritance, but are in limited supply.

How short chromosomes ensure a genetic exchange is of great interest to scientists given the vulnerability of short chromosomes.

“Short chromosomes are at a higher risk for errors that can lead to genetic afflictions because their innate short lengths leave less material for genetic exchange,” explains Viji Subramanian, a post-doctoral researcher at New York University and the paper’s lead author. “However, these chromosomes acquire extra help to create a high density of genetic exchanges — but it hadn’t been understood how short chromosomes received this assistance.”

To explore this question, the researchers, who also included Andreas Hochwagen, an associate professor in NYU’s Department of Biology, studied this process in yeast — a model organism that shares many fundamental processes of chromosome inheritance with humans.

Overall, they found that vast regions near the ends of both short and long chromosomes are inherently primed for a high density of genetic exchanges — the scientists labeled these end-adjacent regions (EARs). Of particular note, a high density of genetic exchanges in EARs is conserved in several organisms, including birds and humans.

Significantly, the researchers noted that EARs are of similar size on all chromosomes. This means that EARs only occupy a limited fraction of long chromosomes but almost the entirety of short chromosomes. This difference drives up the density of genetic exchanges, specifically on short chromosomes, and does so without cells having to directly measure chromosome lengths.
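The arithmetic behind this can be sketched directly (the EAR and chromosome sizes below are illustrative, not the study’s measurements): with similar-sized EARs at both ends of every chromosome, the fraction of a chromosome lying inside EARs scales inversely with its length.

```python
def ear_fraction(chrom_length_kb, ear_size_kb=100):
    """Fraction of a chromosome covered by its two end-adjacent regions (EARs).

    The study reports EARs of similar size on all chromosomes, so shorter
    chromosomes are covered almost entirely while long ones are not.
    Sizes in kb are hypothetical placeholders.
    """
    covered = min(2 * ear_size_kb, chrom_length_kb)  # EARs cannot exceed the chromosome
    return covered / chrom_length_kb

short = ear_fraction(230)    # a short chromosome: mostly EAR
long_ = ear_fraction(1500)   # a long chromosome: mostly non-EAR interior
print(f"short chromosome: {short:.0%} in EARs, long chromosome: {long_:.0%}")
```

Because exchange density is elevated inside EARs, this fixed-size geometry alone concentrates genetic exchanges on short chromosomes, without the cell measuring chromosome length.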

Source: New York University [February 27, 2019]



Castle Fraser Prehistoric Recumbent Stone Circle, Aberdeenshire, Scotland, 20.2.19.


Castle Fraser Stone Row, Aberdeenshire, Scotland, 20.2.19.



Auchquhorthies Recumbent Stone Circle, Portlethen, Aberdeenshire, 23.2.19.

I’m pretty sure the recumbent stone has been badly damaged at this site recently. It needs further inspection. #historicscotland

