Showing posts with label Hubble Constant. Show all posts

Thursday, April 10, 2025

Shedding Light on Candles That Burn a Bit Too Bright

Kepler's Supernova, the remnant of which is shown here in X-ray observations from the Chandra X-ray Observatory, is the most recent known Type Ia supernova in the Milky Way. It was discovered in 1604.  Credit:
NASA/CXC/Univ of Texas at Arlington/M. Millard et al.



Title: 1991T-Like Type Ia Supernovae as an Extension of the Normal Population
Authors: John T. O’Brien et al.
First Author’s Institution: Michigan State University
Status: Published in ApJ

Figure 1: Example of a “Branch classification” diagram for Type Ia supernovae. This figure compares the width of two silicon lines in Type Ia supernovae. Four groups are shown: shallow silicons (SS), broad lines (BL), cools (CL), and core normals (CN). Event SN 1991T is a member of the shallow silicons group (green triangles), indicating that the widths of the minor and major silicon lines are smaller than normal Type Ia supernovae (core normals). Credit:
Burrow et al. 2020

Figure 2: A plot showing the fraction of intermediate-mass elements (IME) as a function of the ionization ratio of the authors’ simulations. Moving to the right on the bottom axis indicates higher ionization states, whereas moving up on the left axis indicates more intermediate-mass elements for a given total ejecta mass. The break between blue stars (normal Type Ia supernovae) and orange stars (1991T-like supernovae) is called the “turnover.” Because the turnover is fairly smooth, it suggests that the progenitor, or stellar origin, of 1991T-like events might be similar to normal events. Credit: O’Brien et al. 2024


Famously, Type Ia supernovae have been used to measure the local Hubble constant, or the rate at which our universe expands. These objects earned the nickname “standard candles” since their near-constant intrinsic luminosities allow us to measure distances in space. Slowly but surely, however, we’ve learned that some of our standard candles aren’t that “standard” after all…

Historically, Type Ia supernovae were proposed to develop from the transfer of mass between two stars, where the star receiving the mass is a carbon–oxygen white dwarf, the core of a low- to intermediate-mass star that has reached the end of its life. After the white dwarf accretes a certain amount of mass, it explodes as a Type Ia supernova. Spectroscopic studies of these supernovae over the decades have revealed a wide range of absorption features; one of the major absorption lines comes from silicon, a key element produced in the explosion. In fact, a subclassification scheme of Type Ia supernovae, often referred to as the Branch classification, emerged based on the relative strengths of particular absorption features commonly identified in the spectra of these events (see Figure 1). One of these subclassifications is “shallow silicon,” which signifies a lack of silicon produced in the explosion. This subclassification (compared to the others in Figure 1) shows how Type Ia supernovae are like snowflakes: they share very similar structures yet vary in detail.

The supernova SN 1991T was the first observed event of its kind. What was so special about it? The event was over-luminous: brighter than the near-constant intrinsic luminosity expected of a typical Type Ia supernova. Later, as observations improved, more events like SN 1991T were detected, contributing to the growing class of aptly named “1991T-like” events. The spectra of these events have shallow silicon lines compared to the normal range of Type Ia supernovae. The peculiarity of these absorption lines hints at something unique about these events, and the answer lies in studying the ejecta, the expelled material in which chemical elements are produced. This article is a step toward understanding what differentiates these events from the norm and what we can infer about their origins.

Outside of this work, recent hydrodynamic simulations of various progenitor models, or stellar origins, have successfully recreated some of the observable signatures of Type Ia supernovae, including synthetic, or computed, optical spectra of theoretical events. Except, as previously mentioned, the observable signatures of Type Ia supernovae can vary quite a bit amongst all these subtypes and classifications! Instead of hydrodynamic simulations, the authors of this article chose to reconstruct the supernova ejecta using Bayesian inference and active learning conducted on early-time (within a few days after explosion) optical spectra of already observed normal and 1991T-like events.

This is the time when 1991T-like events show their distinguishing features! Training on these data, the authors built an emulator that links the optical spectra to the ejecta properties of normal and 1991T-like events.
The team’s emulator successfully recreated both normal and 1991T-like events, at least with 68% confidence (think one sigma!). Furthermore, the authors discovered that the variety in their model parameters illuminates some differences between 1991T-like events and normal Type Ia supernovae. Remember those silicon features? The model recreated those pesky absorption lines, particularly the major iron and silicon features experts look for. It successfully reproduced the suppressed (that is, shallower) silicon absorption features, which indicate a low fraction of intermediate-mass elements (spanning roughly silicon through calcium) produced in the explosion relative to the total ejecta mass. The model also matched the deep, major iron line seen in 1991T-like events. The scarcity of intermediate-mass elements in 1991T-like supernovae suggests that these elements sit at higher ionization states than in normal Type Ia supernovae (see Figure 2). This suggests that there isn’t just a single mechanism that produces a 1991T-like supernova; it’s likely a combination of different physical processes.

The question now becomes: what can we learn about 1991T-like origins from this? Can a single progenitor model lead to different pathways? Or do we need different progenitor models to explain these differences in spectroscopic features? The authors believe fewer intermediate-mass elements and higher ionization states hint at normal and 1991T-like events sharing similar progenitor systems. In other words, 1991T-like events might just be an extension, or extreme, of the normal population. Perhaps the candle just burned a bit too bright!

Aside from this work, in addition to these over-luminous 1991T-like events, there also exists another interesting class of Type Ia supernovae dubbed “super-luminous,” which are roughly one, maybe two, magnitudes brighter than normal Type Ia supernovae. (Only in astronomy could the words over-luminous and super-luminous mean different things, right?) Because of this, researchers advocate for Type Ia supernovae to be called “standardizable” candles instead because, as you now know, their intrinsic luminosities really aren’t that uniform after all.

Original astrobite edited by Ansh Gupta and Dee Dunne.




About the author, Mckenzie Ferrari:

I’m a grad student at the University of Chicago. Most of my research focuses on simulations of Type Ia supernovae and galaxy formation and evolution.



Editor’s Note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. As part of the partnership between the AAS and astrobites, we occasionally repost astrobites content here at AAS Nova. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org.


Tuesday, April 01, 2025

A New Cosmic Ruler: Measuring the Hubble Constant with Type II Supernovae

Figure 1: Type II supernova sample used for the Hubble constant measurement. The images show the host galaxies of the ten supernovae, with the explosion sites marked by red star symbols. The images are aligned with a redshift scale reflecting the relative distances of the supernovae from Earth. © MPA

Figure 2: Spectral fitting and the Hubble diagram for Type II supernovae. The top panels show two examples of spectral fits used to determine the supernova distances. By comparing observed spectra (black) with model predictions (colour), researchers can extract key physical properties and infer the intrinsic brightness, enabling a direct distance measurement. The bottom panel presents a Hubble diagram, where the measured luminosity distances of the supernovae are plotted against their redshifts. The data points represent individual spectral observations, meaning multiple measurements can exist for each supernova. The dashed black line represents the best-fit relationship between distance and redshift, and its slope is determined by the Hubble constant. The grey-shaded regions indicate the uncertainties for this fit (68% and 95% confidence intervals). The best-fit value for the Hubble constant and its 68% confidence interval are H₀ = 74.9 ± 1.9 km/s/Mpc. © MPA

Figure 3: Artist’s impression of the Hubble tension, showing the two different approaches to measuring the Hubble constant as two bridges that do not quite connect. The depicted early-Universe measurements yield an average value of 67.4 km/s/Mpc, the local measurements an average value of 73.0 km/s/Mpc. The new measurement from this study, based on Type II supernovae (orange), is completely independent of all other measurements and provides compelling support for the Hubble tension. The local route also includes results from various incarnations of the cosmic distance ladder, as well as other direct methods such as gravitational lensing and water masers. Image Credit: Original image by NOIRLab/NSF/AURA/J. da Silva, sourced from NOIRLab (CC BY 4.0), modified by S. Taubenberger.



The expansion rate of the Universe, quantified by the Hubble constant (H₀), remains one of the most debated quantities in cosmology. Measurements based on nearby objects yield a higher value than those inferred from observations of the early Universe—a discrepancy known as the "Hubble tension". Researchers at the Max Planck Institute for Astrophysics and their collaborators have now presented a new, independent determination of H₀ using Type II supernovae. By modeling the light from these exploding stars with advanced radiation transport techniques, they were able to directly measure distances without relying on the traditional distance ladder. The resulting H₀ value agrees with other local measurements and adds to the growing body of evidence for the Hubble tension, offering an important cross-check and a promising path toward resolving this cosmic puzzle.

One of the biggest puzzles in modern cosmology is the ongoing discrepancy in measurements of the Hubble constant (H₀) between local and early Universe probes, known as the “Hubble tension”. Since H₀ describes the current expansion rate of the Universe, it is a local quantity and can only be directly measured using nearby objects. In contrast, methods based on the early Universe, such as those using the cosmic microwave background (CMB), do not measure H₀ directly. Instead, they infer its value by assuming a cosmological model to extrapolate from the conditions 13.8 billion years ago to today. The fact that these two approaches yield conflicting values—with local distance-ladder measurements giving a higher H₀ than early-Universe methods—suggests that our standard cosmological model may be incomplete, potentially pointing to new physics.

Researchers at the Max Planck Institute for Astrophysics (MPA) and their collaborators have explored an independent way of measuring H₀ using Type II supernovae (SNe II). Unlike traditional approaches, this method does not rely on the cosmic distance ladder, making it a powerful cross-check against existing techniques. Their results provide a new, highly precise measurement of H₀ and further contribute to the debate over the expansion rate of the Universe.

Determining the Hubble constant requires accurate measurements of distances to astronomical objects at different redshifts. The most widely used technique, the cosmic distance ladder, relies on several interconnected steps: distances to nearby objects (such as Cepheid variable stars) are used to calibrate farther-reaching indicators such as Type Ia supernovae (SNe Ia), which then serve as standard candles to measure distances to faraway galaxies.

However, the reliance on multiple steps introduces possible systematic uncertainties, and different teams report slightly different results. A direct measurement based on known physics offers a valuable complementary approach, as it is affected by different systematics and does not depend on empirical calibrations. This is where Type II supernovae provide an exciting alternative.

Type II supernovae occur when massive, hydrogen-rich stars explode at the end of their lives. While their brightness varies depending on factors such as temperature, expansion velocity, and chemical composition, it can be accurately predicted using radiation transport models. This allows researchers to determine their intrinsic luminosity and use them as distance indicators, independent of empirical calibration methods.

A critical step in this process is identifying the best-fitting model for each observed supernova. Key physical properties leave distinct imprints on the supernova spectrum: temperature shapes the overall continuum, expansion velocity sets the width of spectral lines via Doppler broadening, and chemical composition determines the strength of specific absorption and emission features. By systematically comparing observed spectra to simulated spectra from radiative transfer models, researchers can find the model that most accurately describes the supernova’s physical conditions. With such a well-matched model the intrinsic brightness—and thus the distance—can be precisely determined.
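The Doppler-broadening point can be made concrete with a one-line scaling: a line at rest wavelength λ emitted by material moving at velocity v is shifted by roughly Δλ = λ·v/c. The sketch below is illustrative (not the researchers' code), and the 8,000 km/s velocity is just a representative photospheric value for a Type II supernova.

```python
# Illustrative sketch: how ejecta velocity sets the scale of spectral
# line broadening via the Doppler effect.
C_KM_S = 299_792.458  # speed of light in km/s

def doppler_width(rest_wavelength_nm: float, velocity_km_s: float) -> float:
    """Wavelength shift Δλ = λ · v/c for material moving at velocity v."""
    return rest_wavelength_nm * velocity_km_s / C_KM_S

# H-alpha (656.3 nm) in ejecta expanding at ~8,000 km/s
print(round(doppler_width(656.3, 8000.0), 1))  # ≈ 17.5 nm
```

A shift of tens of nanometres on a 656 nm line is easily resolved in optical spectra, which is why line widths pin down the expansion velocity so well.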

To make this process efficient, the team used a spectral emulator, an advanced machine-learning tool trained on precomputed simulations. Instead of running time-intensive radiation transport calculations for every supernova, the emulator rapidly interpolates between models, allowing for fast and accurate spectral fitting.

The research team applied their spectral modeling approach to a sample of ten Type II supernovae at redshifts between 0.01 and 0.04, using publicly available data not specifically designed for distance measurements (Fig. 1). Despite the limitations of the dataset, their method yielded reliable distances. By constructing a Hubble diagram from these measurements (Fig. 2), they obtained an independent estimate of the Hubble constant: H₀ = 74.9 ± 1.9 km/s/Mpc.
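At these low redshifts the Hubble diagram reduces to a straight-line fit, v ≈ c·z ≈ H₀·d, so H₀ is essentially the slope of recession velocity against distance. A minimal sketch of that fit, using made-up (redshift, distance) pairs rather than the paper's actual measurements:

```python
# Toy Hubble-diagram fit: H0 is the slope of velocity (c·z) vs. distance.
# The data points are illustrative placeholders, not the paper's sample.
C_KM_S = 299_792.458  # speed of light in km/s

# (redshift, luminosity distance in Mpc) — hypothetical values for z = 0.01–0.04
sample = [(0.010, 40.2), (0.018, 71.5), (0.025, 100.8),
          (0.033, 131.9), (0.040, 160.4)]

# Least-squares slope through the origin: H0 = Σ(v·d) / Σ(d²)
num = sum(C_KM_S * z * d for z, d in sample)
den = sum(d * d for z, d in sample)
H0 = num / den
print(f"H0 ≈ {H0:.1f} km/s/Mpc")  # prints H0 ≈ 74.8 km/s/Mpc
```

The real analysis fits for H₀ with full per-measurement uncertainties (hence the quoted ±1.9 km/s/Mpc), but the slope picture above is the core of it.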

This value is consistent with most other local measurements, such as those from Cepheid-calibrated supernovae, and supports the tension with early-Universe probes. The achieved precision is comparable to that of the most competitive techniques, demonstrating that Type II supernovae are a promising tool for cosmology (Fig. 3).

This study serves as a proof of concept, showing that Type II supernovae can provide precise and reliable distance measurements in the Hubble flow. Future work will focus on increasing the sample size and improving the accuracy of the technique by using dedicated observations. To this end, the researchers have assembled the adH0cc dataset (https://adh0cc.github.io/), a collection of Type II supernova observations from the ESO Very Large Telescope, specifically designed for precise distance measurements. This dataset will serve as a key resource for refining the method. By providing an independent check on the local determination of H₀, Type II supernovae help astrophysicists tackle one of the most pressing questions in cosmology today: Is the Hubble tension real, and if so, what does it tell us about the fundamental nature of the Universe?





Authors:

Christian Vogl, Postdoc
cvogl@mpa-garching.mpg.de

Stefan Taubenberger
tauben@mpa-garching.mpg.de

Wolfgang Hillebrandt, Emeritus Director


Original publication

Vogl, Christian; Taubenberger, Stefan; et al.
No rungs attached: A distance-ladder free determination of the Hubble constant through type II supernova spectral modelling
submitted to A&A

Saturday, January 18, 2025

NASA Celebrates Edwin Hubble's Discovery of a New Universe

M31 Cepheid Variable Star V1
Credits/Image: NASA, ESA, Hubble Heritage Project (STScI, AURA)
Acknowledgment: Robert Gendler

Compass Scale Image of V1 in M31
Credits/Image: NASA, ESA, Hubble Heritage Project (STScI, AURA)

Cepheid Variable Star V1 in Andromeda Galaxy
Credits/Image: NASA, ESA, Hubble Heritage Project (STScI, AURA)
Acknowledgment: Robert Gendler



For humans, the most important star in the universe is our Sun. The second-most important star is nestled inside the Andromeda galaxy. Don't go looking for it — the flickering star is 2.2 million light-years away, and is 1/100,000th the brightness of the faintest star visible to the human eye.

Yet, a century ago, its discovery by Edwin Hubble, then an astronomer at Carnegie Observatories, opened humanity's eyes to how large the universe really is and revealed that our Milky Way galaxy is just one of hundreds of billions of galaxies. The discovery ushered in a coming-of-age for humans as a curious species that could scientifically ponder our own creation through the message of starlight. Carnegie Science and NASA are celebrating this centennial at the 245th meeting of the American Astronomical Society in Washington, D.C.

The seemingly inauspicious star, simply named V1, flung open a Pandora's box full of mysteries about time and space that still challenge astronomers today. Using the largest telescope in the world at that time, the Carnegie-funded 100-inch Hooker Telescope at Mount Wilson Observatory in California, Hubble discovered the demure star in 1923. V1 belongs to a rare class of pulsating stars called Cepheid variables, which serve as milepost markers for distant celestial objects. There are no tape measures in space, but by the early 20th century Henrietta Swan Leavitt had discovered that the pulsation period of a Cepheid variable is directly tied to its luminosity.
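Leavitt's period–luminosity relation turns a Cepheid's pulsation period into an intrinsic brightness, and comparing that to its apparent brightness yields a distance. A rough sketch of the arithmetic, using one published Galactic calibration of the relation and a hypothetical apparent magnitude (not a real measurement of V1):

```python
import math

# Sketch of the Cepheid distance method. The period–luminosity
# coefficients are one published calibration; the apparent magnitude
# below is a hypothetical illustrative value.
def cepheid_distance_pc(period_days: float, apparent_mag: float) -> float:
    # Absolute magnitude from the period–luminosity (Leavitt) relation
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus μ = m − M, then d = 10^(1 + μ/5) parsecs
    mu = apparent_mag - abs_mag
    return 10 ** (1.0 + mu / 5.0)

# A ~31-day Cepheid (V1's approximate period) with an assumed m ≈ 19
d = cepheid_distance_pc(31.4, 19.0)
print(f"{d / 1e6:.2f} Mpc")  # roughly 0.7 Mpc, i.e. ~2 million light-years
```

The longer the period, the intrinsically brighter the star, which is exactly what let Hubble place V1 far beyond the Milky Way.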

Many astronomers long believed that the edge of the Milky Way marked the edge of the entire universe. But Hubble determined that V1, located inside the Andromeda "nebula," was at a distance that far exceeded anything in our own Milky Way galaxy. This led Hubble to the jaw-dropping realization that the universe extends far beyond our own galaxy.

In fact, Hubble had suspected there was a larger universe out there, and here was the proof. He was so amazed that he scribbled an exclamation mark on the photographic plate of Andromeda that pinpointed the variable star.

As a result, the science of cosmology exploded almost overnight. Hubble's contemporary, the distinguished Harvard astronomer Harlow Shapley, upon Hubble notifying him of the discovery, was devastated. "Here is the letter that destroyed my universe," he lamented to fellow astronomer Cecilia Payne-Gaposchkin, who was in his office when he opened Hubble's message.

Just three years earlier, Shapley had presented his observational interpretation of a much smaller universe in a debate one evening at the Smithsonian Museum of Natural History in Washington. He maintained that the Milky Way galaxy was so huge, it must encompass the entirety of the universe. Shapley insisted that the mysteriously fuzzy "spiral nebulae," such as Andromeda, were simply stars forming on the periphery of our Milky Way, and inconsequential.

Little could Hubble have imagined that 70 years later, an extraordinary telescope named after him, lofted hundreds of miles above the Earth, would continue his legacy. The marvelous telescope made "Hubble" a household word, synonymous with wondrous astronomy.

Today, NASA's Hubble Space Telescope pushes the frontiers of knowledge over 10 times farther than Edwin Hubble could ever see. The space telescope has lifted the curtain on a restless universe full of active stars, colliding galaxies, and runaway black holes, amid the celestial fireworks of the interplay between matter and energy.

Edwin Hubble was the first astronomer to take the initial steps that would ultimately lead to the Hubble Space Telescope, revealing a seemingly infinite ocean of galaxies. He thought that, despite their abundance, galaxies came in just a few specific shapes: pinwheel spirals, football-shaped ellipticals, and oddball irregular galaxies. He thought these might be clues to galaxy evolution, but the answer had to wait for the Hubble Space Telescope's legendary Hubble Deep Field in 1995.

The most impactful finding from Edwin Hubble's analysis was that the farther away a galaxy is, the faster it appears to recede from Earth. The universe looked like it was expanding like a balloon. This was based on Hubble tying galaxy distances to the reddening of light, the redshift, which increased proportionally the farther away the galaxies are.

The redshift data were first collected by Lowell Observatory astronomer Vesto Slipher, who spectroscopically studied the "spiral nebulae" a decade before Hubble. Slipher did not know they were extragalactic, but Hubble made the connection. Slipher first interpreted his redshift data as an example of the Doppler effect, the phenomenon by which light is stretched to longer, redder wavelengths when a source is moving away from us. To Slipher, it was curious that all the spiral nebulae appeared to be moving away from Earth.

Two years before Hubble published his findings, the Belgian physicist and Jesuit priest Georges Lemaître analyzed the Hubble and Slipher observations and first concluded that the universe is expanding. The proportionality between galaxies' distances and redshifts is today termed the Hubble–Lemaître law. Because the universe appeared to be uniformly expanding, Lemaître further realized that the expansion could be run back in time, like rewinding a movie, until the universe was unimaginably small, hot, and dense. It wasn't until 1949 that the term "big bang" came into fashion.

This was a relief to Edwin Hubble's contemporary, Albert Einstein, who had deduced that the universe could not remain stationary without imploding under gravity's pull. The rate of cosmic expansion is now known as the Hubble constant.

Ironically, Hubble himself never fully accepted the runaway universe as an interpretation of the redshift data. He suspected that some unknown physics phenomenon was giving the illusion that the galaxies were flying away from each other. He was partly right: in Einstein's theory of general relativity, the cosmological redshift arises because light waves are stretched along with expanding space. The galaxies only appear to be zooming through the universe; space is expanding instead.

After decades of measurements, the Hubble telescope came along to nail down the expansion rate precisely, giving the universe an age of 13.8 billion years. This required establishing the first rung of what astronomers call the "cosmic distance ladder," the yardstick to far-flung galaxies: Cepheid variable stars, cousins to V1, which the Hubble telescope can detect more than 100 times farther from Earth than the star Edwin Hubble first found.
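The age scale follows almost directly from the expansion rate: the "Hubble time" 1/H₀ lands near 14 billion years, and pinning down the precise 13.8-billion-year figure additionally requires knowing the universe's matter and dark-energy content. A back-of-the-envelope check:

```python
# Back-of-the-envelope sketch: the Hubble time 1/H0 sets the rough age
# scale of the universe (the exact age also depends on the cosmic
# matter and dark-energy content).
MPC_KM = 3.0857e19     # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16  # seconds in one billion years

def hubble_time_gyr(H0_km_s_mpc: float) -> float:
    seconds = MPC_KM / H0_km_s_mpc  # 1/H0, converted to seconds
    return seconds / SEC_PER_GYR

print(f"{hubble_time_gyr(70.0):.1f} Gyr")  # ≈ 14.0 Gyr
```

That the naive 1/H₀ figure and the detailed cosmological fit agree so closely is itself a nontrivial feature of our universe.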

Astrophysics was turned on its head again in 1998 when the Hubble telescope and other observatories discovered that the universe was expanding at an ever-faster rate, driven by a phenomenon dubbed "dark energy." Einstein had first toyed with this idea of a repulsive form of gravity in space, calling it the cosmological constant.

Even more mysteriously, the current expansion rate appears to be different than what modern cosmological models of the developing universe would predict, further confounding theoreticians. Today astronomers are wrestling with the idea that whatever is accelerating the universe may be changing over time. NASA's Roman Space Telescope, with the ability to do large cosmic surveys, should lead to new insights into the behavior of dark matter and dark energy. Roman will likely measure the Hubble constant via lensed supernovae.

This grand century-long adventure, plumbing depths of the unknown, began with Hubble photographing a large smudge of light, the Andromeda galaxy, at the Mount Wilson Observatory high above Los Angeles.

In short, Edwin Hubble is the man who wiped away the ancient universe and discovered a new universe that would shrink humanity's self-perception into being an insignificant speck in the cosmos.

The Hubble Space Telescope has been operating for over three decades and continues to make ground-breaking discoveries that shape our fundamental understanding of the universe. Hubble is a project of international cooperation between NASA and ESA (European Space Agency). NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope and mission operations. Lockheed Martin Space, based in Denver, also supports mission operations at Goddard. The Space Telescope Science Institute in Baltimore, which is operated by the Association of Universities for Research in Astronomy, conducts Hubble science operations for NASA.




About This Release

Media Contact:

Ray Villard
Space Telescope Science Institute, Baltimore, Maryland




Wednesday, September 18, 2024

The light of knowledge

A spiral galaxy, tilted at an angle, with irregularly-shaped arms. It appears large and close-up. The centre glows in a yellowish colour, while the disc around it is a bluer colour, due to light from older and newer stars. Dark reddish threads of dust cover the galaxy, and there are many large, shining pink spots in the disc, where stars are forming. Credit: ESA/Hubble & NASA, F. Belfiore, W. Yuan, J. Lee and the PHANGS-HST Team, A. Riess, K. Takáts, D. de Martin & M. Zamani (ESA/Hubble)

The magnificent galaxy featured in this Hubble Picture of the Week is NGC 1559. It is a barred spiral galaxy located in the constellation Reticulum near the Large Magellanic Cloud, but much more distant at approximately 35 million light-years from Earth. Hubble last visited this object in 2018. The brilliant light captured in this image offers a wealth of information, which thanks to Hubble can be put to use by both scientists and the public.

This picture is composed of a whopping ten different images taken by the Hubble Space Telescope, each filtered to collect light from a specific wavelength or range of wavelengths. It spans Hubble’s sensitivity to light, from ultraviolet around 275 nanometres through blue, green and red to near-infrared at 1600 nanometres. This allows information about many different astrophysical processes in the galaxy to be recorded: a notable example is the red 656-nanometre filter used here. Hydrogen atoms which get ionised can emit light at this particular wavelength, called H-alpha emission. New stars forming in a molecular cloud, made mostly of hydrogen gas, emit copious amounts of ultraviolet light which is absorbed by the cloud, but which ionises it and causes it to glow with this H-alpha light. Therefore, filtering to detect only this light provides a reliable means to detect areas of star formation (called H II regions), shown in this image by the bright red and pink colours of the blossoming patches filling NGC 1559’s spiral arms.
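Incidentally, the 656-nanometre H-alpha wavelength itself can be recovered from the Rydberg formula for hydrogen's n = 3 → 2 transition; a quick check:

```python
# The H-alpha wavelength from the Rydberg formula for hydrogen,
# 1/λ = R_H · (1/2² − 1/3²), i.e. the n = 3 → 2 transition.
R_H = 1.09678e7  # Rydberg constant for hydrogen, per metre

wavelength_m = 1.0 / (R_H * (1 / 2**2 - 1 / 3**2))
print(f"{wavelength_m * 1e9:.1f} nm")  # ≈ 656.5 nm
```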

These ten images come from six different observing programmes with Hubble, running from 2009 all the way up to the present year. These programmes were led by teams of astronomers from around the world with a variety of scientific goals, ranging from studying ionised gas and star formation, to following up on a supernova, to tracking variable stars as a contribution to calculating the Hubble constant. The data from all of these observations live on in the Hubble archive, available for anyone to use — not only for new science, but also to create spectacular images like this one! This image of NGC 1559, then, is a reminder of the incredible opportunities that the Hubble Space Telescope has provided and continues to provide.

Besides Hubble’s observations, astronomers are using the NASA/ESA/CSA James Webb Space Telescope to research this galaxy in even greater depth. This Webb image from February showcases the galaxy in near- and mid-infrared light.



Sunday, June 23, 2024

Seeing Triple

SN H0pe

What at first appears to be a glowing strand of molten iron in the image above is something far wilder: a distant galaxy whose light has been stretched into galactic taffy by the immense gravity of an intervening galaxy cluster. This phenomenon, known as strong gravitational lensing, multiplies and magnifies images of faraway sources, allowing astronomers to use massive objects like galaxy clusters as natural telescopes. Look closely at the zoomed-in version of the image: three points of light stand out against the glow of the lensed galaxy. These three dots are multiple images of a single supernova cataloged as SN H0pe. Researchers plan to use this rare multiply imaged supernova to calculate the Hubble constant, which quantifies the universe’s expansion rate. Using observations from JWST, a team led by Justin Pierel (Space Telescope Science Institute) calculated the time delay of the light from the images, finding arrival times offset by 49 and 117 days. The value of the Hubble constant derived from these observations will be reported in a future publication. In the meantime, be sure to check out the details of these initial calculations in the article linked below.

Citation

“JWST Photometric Time-Delay and Magnification Measurements for the Triply Imaged Type Ia ‘SN H0pe’ at z = 1.78,” J. D. R. Pierel et al. 2024 ApJ 967 50. doi:10.3847/1538-4357/ad3c43



Monday, February 12, 2024

NASA's Roman to Use Rare Events to Calculate Expansion Rate of Universe


Supernova Refsdal (Hubble image)
Credits: Image: NASA, ESA, Steve A. Rodney (JHU), Tommaso Treu (UCLA), Patrick Kelly (UC Berkeley), Jennifer Lotz (STScI), Marc Postman (STScI), Zolt G. Levay (STScI), FrontierSN Team, GLASS Team, HFF Team (STScI), CLASH Team

Distant Supernova Multiply Imaged by Foreground Cluster
Illustration: NASA, ESA, Ann Feild (STScI), Joseph DePasquale (STScI)
Science: NASA, ESA, Steve A. Rodney (JHU), Tommaso Treu (UCLA), Patrick Kelly (UC Berkeley), Jennifer Lotz (STScI), Marc Postman (STScI), Zolt G. Levay (STScI), FrontierSN Team, GLASS Team, HFF Team (STScI), CLASH Team




Astronomers investigating one of the most pressing mysteries of the cosmos – the rate at which the universe is expanding – are readying themselves to study this puzzle in a new way using NASA’s Nancy Grace Roman Space Telescope. After the telescope launches, expected by May 2027, astronomers will mine Roman’s wide swaths of images for gravitationally lensed supernovae, which can be used to measure the expansion rate of the universe.

There are multiple independent ways astronomers can measure the present expansion rate of the universe, known as the Hubble constant. Different techniques have yielded different values, a discrepancy referred to as the Hubble tension. Much of Roman’s cosmological investigation will focus on elusive dark energy, which affects how the universe’s expansion changes over time. One primary tool for these investigations is a fairly traditional method that compares the intrinsic brightness of objects like Type Ia supernovae to their perceived brightness to determine distances. Alternatively, astronomers could use Roman to examine gravitationally lensed supernovae. This approach to the Hubble constant is distinct from traditional methods because it is based on geometry, not brightness.

“Roman is the ideal tool to let the study of gravitationally lensed supernovae take off,” said Lou Strolger of the Space Telescope Science Institute (STScI) in Baltimore, co-lead of the team preparing for Roman’s study of these objects. “They are rare, and very hard to find. We have had to get lucky in detecting a few of them early enough. Roman’s extensive field of view and repeated imaging in high resolution will help those chances.”

Using various observatories like NASA’s Hubble Space Telescope and James Webb Space Telescope, astronomers have discovered just eight gravitationally lensed supernovae in the universe. However, only two of those eight have been viable candidates to measure the Hubble constant due to the type of supernovae they are and the duration of their time-delayed imaging.

Gravitational lensing occurs when the light from an object like a stellar explosion, on its way to Earth, passes through a galaxy or galaxy cluster and gets deflected by the immense gravitational field. The light splits along different paths and forms multiple images of the supernova on the sky as we see it. Depending on the differences between the paths, the supernova images appear delayed by hours to months, or even years. Precisely measuring this difference in arrival times between the multiple images leads to a combination of distances that constrain the Hubble constant.
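The delay-to-distance logic in the paragraph above can be sketched numerically. Everything below (the delay, the Fermat-potential difference, the redshift factor `f`, and the function names) is an illustrative assumption of mine, not a value from any real lens or from the Roman team's pipeline:

```python
# Hedged toy sketch of time-delay cosmography: a measured delay dt between
# two lensed images satisfies dt = D_dt * dphi / c, where dphi is the
# Fermat-potential difference set by the lens model and D_dt is the
# "time-delay distance", which scales as 1/H0.

C_KM_S = 299_792.458      # speed of light in km/s
MPC_KM = 3.0857e19        # kilometers per megaparsec
DAY_S = 86_400.0          # seconds per day

def time_delay_distance_mpc(dt_days, dphi_rad2):
    """Geometric distance implied by a delay: D_dt = c * dt / dphi."""
    return C_KM_S * (dt_days * DAY_S) / dphi_rad2 / MPC_KM

def hubble_from_delay(dt_days, dphi_rad2, f=0.47):
    """Toy inversion: writing D_dt = f * c / H0, where f is a stand-in for
    the cosmology-dependent redshift factor, gives H0 = f * c / D_dt
    in km/s/Mpc."""
    return f * C_KM_S / time_delay_distance_mpc(dt_days, dphi_rad2)

# A 30-day delay with dphi ~ 1.26e-11 rad^2 implies D_dt ~ 2000 Mpc and
# H0 ~ 70 km/s/Mpc for this made-up lens geometry.
```

The key property is visible in the last function: for a fixed lens model, a longer measured delay means a larger implied distance and therefore a smaller Hubble constant.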

“Probing these distances in a fundamentally different way than more common methods, with the same observatory in this case, can help shed light on why various measurement techniques have yielded different results,” added Justin Pierel of STScI, Strolger’s co-lead on the program.

Finding the Needle in the Haystack

Roman's extensive surveys will be able to map the universe much faster than Hubble can, with the telescope “seeing” more than 100 times the area of Hubble in a single image.

“Rather than gathering several pictures of trees, this new telescope will allow us to see the entire forest in a single snapshot,” Pierel explained.

In particular, the High Latitude Time Domain Survey will observe the same area of sky repeatedly, which will allow astronomers to study targets that change over time. This means there will be an extraordinary amount of data – over 5 billion pixels each time – to sift through in order to find these very rare events.

A team led by Strolger and Pierel at STScI is laying the groundwork for finding gravitationally lensed supernovae in Roman data through a project funded by NASA’s Research Opportunities in Space and Earth Science (ROSES) Nancy Grace Roman Space Telescope Research and Support Participation Opportunities program.

“Because these are rare, leveraging the full potential of gravitationally lensed supernovae depends on a high level of preparation,” said Pierel. “We want to make all the tools for finding these supernovae ready upfront so we don’t waste any time sifting through terabytes of data when it arrives.”

The project will be carried out by a team of researchers from various NASA centers and universities around the country.

The preparation will occur in several stages. The team will create data reduction pipelines designed to automatically detect gravitationally lensed supernovae in Roman imaging. To train those pipelines, the researchers will also create simulated imaging: 50,000 simulated lenses are needed, and there are only 10,000 actual lenses currently known.

The data reduction pipelines created by Strolger and Pierel’s team will complement pipelines being created to study dark energy with Type Ia supernovae.

“Roman is truly the first opportunity to create a gold-standard sample of gravitationally lensed supernovae,” concluded Strolger. “All our preparations now will produce all the components needed to ensure we can effectively leverage the enormous potential for cosmology.”

The Nancy Grace Roman Space Telescope is managed at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, with participation by NASA's Jet Propulsion Laboratory and Caltech/IPAC in Southern California, the Space Telescope Science Institute in Baltimore, and a science team comprising scientists from various research institutions. The primary industrial partners are Ball Aerospace and Technologies Corporation in Boulder, Colorado; L3Harris Technologies in Melbourne, Florida; and Teledyne Scientific & Imaging in Thousand Oaks, California.




Media Contact:

Hannah Braun
Space Telescope Science Institute, Baltimore, Maryland

Christine Pulliam
Space Telescope Science Institute, Baltimore, Maryland


Sunday, March 21, 2021

LCO Scientists Use Supernovae to Make a New Measurement of the Hubble Constant

 

SN 2011fe in the galaxy M101 is a Type Ia supernova, the type used as standard candles in this study.  This composite image was created from data taken by Las Cumbres Observatory and the Palomar Transient Factory.  Credit: BJ Fulton / LCO / PTF.

One of the longest-standing and most controversial questions in astronomy is: how fast is the universe expanding today? New work, including measurements made by Las Cumbres Observatory, has applied new techniques to the problem and found a surprising answer.

Astronomers call the local expansion rate of the universe the Hubble constant, H0, (pronounced H-naught). Measurements have gotten extremely precise in recent years — some claim to have measured it to better than a few percent. Different groups have come up with results that vary by more than 10% — far larger than the claimed uncertainty. Complicating matters, the measurements seem to cluster high or low depending on where they are made in the universe. The Hubble constant measured from nearby supernovae tends to be high, while measurements built up from the afterglow of the Big Bang — the Cosmic Microwave Background — give a low value. Some have argued that this is a crisis for the field, one requiring “new physics.” Perhaps an unknown property of Dark Energy is causing the local expansion rate of the universe to be highly sensitive to the distance at which it is measured. Others argue that there must be some kind of mistake in building the “distance ladder” — in using one set of distance indicators to calibrate another.

The new study, released March 12 in the journal Astronomy & Astrophysics, involves an international team of scientists led by Nandita Khetan, a PhD student at the Gran Sasso Science Institute in Italy, and an associate researcher at the Istituto Nazionale di Fisica Nucleare. It used the Surface Brightness Fluctuations of galaxies to calibrate the distances to nature’s best distance indicators — Type Ia supernovae. Type Ia supernovae are used as “standard candles” to map out distances in the universe. They were used to determine that the universe was accelerating in its expansion, leading to the discovery of Dark Energy that resulted in the 2011 Nobel Prize in Physics.

The standard candle method relies on measuring the apparent brightness of a distant known light, say a 100W light bulb, and using the difference between the apparent and intrinsic brightness to work out how far away the light is. This requires knowing the intrinsic power output — the wattage — of the “standard candle,” something that is unknown for Type Ia supernovae. Astronomers have to calibrate their brightness using a handful of nearby supernovae in galaxies with distances determined by other means. Traditionally this has been done with galaxies whose distances are known from observations of Cepheid variable stars. The new research swaps out the Cepheids for a different fundamental calibrator, Surface Brightness Fluctuations. This technique measures the graininess of a galaxy’s image: the farther away a galaxy is, the more its individual stars blur together into a smooth glow. It is similar to how a street will appear rough when photographed up close, but smooth when seen from farther away.
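The wattage analogy above maps onto a one-line formula, the distance modulus m − M = 5·log10(d / 10 pc). A minimal sketch of the arithmetic, where the absolute magnitude of roughly −19.3 for a typical Type Ia supernova is a round illustrative number and the function name is mine, not anything from the study:

```python
# Invert the distance modulus to turn apparent (m) and intrinsic (M)
# brightness into a distance in parsecs.

def candle_distance_pc(apparent_mag, absolute_mag):
    """d = 10 ** ((m - M + 5) / 5) parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A Type Ia supernova (M ~ -19.3, illustrative) seen at apparent
# magnitude 15 would lie at roughly 72 Mpc:
d_pc = candle_distance_pc(15.0, -19.3)
d_mpc = d_pc / 1e6
```

Calibrating M — here simply assumed — is exactly the step for which the study substitutes Surface Brightness Fluctuations for Cepheids.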

The new study found an answer that is in between the two discordant values of the expansion rate of the universe. This argues that perhaps new physics isn’t needed after all. It may be that previous researchers overestimated the precision of their studies.

Andy Howell, a staff scientist at Las Cumbres Observatory, and adjunct faculty at the University of California Santa Barbara, is the Principal Investigator of the Global Supernova Project, a worldwide collaboration that provided some of the observations of supernovae used in the study. He explains, “At a recent conference about this Hubble Constant crisis, after each speaker walked through their methodology, I couldn’t find any problems with what they were doing. I started to question whether we do need new physics to explain the different Hubble constants. But now we, like several studies before ours, found an answer in the middle. Maybe there’s some weirdness to some of the other measurements that we don’t fully understand. That’s more comforting, because you don’t want to upend our understanding of physics unless you have to.”

The new work does not undermine the discovery or characterization of Dark Energy, since that relies on only relative, not absolute, measurements of supernovae and has been verified by other means.

The new supernova observations were obtained with Las Cumbres Observatory’s worldwide network of robotic telescopes, specifically designed to study time-variable phenomena like supernovae. Howell adds, “Supernovae are hard to observe, because you need just a little bit of telescope time per night, over months. But a robotic telescope network is perfect for this — nobody has to travel — the telescopes can make the observations wherever and whenever they are needed. This is what we built Las Cumbres Observatory for and I’m delighted to see it being used to refine our understanding of the universe.”

The study “A new measurement of the Hubble constant using Type Ia supernovae calibrated with surface brightness fluctuations” involves an international team of scientists with expertise in supernova observations, Surface Brightness Fluctuations, and theory, working at the Gran Sasso Science Institute, INAF, INFN, DARK-Niels Bohr Institute, University of Copenhagen, Centre for Astrophysics and Supercomputing, Swinburne University, Las Cumbres Observatory, UC Santa Barbara, and UC Davis.

Source:  Las Cumbres Observatory (LCO)/News


Thursday, June 11, 2020

New Distance Measurements Bolster Challenge to Basic Model of Universe

Artist's conception illustrating a disk of water-bearing gas orbiting the supermassive black hole at the core of a distant galaxy. By observing maser emission from such disks, astronomers can use geometry to measure the distance to the galaxies, a key requirement for calculating the Hubble Constant. Credit: Sophia Dagnello, NRAO/AUI/NSF.  Hi-Res File

A new set of precision distance measurements made with an international collection of radio telescopes have greatly increased the likelihood that theorists need to revise the “standard model” that describes the fundamental nature of the Universe.

The new distance measurements allowed astronomers to refine their calculation of the Hubble Constant, the expansion rate of the Universe, a value important for testing the theoretical model describing the composition and evolution of the Universe. The problem is that the new measurements exacerbate a discrepancy between previously measured values of the Hubble Constant and the value predicted by the model when applied to measurements of the cosmic microwave background made by the Planck satellite.

“We find that galaxies are nearer than predicted by the standard model of cosmology, corroborating a problem identified in other types of distance measurements. There has been debate over whether this problem lies in the model itself or in the measurements used to test it. Our work uses a distance measurement technique completely independent of all others, and we reinforce the disparity between measured and predicted values. It is likely that the basic cosmological model involved in the predictions is the problem,” said James Braatz, of the National Radio Astronomy Observatory (NRAO).

Braatz leads the Megamaser Cosmology Project, an international effort to measure the Hubble Constant by finding galaxies with specific properties that lend themselves to yielding precise geometric distances. The project has used the National Science Foundation’s Very Long Baseline Array (VLBA), Karl G. Jansky Very Large Array (VLA), and Robert C. Byrd Green Bank Telescope (GBT), along with the Effelsberg telescope in Germany. The team reported their latest results in the Astrophysical Journal Letters.

Edwin Hubble, after whom the orbiting Hubble Space Telescope is named, first calculated the expansion rate of the universe (the Hubble Constant) in 1929 by measuring the distances to galaxies and their recession speeds. The more distant a galaxy is, the greater its recession speed from Earth. Today, the Hubble Constant remains a fundamental property of observational cosmology and a focus of many modern studies.
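Hubble's linear relationship can be sketched with synthetic numbers. The mock galaxy sample and the assumed "true" value of 70 km/s/Mpc below are illustrative only, with the large real-world scatter omitted:

```python
# Generate galaxies obeying v = H0 * d, then recover H0 as the
# least-squares slope of a line through the origin.

H0_TRUE = 70.0                                   # km/s/Mpc, made up
distances_mpc = [10.0, 50.0, 100.0, 200.0, 400.0]
velocities_km_s = [H0_TRUE * d for d in distances_mpc]

# Slope through the origin: H0 = sum(v*d) / sum(d*d)
h0_fit = (
    sum(v * d for v, d in zip(velocities_km_s, distances_mpc))
    / sum(d * d for d in distances_mpc)
)
```

With noiseless data the fit simply returns 70; the real difficulty, as the next paragraph explains, is measuring the distances on the horizontal axis at all.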

Measuring recession speeds of galaxies is relatively straightforward. Determining cosmic distances, however, has been a difficult task for astronomers. For objects in our own Milky Way Galaxy, astronomers can get distances by measuring the apparent shift in the object’s position when viewed from opposite sides of Earth’s orbit around the Sun, an effect called parallax. The first such measurement of a star’s parallax distance came in 1838.
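The parallax rule amounts to a single division: a star whose apparent position shifts by p arcseconds lies at d = 1/p parsecs. A minimal sketch, where the 61 Cygni parallax is the modern value quoted only for illustration:

```python
# Distance from trigonometric parallax: d [parsec] = 1 / p [arcsec].

def parallax_distance_pc(parallax_arcsec):
    return 1.0 / parallax_arcsec

# Bessel's 1838 target, 61 Cygni, has a modern parallax near 0.286",
# putting it at roughly 3.5 parsecs.
d_61_cyg = parallax_distance_pc(0.286)
```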

Beyond our own Galaxy, parallaxes are too small to measure, so astronomers have relied on objects called “standard candles,” so named because their intrinsic brightness is presumed to be known. The distance to an object of known brightness can be calculated based on how dim the object appears from Earth. These standard candles include a class of stars called Cepheid variables and a specific type of stellar explosion called a Type Ia supernova.

Another method of estimating the expansion rate involves observing distant quasars whose light is bent by the gravitational effect of a foreground galaxy into multiple images. When the quasar varies in brightness, the change appears in the different images at different times. Measuring this time difference, along with calculations of the geometry of the light-bending, yields an estimate of the expansion rate.

Determinations of the Hubble Constant based on the standard candles and the gravitationally-lensed quasars have produced figures of 73-74 kilometers per second per megaparsec (a megaparsec, the distance unit favored by astronomers, is about 3.26 million light-years).

However, predictions of the Hubble Constant from the standard cosmological model when applied to measurements of the cosmic microwave background (CMB) — the leftover radiation from the Big Bang — produce a value of 67.4, a significant and troubling difference. This difference, which astronomers say is beyond the experimental errors in the observations, has serious implications for the standard model.

The model is called Lambda Cold Dark Matter, or Lambda CDM, where “Lambda” refers to Einstein’s cosmological constant and is a representation of dark energy. The model divides the composition of the Universe mainly between ordinary matter, dark matter, and dark energy, and describes how the Universe has evolved since the Big Bang.

The Megamaser Cosmology Project focuses on galaxies with disks of water-bearing molecular gas orbiting supermassive black holes at the galaxies’ centers. If the orbiting disk is seen nearly edge-on from Earth, bright spots of radio emission, called masers — radio analogs to visible-light lasers — can be used to determine both the physical size of the disk and its angular extent, and therefore, through geometry, its distance. The project’s team uses the worldwide collection of radio telescopes to make the precision measurements required for this technique.
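The maser geometry described above can be sketched with toy numbers. All values and the helper name below are invented for illustration, not taken from the project's measurements:

```python
# Doppler shifts give the orbital speed v of maser spots; drifting line
# frequencies give their centripetal acceleration a. The physical orbit
# radius is then r = v**2 / a, and comparing r with the measured angular
# radius theta yields a purely geometric distance D = r / theta.

MPC_KM = 3.0857e19        # kilometers per megaparsec

def maser_distance_mpc(v_km_s, a_km_s2, theta_rad):
    r_km = v_km_s ** 2 / a_km_s2        # physical disk radius, km
    return r_km / theta_rad / MPC_KM    # geometric distance, Mpc

# e.g. v = 800 km/s, a = 1e-7 km/s^2, theta = 2e-9 rad -> ~100 Mpc
```

No standard candle enters this chain, which is why the megamaser result is independent of the distance ladder.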

In their latest work, the team refined their distance measurements to four galaxies, at distances ranging from 168 million light-years to 431 million light-years. Combined with previous distance measurements of two other galaxies, their calculations produced a value for the Hubble Constant of 73.9 kilometers per second per megaparsec.

“Testing the standard model of cosmology is a really challenging problem that requires the best-ever measurements of the Hubble Constant. The discrepancy between the predicted and measured values of the Hubble Constant points to one of the most fundamental problems in all of physics, so we would like to have multiple, independent measurements that corroborate the problem and test the model. Our method is geometric, and completely independent of all others, and it reinforces the discrepancy,” said Dom Pesce, a researcher at the Center for Astrophysics | Harvard and Smithsonian, and lead author on the latest paper.

“The maser method of measuring the expansion rate of the universe is elegant, and, unlike the others, based on geometry. By measuring extremely precise positions and dynamics of maser spots in the accretion disk surrounding a distant black hole, we can determine the distance to the host galaxies and then the expansion rate. Our result from this unique technique strengthens the case for a key problem in observational cosmology.” said Mark Reid of the Center for Astrophysics | Harvard and Smithsonian, and a member of the Megamaser Cosmology Project team.

“Our measurement of the Hubble Constant is very close to other recent measurements, and statistically very different from the predictions based on the CMB and the standard cosmological model. All indications are that the standard model needs revision,” said Braatz.

Astronomers have various ways to adjust the model to resolve the discrepancy. Some of these include changing presumptions about the nature of dark energy, moving away from Einstein’s cosmological constant. Others look at fundamental changes in particle physics, such as changing the numbers or types of neutrinos or the possibilities of interactions among them. There are other possibilities, even more exotic, and at the moment scientists have no clear evidence for discriminating among them.

“This is a classic case of the interplay between observation and theory. The Lambda CDM model has worked quite well for years, but now observations clearly are pointing to a problem that needs to be solved, and it appears the problem lies with the model,” Pesce said.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Source:  National Radio Astronomy Observatory (NRAO)


Media Contact:

Dave Finley, Public Information Officer
(575) 835-7302

dfinley@nrao.edu

“The Megamaser Cosmology Project. XIII. Combined Hubble Constant Constraints,” D. W. Pesce et al., Astrophysical Journal Letters, 2020 Feb. 26 [https://iopscience.iop.org/article/10.3847/2041-8213/ab75f0, preprint: https://arxiv.org/abs/2001.09213].


Saturday, November 02, 2019

A Crisis in Cosmology

W. M. Keck Observatory's AO system was used for the first time to obtain the Hubble Constant by observing three gravitationally lensed systems, including HE0435-1223 (pictured).

Maunakea, Hawaii – A group of astronomers led by University of California, Davis has obtained new data that suggest the universe is expanding more rapidly than previously thought.

The study comes on the heels of a hot debate over just how fast the universe is ballooning; measurements thus far are in disagreement.

The team’s new measurement of the Hubble Constant, or the expansion rate of the universe, involved a different method. They used NASA’s Hubble Space Telescope (HST) in combination with W. M. Keck Observatory’s Adaptive Optics (AO) system to observe three gravitationally-lensed systems. This is the first time ground-based AO technology has been used to obtain the Hubble Constant.

“When I first started working on this problem more than 20 years ago, the available instrumentation limited the amount of useful data that you could get out of the observations,” says co-author Chris Fassnacht, Professor of Physics at UC Davis. “In this project, we are using Keck Observatory’s AO for the first time in the full analysis. I have felt for many years that AO observations could contribute a lot to this effort.”

The team’s results are published in the latest online issue of the Monthly Notices of the Royal Astronomical Society.

To rule out any bias, the team conducted a blind analysis; during the processing, they kept the final answer hidden from even themselves until they were convinced that they had addressed as many possible sources of error as they could think of. This prevented them from making any adjustments to get to the “correct” value, avoiding confirmation bias. 

“When we thought that we had taken care of all possible problems with the analysis, we unblind the answer with the rule that we have to publish whatever value that we find, even if it’s crazy. It’s always a tense and exciting moment,” says lead author Geoff Chen, a graduate student at the UC Davis Physics Department.

The unblinding revealed a value that is consistent with Hubble Constant measurements taken from observations of “local” objects close to Earth, such as nearby Type Ia supernovae or gravitationally-lensed systems; Chen’s team used the latter objects in their blind analysis. 

The team’s results add to growing evidence that there is a problem with the standard model of cosmology, which shows the universe was expanding very fast early in its history, then the expansion slowed down due to the gravitational pull of dark matter, and now the expansion is speeding up again due to dark energy, a mysterious force.


An artist’s depiction of the standard model of cosmology
Credit: BICEP2 Collaboration/CERN/NASA

This model of the expansion history of the universe is assembled using traditional Hubble Constant measurements, which are taken from “distant” observations of the cosmic microwave background (CMB) – leftover radiation from the Big Bang when the universe began 13.8 billion years ago.

Recently, many groups began using varying techniques and studying different parts of the universe to obtain the Hubble Constant and found that the value obtained from “local” versus “distant” observations disagree.

“Therein lies the crisis in cosmology,” says Fassnacht. “While the Hubble Constant is constant everywhere in space at a given time, it is not constant in time. So, when we are comparing the Hubble Constants that come out of various techniques, we are comparing the early universe (using distant observations) vs. the late, more modern part of the universe (using local, nearby observations).”

This suggests that either there is a problem with the CMB measurements, which the team says is unlikely, or the standard model of cosmology needs to be changed in some way using new physics to correct the discrepancy.

Methodology

Using Keck Observatory’s AO system with the Near-Infrared Camera, second generation (NIRC2) instrument on the Keck II telescope, Chen and his team obtained local measurements of three well-known lensed quasar systems: PG1115+080, HE0435-1223, and RXJ1131-1231.

Quasars are extremely bright, active galaxies, often with massive jets powered by a supermassive black hole ravenously eating material surrounding it. 

Though quasars are often extremely far away, astronomers are able to detect them through gravitational lensing, a phenomenon that acts as nature’s magnifying glass. When a sufficiently massive galaxy closer to Earth gets in the way of light from a very distant quasar, the galaxy can act as a lens; its gravitational field warps space itself, bending the background quasar’s light into multiple images and making it look extra bright.

At times, the brightness of the quasar flickers, and since each image corresponds to a slightly different path length from quasar to telescope, the flickers appear at slightly different times for each image – they don’t all arrive on Earth at the same time. 

With HE0435-1223, PG1115+080, and RXJ1131-1231, the team carefully measured those time delays, which are inversely proportional to the value of the Hubble Constant. This allows astronomers to decode the light from these distant quasars and gather information about how much the universe has expanded during the time the light has been on its way to Earth.

Multiple lensed quasar images of HE0435-1223 (left), PG1115+080 (center), and RXJ1131-1231 (right).
Image credit: G. Chen, C. Fassnacht, UC Davis




“One of the most important ingredients in using gravitational lensing to measure the Hubble Constant is sensitive and high-resolution imaging,” said Chen. “Up until now, the best lens-based Hubble Constant measurements all involved using data from HST. When we unblinded, we found two things. First, we had consistent values with previous measurements that were based on HST data, proving that AO data can provide a powerful alternative to HST data in the future. Secondly, we found that combining the AO and HST data gave a more precise result.”

Next Steps

Chen and his team, as well as many other groups all over the planet, are doing more research and observations to further investigate. Now that Chen’s team has shown that Keck Observatory’s AO system is just as powerful as HST, astronomers can add this methodology to their toolbox of techniques for measuring the Hubble Constant.

“We can now try this method with more lensed quasar systems to improve the precision of our measurement of the Hubble Constant. Perhaps this will lead us to a more complete cosmological model of the universe,” says Fassnacht.



About NIRC2

The Near-Infrared Camera, second generation (NIRC2) works in combination with the Keck II adaptive optics system to obtain very sharp images at near-infrared wavelengths, achieving spatial resolutions comparable to or better than those achieved by the Hubble Space Telescope at optical wavelengths. NIRC2 is probably best known for helping to provide definitive proof of a central massive black hole at the center of our galaxy. Astronomers also use NIRC2 to map surface features of solar system bodies, detect planets orbiting other stars, and study detailed morphology of distant galaxies.

About Adaptive Optics

W. M. Keck Observatory is a distinguished leader in the field of adaptive optics (AO), a breakthrough technology that removes the distortions caused by the turbulence in the Earth’s atmosphere. Keck Observatory pioneered the astronomical use of both natural guide star (NGS) and laser guide star adaptive optics (LGS AO) on large telescopes and current systems now deliver images three to four times sharper than the Hubble Space Telescope at near-infrared wavelengths. Keck AO has imaged the four massive planets orbiting the star HR8799, measured the mass of the giant black hole at the center of our Milky Way Galaxy, discovered new supernovae in distant galaxies, and identified the specific stars that were their progenitors. Support for this technology was generously provided by the Bob and Renee Parsons Foundation, Change Happens Foundation, Gordon and Betty Moore Foundation, Mt. Cuba Astronomical Foundation, NASA, NSF, and W. M. Keck Foundation.

About W.M. Keck Observatory

The W. M. Keck Observatory telescopes are among the most scientifically productive on Earth. The two, 10-meter optical/infrared telescopes on the summit of Maunakea on the Island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectrometers, and world-leading laser guide star adaptive optics systems.

Some of the data presented herein were obtained at Keck Observatory, which is a private 501(c)(3) non-profit organization operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.

The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Native Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.


Sunday, September 29, 2019

High value for Hubble constant from two gravitational lenses

Images of the two lensing systems used in this study, B1608+656 and RXJ1131. Labels A to D denote images of the background quasar, G1 and G2 are lens galaxies on the left, G is the lens galaxy on the right with a satellite galaxy S. © MPA

The expansion rate of the Universe today is described by the so-called Hubble constant, and different techniques have come to inconsistent results about how fast our Universe is actually expanding. An international team led by the Max Planck Institute for Astrophysics (MPA) has now used two gravitational lenses as new tools to calibrate the distances to hundreds of observed supernovae and thus measure a fairly high value for the Hubble constant. While the uncertainty is still relatively large, the value is higher than that inferred from the cosmic microwave background.

Gravitational lensing describes the fact that light is deflected by large masses in the Universe, just as a glass lens bends a light ray on Earth. In recent years, cosmologists have increasingly used this effect to measure distances by exploiting the fact that, in a multiple-image system, an observer will see photons arriving from different directions at different times due to the difference in optical path lengths for the various images. This measurement thus gives a physical size of the lens, and comparing it to an observed size in the sky gives a geometric distance estimate called the “angular diameter distance”. Such distance measurements in astronomy are the basis for measurements of the Hubble constant, named after the astronomer Edwin Hubble, who found a linear relationship between the redshifts (and thus the expansion velocity of the Universe) and the distances of galaxies (which was also independently found by Georges Lemaître).

“There are multiple ways to measure distances in the Universe, based on our knowledge of the object whose distance is being measured,” explains Sherry Suyu (MPA/TUM), who is a world expert in using gravitational lensing for determining the Hubble constant. “A well-known technique is the luminosity distance using supernovae explosions; however, they must adopt an external calibrator of the absolute distance scale. With our analysis of gravitational lens systems we can provide a completely new, independent anchor for this method.”

Derived Hubble diagram, using the two lens systems (red and yellow dots) as anchors for the 740 supernovae in the JLA dataset. © MPA

The team used two strong gravitational lens systems B1608+656 and RXJ1131 (see Figure 1). In each of these systems, there are four images of a background galaxy with one or two foreground galaxies acting as lenses. This relatively simple configuration allowed the scientists to produce an accurate lensing model and thus measure the angular diameter distances to a precision of 12 to 20% per lens. These distances were then applied as anchors to 740 supernovae in a public catalogue (Joint Light-curve Analysis dataset).

“By construction, our method is insensitive to the details of the assumed cosmological model,” states Inh Jee (MPA), who did the statistical analysis and combined the supernova data with the lensing distances. “We get a fairly high result for the Hubble constant and although our measurement has a larger uncertainty than other direct methods, this is dominated by statistical uncertainty because we use only two lens systems.”

The value for the Hubble constant based on this new analysis is about 82 +/- 8 km/s/Mpc. This is consistent with values derived from the distance ladder method, which uses different anchors for the supernova data, as well as with values from time-delay distances, where other gravitational lensing systems were used to determine the Hubble constant directly.

“Again this new measurement confirms that there seems to be a systematic difference in values for the Hubble constant derived directly from local or intermediate sources and indirectly from the cosmic microwave background,” states Eiichiro Komatsu, director at MPA, who oversaw this project. “If confirmed by further measurements, this discrepancy would call for a revision of the standard model of cosmology.”

Variability in B1608+656
Variability observed in the lens system B1608+656; the labels are the same as in Figure 1. The arrows denote a flare seen at different times in the four images.





Contacts

Inh Jee
jee1213@MPA-Garching.mpg.de

Sherry Suyu
Scientific Staff 2015
suyu@mpa-garching.mpg.de

Hannelore Hämmerle
Press officer 3980
hanne@mpa-garching.mpg.de



Original publication 

1. I. Jee, S. H. Suyu, E. Komatsu, et al., "A measurement of the Hubble constant from angular diameter distances to two gravitational lenses," Science, 13 September 2019.


Tuesday, July 16, 2019

New Hubble Constant Measurement Adds to Mystery of Universe's Expansion Rate

Galaxies Used to Refine the Hubble Constant
Credit: NASA, ESA, W. Freedman (University of Chicago), ESO, and the Digitized Sky Survey

Astronomers have made a new measurement of how fast the universe is expanding, using an entirely different kind of star than previous endeavors. The revised measurement, which comes from NASA's Hubble Space Telescope, falls in the center of a hotly debated question in astrophysics that may lead to a new interpretation of the universe's fundamental properties.

Scientists have known for almost a century that the universe is expanding, meaning the distance between galaxies across the universe is becoming ever more vast every second. But exactly how fast space is stretching, a value known as the Hubble constant, has remained stubbornly elusive.

Now, University of Chicago professor Wendy Freedman and colleagues have a new measurement for the rate of expansion in the modern universe, suggesting the space between galaxies is stretching faster than scientists would expect. Freedman's is one of several recent studies that point to a nagging discrepancy between modern expansion measurements and predictions based on the universe as it was more than 13 billion years ago, as measured by the European Space Agency's Planck satellite.

As more research points to a discrepancy between predictions and observations, scientists are considering whether they may need to come up with a new model for the underlying physics of the universe in order to explain it. 

"The Hubble constant is the cosmological parameter that sets the absolute scale, size and age of the universe; it is one of the most direct ways we have of quantifying how the universe evolves," said Freedman. "The discrepancy that we saw before has not gone away, but this new evidence suggests that the jury is still out on whether there is an immediate and compelling reason to believe that there is something fundamentally flawed in our current model of the universe."

In a new paper accepted for publication in The Astrophysical Journal, Freedman and her team announced a new measurement of the Hubble constant using a kind of star known as a red giant. Their new observations, made using Hubble, indicate that the expansion rate for the nearby universe is just under 70 kilometers per second per megaparsec (km/sec/Mpc). One parsec is equivalent to a distance of 3.26 light-years.

This measurement is slightly smaller than the value of 74 km/sec/Mpc recently reported by the Hubble SH0ES (Supernovae H0 for the Equation of State) team using Cepheid variables, pulsating stars whose regular pulsation periods correspond to their intrinsic peak brightness. This team, led by Adam Riess of the Johns Hopkins University and Space Telescope Science Institute, Baltimore, Maryland, recently reported refining their observations to the highest precision to date for their Cepheid distance measurement technique.

How to Measure Expansion

A central challenge in measuring the universe's expansion rate is that it is very difficult to accurately calculate distances to distant objects.

In 2001, Freedman led a team that used distant stars to make a landmark measurement of the Hubble constant. The Hubble Space Telescope Key Project team measured the value using Cepheid variables as distance markers. Their program concluded that the value of the Hubble constant for our universe was 72 km/sec/Mpc.

But more recently, scientists took a very different approach: building a model based on the rippling structure of light left over from the big bang, which is called the Cosmic Microwave Background. The Planck measurements allow scientists to predict how the early universe would likely have evolved into the expansion rate astronomers can measure today. Scientists calculated a value of 67.4 km/sec/Mpc, in significant disagreement with the rate of 74.0 km/sec/Mpc measured with Cepheid stars.

Astronomers have looked for anything that might be causing the mismatch. "Naturally, questions arise as to whether the discrepancy is coming from some aspect that astronomers don't yet understand about the stars we're measuring, or whether our cosmological model of the universe is still incomplete," Freedman said. "Or maybe both need to be improved upon."

Freedman's team sought to check their results by establishing a new and entirely independent path to the Hubble constant using an entirely different kind of star.

Certain stars end their lives as a very luminous kind of star called a red giant, a stage of evolution that our own Sun will experience billions of years from now. At a certain point, the star undergoes a catastrophic event called a helium flash, in which the temperature rises to about 100 million degrees and the structure of the star is rearranged, which ultimately dramatically decreases its luminosity. 

Because red giants reach nearly the same peak brightness at this stage, astronomers can measure the apparent brightness of these stars in different galaxies and use it to determine their distance.

The Hubble constant is calculated by comparing distance values to the apparent recessional velocity of the target galaxies — that is, how fast galaxies seem to be moving away. The team's calculations give a Hubble constant of 69.8 km/sec/Mpc — straddling the values derived by the Planck and Riess teams.
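The arithmetic behind this method can be sketched in a few lines. The tip of the red-giant branch has a nearly uniform absolute magnitude (roughly -4 in the I band), so measuring its apparent magnitude in a galaxy yields a distance, and dividing the galaxy's recession velocity by that distance gives the Hubble constant. The magnitudes and velocity below are made-up illustrative values, not Freedman's measurements:

```python
import math

# Approximate I-band absolute magnitude of the tip of the red-giant branch.
M_TRGB = -4.05

m_tip = 30.4       # hypothetical apparent magnitude of the tip in some galaxy
velocity = 5400.0  # hypothetical recession velocity of that galaxy, km/s

mu = m_tip - M_TRGB                 # distance modulus m - M
d_mpc = 10 ** ((mu + 5) / 5) / 1e6  # distance modulus -> distance in Mpc
H0 = velocity / d_mpc               # Hubble constant, km/s/Mpc
print(H0)
```

In practice the measurement averages over many galaxies and corrects the velocities for local motions, but each galaxy contributes an estimate of exactly this form.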

"Our initial thought was that if there's a problem to be resolved between the Cepheids and the Cosmic Microwave Background, then the red giant method can be the tie-breaker," said Freedman.

But the results do not appear to strongly favor one answer over the other, the researchers say, although they align more closely with the Planck results.

NASA's upcoming mission, the Wide Field Infrared Survey Telescope (WFIRST), scheduled to launch in the mid-2020s, will enable astronomers to better explore the value of the Hubble constant across cosmic time. WFIRST, with Hubble-like resolution and a field of view 100 times greater, will provide a wealth of new Type Ia supernovae, Cepheid variables, and red giant stars to fundamentally improve distance measurements to galaxies near and far.

The Hubble Space Telescope is a project of international cooperation between NASA and ESA (European Space Agency). NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.




Contact:  

Ray Villard
Space Telescope Science Institute, Baltimore, Maryland
410-338-4514

villard@stsci.edu

Louise Lerner
University of Chicago, Chicago, Illinois
773-702-8366

louise@uchicago.edu



Related Links: