Exploring the early universe with the Square Kilometer Array

Andrea Ferrara



Our understanding of the Universe has remarkably improved over the last two decades. In addition to strengthening and refining the foundations of the Big Bang standard model (the Hubble expansion, the cosmic microwave background (CMB) and the abundance of light elements), Cosmology has given us several genuine, irrefutable surprises. The most prominent among these are that i) we live in a flat Universe, thus corroborating predictions of inflationary theories; ii) 84.4% of cosmic matter consists of some yet unknown dark matter particles; iii) the Hubble expansion is accelerating, possibly due to a non-clustering, negative-pressure fluid called dark energy; iv) black holes a billion times the mass of the Sun were already in place just 1 billion years after the Big Bang; v) distant galaxies (visible as the tiny red dots in the deep Hubble Ultra Deep Field image in fig. 1), and products of their stellar activity such as Gamma-Ray Bursts, have recently been detected at redshifts z > 8, corresponding to a cosmic age shorter than 0.65 Gyr. Do we have a complete, exact theory to explain this puzzling experimental evidence? Unfortunately, not yet. Gaps remain in the basic cosmological scenario and admittedly some of them are rather large.

1 A brief history of the early universe

It is now widely believed that the Universe underwent an inflationary phase early on that sourced the nearly scale-invariant primordial density perturbations, which led to the large-scale structure we observe today. Inflation requires non-standard physics, and at present there is no consensus on the mechanism that made the Universe inflate; moreover, only a few constraints on the numerous inflationary models are available from observations.

Expansion forced matter to cool and therefore to pass through a number of symmetry-breaking phase transitions (GUT, electroweak, hadrosynthesis, nucleosynthesis) until recombination, in which electrons and protons find it energetically favorable to combine into hydrogen (or helium) atoms. Due to the large value of the cosmic photon-to-baryon ratio, η ≈ 10⁹, this process is delayed until the temperature drops to 0.29 eV, about 50 times lower than the binding energy of hydrogen. Cosmic recombination proceeds far out of equilibrium because of a “bottleneck” at the n = 2 level of hydrogen: atoms can only reach the ground state via slow processes: two-photon decay or Lyman-alpha (Lyα) resonance escape. As the recombination rate rapidly becomes smaller than the expansion rate, the reaction cannot reach completion and a small relic abundance, ≈ 5 × 10⁻⁴, of free electrons is left behind after recombination at z = 1078 ± 11.
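The delay of recombination quoted above can be estimated with a back-of-the-envelope Saha-equilibrium calculation. The sketch below assumes a photon-to-baryon ratio of 1.6 × 10⁹ and works in natural units (ħ = c = kB = 1, energies in eV); being a pure equilibrium estimate, it ignores the out-of-equilibrium n = 2 bottleneck discussed in the text:

```python
import math

ME    = 510998.95   # electron mass in eV
B_H   = 13.6        # hydrogen binding energy in eV
ETA   = 1.6e9       # assumed photon-to-baryon ratio
ZETA3 = 1.2020569   # Riemann zeta(3)

def ionization_fraction(T):
    """Saha-equilibrium ionization fraction x at photon temperature T (in eV)."""
    n_gamma = (2.0 * ZETA3 / math.pi**2) * T**3      # photon number density, eV^3
    n_b = n_gamma / ETA                              # baryon number density
    S = (ME * T / (2.0 * math.pi))**1.5 * math.exp(-B_H / T) / n_b
    return 0.5 * (-S + math.sqrt(S * S + 4.0 * S))   # root of x^2/(1-x) = S

# Bisect for the temperature at which half of the hydrogen has recombined
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if ionization_fraction(mid) > 0.5:
        hi = mid
    else:
        lo = mid
T_rec = 0.5 * (lo + hi)
print(f"recombination at T ~ {T_rec:.2f} eV ~ B_H/{B_H / T_rec:.0f}")
```

The half-ionization point lands near 0.3 eV, a factor of ~40–50 below the 13.6 eV binding energy, illustrating how the enormous photon excess postpones recombination.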

Right after the recombination epoch, the Universe entered a phase called the Dark Ages, during which no significant radiation sources existed and hydrogen remained largely neutral. The small inhomogeneities in the dark matter density field present during the recombination epoch started growing via gravitational instability, giving rise to highly nonlinear structures, i.e. collapsed haloes. It should, however, be kept in mind that most of the baryons at high redshifts do not reside within these haloes; rather, they are found as diffuse gas in the InterGalactic Medium (IGM). The collapsed haloes form potential wells whose depth depends on their mass, and the baryons then “fall” into these wells. If the mass of the halo is high enough (i.e., the potential well is deep enough), the gas is able to dissipate its energy, cool via atomic or molecular transitions and fragment within the halo. This produces conditions appropriate for the gas to condense and form stars and galaxies. Once these luminous objects form, the Dark Ages are over.

The first population of luminous stars and galaxies generated ultraviolet (UV) radiation through nuclear reactions. In addition to galaxies, perhaps an early population of accreting black holes (quasars) and the decay or annihilation of dark matter particles also generated some amount of UV light. The UV radiation contains photons with energies > 13.6 eV, which are able to ionize hydrogen atoms in the surrounding medium, a process known as “cosmic reionization”. Reionization is thus the second major change in the ionization state of hydrogen (and helium) in the Universe (the first being recombination).

According to our current understanding, reionization started around the time when the first structures formed, currently believed to be at z ≈ 20−30. In the simplest picture, each source first produced an ionized region around it; these regions then overlapped and percolated into the IGM. This era is usually called the “pre-overlap” phase. The overlap process appears to have been completed around z ≈ 6−8, at which point the neutral hydrogen fraction fell to values < 10⁻⁴. Following that, a never-ending “post-reionization” (or “post-overlap”) phase started, implying that the Universe is largely ionized at the present epoch. Reionization by UV radiation is also accompanied by heating: electrons released by photo-ionization deposit the photon energy in excess of 13.6 eV into the IGM. The IGM reheating can expel the gas and/or suppress cooling in low-mass haloes – thus, there is a considerable reduction in cosmic star formation right after reionization. In addition, the nuclear reactions within the stellar sources potentially alter the chemical composition of the medium if the star dies in an energetic explosion (supernova). This can change the star formation mode at later stages.

A 3D visualization of the characteristic Swiss-cheese structure of reionization, obtained using supercomputer simulations, is shown in fig. 2. The process of reionization is of immense importance in the study of structure formation since, on the one hand, it is a direct consequence of the formation of the first structures and luminous sources while, on the other, it affects subsequent structure formation. Observationally, the reionization era represents a phase of the Universe which is only starting to be probed; the earlier phases are probed by the CMB, while the post-reionization phase (z < 6) is probed by various observations based on quasars, galaxies and other sources. In addition to the importance outlined above, the study of the Dark Ages and cosmic reionization has acquired increasing significance over the last few years because of the availability of good-quality data in different areas. As we will see, the study of cosmic reionization will be the province of the world’s largest radio telescope, the forthcoming Square Kilometer Array.

2 From darkness to light

Reionization is caused by H-ionizing radiation produced by a variety of astrophysical sources powered by nuclear (stars), gravitational (quasars) or particle physics (dark matter annihilation/decay) processes. Although an additional contribution can come from bremsstrahlung emission by hot plasma produced by supernova explosions or virialization shocks during galaxy formation, a consensus has been reached that such a component is negligible.

Stars
For star formation to begin, a sufficient amount of cold dense gas must accumulate. Primordial gas clouds formed in dark matter halos with virial temperature Tvir ~ 1000 K and mass ~10⁶ M⊙ (so-called “mini-halos”) at z = 20−30. Most scholars agree on two facts: i) the Bonnor-Ebert mass in primordial gas is large, suggesting that metal-free (Pop III) stars are likely to be massive, ~30 M⊙; ii) only one star per mini-halo can form, because the far-UV radiation from a single massive star is sufficient to destroy all the H2 in the parent gas cloud. In principle, fragmentation into a binary or multiple star system would be possible, but simulations based on self-consistent cosmological initial conditions do not show this. Interestingly, however, simulations starting from non-cosmological initial conditions have yielded multiple cloud cores as a result of spin-induced disk formation and subsequent break-up. It remains to be seen whether such conditions arise from realistic cosmological initial conditions. Ultimately, the final stellar properties (and hence their ionizing power) are determined by both the parent cloud mass and gas accretion, in turn regulated by radiation pressure from the proto-star. Magnetic fields, if sufficiently strong, can also play an important role. These are complex and not yet understood phenomena.

Quasars
Completing reionization by z ≈ 6, as suggested by observations, requires either a hard spectrum for the sources or an increase of the comoving emissivity evolution rate beyond that epoch. QSOs, with their hard spectra, would be optimal candidates. Unfortunately, the rapid decline in their number causes them to fall short of the emissivity requirements for reionization by a factor of ~50, unless there is a yet undiscovered large population of very low luminosity Active Galactic Nuclei. An intriguing alternative is a population of mini-quasars, i.e. systems powered by intermediate-mass black holes (IMBHs) with masses 20–10⁵ M⊙; they might be of foremost relevance as possible building blocks of the supermassive (~10⁹ M⊙) black holes unveiled by the Sloan Digital Sky Survey at z ~ 6, although this is a matter of debate.

Two different origins for mini-quasars seem plausible: i) collapse into BHs of sufficiently massive first stars, followed by accretion and coalescence with other BHs; ii) direct collapse of dense, low-angular-momentum gas driven by turbulence or gravitational instabilities. Early IMBHs accreting as mini-quasars could be important sources of partial, early reionization, especially due to the hardness of their spectra, which extend up to the X-ray band. For this reason, mini-quasars are also indicated as contributors to the X-ray background, and preliminary interesting bounds on their density have been computed based on the level of the unresolved fraction of such radiation. Finally, mini-quasars can also heat the IGM, thus influencing the HI 21 cm line emission/absorption and the formation of the first structures.

Dark matter
Most weakly-interacting DM candidates are predicted to self-annihilate into Standard Model particles, thus injecting energy into the surrounding medium, with an enhanced rate in the density concentrations produced by gravitationally collapsed cosmic structures (an extreme example being represented by dark stars, stellar structures powered by DM rather than nuclear energy release). The ionizations and heating can be produced both by high-energy photons directly emitted in the annihilation of two DM particles (with an energy of the order of the DM mass: typically from tens of GeV to tens of TeV) and by the lower-energy photons produced by Inverse Compton scattering of CMB photons on the energetic e⁺ and e⁻ from DM annihilations. The latter turns out to be by far the most important process due to the rapid energy decrease of photo-ionization cross-sections. Primary photo-electrons deposit their energy in the IGM through a very complex energy shower, freeing many more electrons. The crucial parameters that determine the amount of (ionizing) energy emitted by DM annihilations are the mass of the DM particle, mχ, and the average annihilation cross-section 〈σv〉. The benchmark values for these quantities are typically taken to be mχ ~ tens to thousands of GeV and 〈σv〉thermal ~ 3 × 10⁻²⁶ cm³ s⁻¹, the values for which the relic abundance of DM particles matches, via the thermal freeze-out process, the observed ΩDM h² = 0.110 ± 0.005.
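The quoted benchmark cross-section can be checked against the standard freeze-out rule of thumb, ΩDM h² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / 〈σv〉. The numerical coefficient is an order-of-magnitude approximation assumed for this sketch, not a value taken from the text:

```python
SIGMA_V_THERMAL = 3e-26  # cm^3 s^-1, the "thermal" cross-section quoted in the text

def relic_abundance(sigma_v):
    """Order-of-magnitude Omega_DM h^2 for a thermally produced WIMP
    (freeze-out rule of thumb, coefficient ~3e-27 cm^3 s^-1)."""
    return 3e-27 / sigma_v

print(relic_abundance(SIGMA_V_THERMAL))  # ~0.1, close to the observed 0.110
```

The agreement at the 10% level is what makes the “thermal” value a natural benchmark for reionization studies.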

The decay of DM is less constrained: since DM could be coupled only gravitationally to Standard Model particles, its spin can be anything: 0, 1/2, 1, 3/2, 2, ... Furthermore, decay rates suppressed by helicity factors such as (me/mχ)² are still phenomenologically interesting; DM decays might involve new particles (e.g. gravitino decays into gluinos and quarks). As for mini-quasars, DM annihilation/decay not only constitutes an important source of ionizing photons but, by heating the IGM, also affects the HI 21 cm line radiation and first galaxy formation.

2.1 Feedback processes

Once the first luminous sources appeared, the energy they deposited in various forms affected the subsequent cosmic star/galaxy/quasar formation history and the IGM evolution, in addition to reionization. Such energy deposition, loosely referred to as feedback, acts either to reduce (negative feedback) or to increase (positive feedback) the formation rate of additional sources. Feedback comes in three different physical flavors: radiative, chemical, and mechanical. For an in-depth description and a complete list of references see the most updated reviews given at the end.

Radiative feedback
The first sources, by emitting copious amounts of UV radiation from their massive, metal-free stars, photo-dissociate H2 molecules (the main coolant in primordial gas) and photo-ionize/heat the surrounding gas. The H2 photo-dissociation occurs through the two-step Solomon process: H2 + γ → H2*; H2* → 2H, where γ is a Lyman-Werner UV photon with energy 11.2 eV < E < 13.6 eV, and H2* denotes an excited molecule. Photo-dissociation leads to a negative feedback effect, as the gas in mini-halos, lacking a channel to dissipate its thermal energy, is prevented from collapsing and forming stars. Stars can eventually form at later times, when “sterile” (i.e. starless) mini-halos merge into larger halos with Tvir > 10⁴ K, in which cooling can proceed via Lyα line excitation. UV ionizing radiation with E > 13.6 eV instead heats the ionized gas to a temperature ~10⁴ K set by the balance between radiative cooling and photo-heating. Heated gas evaporates from halos with Tvir < 10⁴ K, thus decreasing or suppressing their ability to form stars – again a negative feedback. However, when an ionization front propagates into a collapsing structure, the enhanced fraction of free electrons catalyzes H2 formation (positive feedback). As photo-dissociation and -ionization act at the same time, determining whether the net feedback is positive or negative is quite difficult. In practice, the conclusion depends on the intensity and spectrum of the radiation, radiative transfer, the presence of HD molecules, density inhomogeneities, and ionization-induced shocks. All these aspects leave us with unanswered questions.

Chemical feedback
If the first stars were massive, those with mass M < 40 M⊙ or 140 < (M/M⊙) < 260 died as core-collapse or pair-instability supernovae, respectively. Their highly heavy-element enriched ejecta are dispersed in the surrounding gas, thereby enhancing its radiative cooling rate via collisionally excited heavy-element lines. A rapidly cooling gas is prone to fragmentation as long as its temperature decreases with increasing density. Fragmentation stops when the effective adiabatic index (dlog p/dlog ρ) > 1; the typical fragmentation mass scale is set by the local Bonnor-Ebert mass, MBE = 510 (T/200 K)^(3/2) (n/10⁴ cm⁻³)^(−1/2) M⊙. It is believed that the plateau + power-law shape of the stellar mass distribution, or Initial Mass Function (IMF), is predominantly set by such thermal fragmentation. Hence, metals produced by the first massive stars suppress their own formation and enable the formation of enriched (sub-)solar mass stars, i.e. Pop II stars, resulting in a dramatic decrease (about 25 times) in the ionizing power of stellar sources, with obvious implications for reionization. This process is referred to as chemical feedback; note that its nature is intrinsically local and negative. The precise value of the critical metallicity, Zcr, the threshold for the onset of chemical feedback, is debated. Studies so far have only constrained this fundamental parameter to 10⁻⁶ < (Zcr/Z⊙) < 5 × 10⁻³, with the lowest limit implied by models additionally including dust cooling. Further complexity is added by the CMB radiation, which affects the atomic/molecular level populations and heats the dust grains.
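The fragmentation mass scale quoted above is easy to evaluate. A minimal sketch using the fitting formula exactly as given in the text; the two temperature/density pairs are illustrative assumptions contrasting warm primordial gas with metal/dust-cooled gas:

```python
def bonnor_ebert_mass(T, n):
    """Local Bonnor-Ebert mass in solar masses, from the fitting formula
    M_BE = 510 (T/200 K)^(3/2) (n/1e4 cm^-3)^(-1/2) M_sun.
    T: gas temperature in K; n: number density in cm^-3."""
    return 510.0 * (T / 200.0)**1.5 * (n / 1e4)**-0.5

# Warm, H2-cooled primordial gas: large fragments -> massive Pop III stars
print(bonnor_ebert_mass(200.0, 1e4))   # 510 M_sun
# Colder, denser metal/dust-cooled gas: sub-solar fragments -> Pop II stars
print(bonnor_ebert_mass(10.0, 1e8))    # well below 1 M_sun
```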

Mechanical feedback
Mechanical feedback is associated with kinetic and thermal energy injection into the surrounding gas from stellar winds and supernova explosions. It has been invoked to explain a large number of galaxy formation issues and therefore its importance can hardly be overstated. For reionization, though, we can safely focus on two main consequences of mechanical feedback: i) quenching/suppression of the host galaxy star formation rate via supernova-driven ejection of gas; ii) large-scale metal dispersal and mixing. Both effects produce a decrease in the integrated amount of ionizing photons produced by a galaxy, either by reducing the number of stars at early times, or by inducing a transition from Pop III to Pop II stars in enriched cosmic regions, resulting in fewer ionizing photons per baryon turned into stars. Mechanical feedback poses staggering difficulties, as gas/metal flows are complex due to, e.g., interactions between supernova blastwaves, cosmological accretion and halo mergers. In addition, the mixing of heavy elements into the gas is a demanding multi-scale problem, as laminar flows, turbulent mixing and diffusive processes dominate at kiloparsec, hundred-parsec and the smallest scales, respectively. Thus, the problem remains largely unsolved and awaits progress in computational and physical modeling.

3 Basic physics of the 21 cm line emission from neutral hydrogen

How can we explore the universe in the Dark Ages and understand the physical processes, outlined above, that control cosmic reionization? The sources shining at those times (first stars, mini-galaxies, black holes and annihilating dark matter) are far too feeble to be directly detected even by the next generation of space-borne telescopes (the James Webb Space Telescope, the successor to Hubble) and by future-generation 30 m class ground-based telescopes (e.g., the E-ELT). Fortunately, quantum mechanics comes to our rescue. As in the Dark Ages the overwhelming majority of the cosmic baryons (and hence hydrogen) is in the diffuse IGM, it is reasonable to think that any viable electromagnetic emission mechanism from such a huge amount of mass could in principle be detectable. Such a mechanism is provided by the 21 cm line emission of neutral hydrogen atoms.

The 21 cm line is associated with the hyperfine transition (often referred to as the “spin-flip” transition, as the proton and electron spins go from parallel to anti-parallel) between the triplet and the singlet levels of the neutral hydrogen ground state. The ratio between the number densities of hydrogen atoms in the singlet (n0) and triplet (n1) ground hyperfine levels can be written as n1/n0 = 3 exp(−T0/TS), where T0 = 0.068 K corresponds to the transition energy, and TS is the spin temperature defined by this expression.
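The numbers in this paragraph follow directly from the transition frequency. A short check, assuming only the standard physical constants:

```python
import math

H = 6.62607015e-34    # Planck constant, J s
KB = 1.380649e-23     # Boltzmann constant, J/K
NU_21 = 1420.4057e6   # 21 cm rest-frame frequency, Hz

T0 = H * NU_21 / KB   # hyperfine transition energy in temperature units
print(f"T0 = {T0:.4f} K")   # ~0.068 K, as quoted in the text

def triplet_singlet_ratio(T_spin):
    """n1/n0 = 3 exp(-T0/TS): statistical weight 3 times a Boltzmann factor."""
    return 3.0 * math.exp(-T0 / T_spin)

# For any realistic spin temperature TS >> T0 the ratio saturates just below 3
print(triplet_singlet_ratio(100.0))
```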

In the presence of the CMB alone, TS reaches thermal equilibrium with the CMB temperature Tγ = 2.73 (1 + z) K on a short time-scale, making the HI undetectable either in emission or absorption. However, collisions and scattering of Lyα photons (the so-called Wouthuysen-Field process or Lyα pumping) can couple the spin temperature to the gas kinetic temperature, TK, making the neutral hydrogen visible in absorption or emission depending on whether the gas is colder or hotter than the CMB. The spin temperature can be calculated as

(1)   $T_{S}^{-1} = \frac{T_{\gamma}^{-1} + x_{c}T_{K}^{-1} +x_{\alpha}T_{c}^{-1}}{1 +x_{c} + x_{\alpha} }$

where Tc is the color temperature, which can be very well approximated by TK, and xα and xc are the coupling coefficients corresponding to Lyα scattering and collisions, respectively. The key observable quantity is the differential brightness temperature between a neutral hydrogen patch and the CMB. This is defined as

(2)   $T_{b}(\nu) \approx \frac{T_{S} - T_{\gamma}(z)}{1+z}\,\tau_{\nu_{0}} \approx 9\, x_{HI} (1+\delta) (1+z)^{1/2} \left[ 1 - \frac{T_{\gamma}(z)}{T_{S}} \right] \left[ \frac{H(z)/(1+z)}{dv_{\parallel}/dr_{\parallel}} \right]\ \mathrm{mK}$

where τν0 is the optical depth of the neutral IGM at 21(1 + z) cm, xHI is the IGM neutral fraction at redshift z, δ is the density contrast with respect to the mean cosmic density, H(z) is the Hubble parameter, and dv∥/dr∥ is the comoving gradient of the line-of-sight component of the velocity. The above equation well illustrates an important point: the 21 cm signal carries information not only on the thermodynamic state of the gas (kinetic temperature and ionization fraction), but also on the density field (and hence on its fluctuations) and on the cosmological parameters in the Dark Ages.
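Equations (1) and (2) can be evaluated numerically. The sketch below assumes illustrative values for the coupling coefficients, approximates Tc with TK, and ignores the density and velocity terms (δ = 0, velocity factor set to 1):

```python
import math

def spin_temperature(T_gamma, T_K, x_c, x_alpha):
    """Eq. (1), with the colour temperature Tc approximated by T_K."""
    num = 1.0 / T_gamma + x_c / T_K + x_alpha / T_K
    return (1.0 + x_c + x_alpha) / num

def brightness_temperature(z, x_HI, delta, T_S, velocity_term=1.0):
    """Eq. (2) in mK; velocity_term is (H/(1+z))/(dv/dr), ~1 on average."""
    T_gamma = 2.73 * (1.0 + z)
    return 9.0 * x_HI * (1.0 + delta) * math.sqrt(1.0 + z) \
           * (1.0 - T_gamma / T_S) * velocity_term

# Cold neutral gas at z = 20 with Lyman-alpha coupling switched on
# (x_alpha = 1, x_c = 0.1 are illustrative values): TS drops below Tgamma
z = 20.0
TS = spin_temperature(T_gamma=2.73 * (1 + z), T_K=10.0, x_c=0.1, x_alpha=1.0)
tb_cold = brightness_temperature(z, x_HI=1.0, delta=0.0, T_S=TS)
# Strongly heated IGM: TS >> Tgamma, emission saturating at ~9 sqrt(1+z) mK
tb_hot = brightness_temperature(z, x_HI=1.0, delta=0.0, T_S=1e4)
print(tb_cold, tb_hot)  # negative (absorption) vs. positive (emission)
```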

The final step to predict the expected brightness temperature of the signal involves the determination of the neutral fraction and of the kinetic (and hence spin) temperature of the gas. This can be obtained by solving the coupled ionization and energy equations. Conceptually, these equations express the detailed balance of the relevant process rates. Schematically,

Rate of change of the H ionization fraction = (Ionization rate - Recombination rate);
Rate of change of the gas temperature = (Heating rate - Adiabatic cooling).

The ionization rate is determined by the ionizing photon emission from sources (stars, quasars, dark matter annihilations); this process is balanced by electron-proton recombinations in the IGM. Similarly, the IGM gas temperature is the result of the balance between photo-heating (including X-rays from sources/annihilations and Compton heating by the CMB), and adiabatic cooling due to the Hubble expansion. This simple set of equations provides a very accurate description of the thermal state of the IGM and hence allows us to predict the expected signal.
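The structure of this calculation can be illustrated with a deliberately over-simplified toy integration of the two balance equations. All rates here are made-up constants in arbitrary inverse-time units, chosen only to exhibit the form of the equations, not a real IGM model:

```python
# Toy model of the two schematic balance equations:
#   dx/dt = ionization rate - recombination rate
#   dT/dt = heating rate    - adiabatic cooling
# All coefficients are invented illustrative numbers.

def evolve(x_HII=1e-4, T=10.0, gamma_ion=0.3, alpha_rec=0.01,
           heat=50.0, hubble=0.05, dt=0.01, steps=2000):
    """Forward-Euler integration of the ionization/temperature balance."""
    for _ in range(steps):
        # Photoionization of neutral gas minus electron-proton recombinations
        dx = gamma_ion * (1.0 - x_HII) - alpha_rec * x_HII**2
        # Photo-heating of the remaining neutral gas minus adiabatic cooling
        dT = heat * (1.0 - x_HII) - 2.0 * hubble * T
        x_HII = min(1.0, x_HII + dx * dt)
        T = T + dT * dt
    return x_HII, T

x_final, T_final = evolve()
print(x_final, T_final)  # the toy gas ends up highly ionized and heated
```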

Hence, the key physics of the 21 cm signal is embedded in eq. (2). Two points are worth noticing in this regard. First, the signal depends on frequency and hence, via the cosmological expansion, on redshift. This has the advantage that, by tuning the receiver frequency, one can perform a “tomographic” study of reionization, i.e. slice the cosmic gas along the redshift direction to study the reionization evolution. For example, the 21 cm line rest-frame frequency of 1420.4057 MHz, emitted at z = 9, well within the epoch of reionization, is observed on Earth at approximately 142 MHz. The low-frequency band around 50–300 MHz has remained virtually unexplored so far and represents a genuine new window on the universe. There are reasons for that. Apart from the obvious presence of Radio Frequency Interference caused by human emissions (radio and TV stations, satellite communications, power plants etc.), the actual limiting factor has been the presence of very strong foregrounds. These are produced both by the ionosphere and by a plethora of astrophysical sources (synchrotron emission from the cosmic gas, radio galaxies and other sources). The collective foreground in the relevant frequency band has a brightness temperature of ~200–10⁴ K, thus exceeding the sought-after signal by more than a factor of 10⁵! Foregrounds are therefore the key challenge and limiting factor for experiments, like the SKA, aimed at studying the 21 cm signal from the Dark Ages.
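The frequency-redshift mapping used for tomography is simply ν_obs = ν_rest/(1 + z). A minimal helper:

```python
NU_REST = 1420.4057  # MHz, 21 cm rest-frame frequency

def observed_frequency(z):
    """Observed frequency (MHz) of the 21 cm line emitted at redshift z."""
    return NU_REST / (1.0 + z)

def redshift(nu_obs):
    """Redshift probed when the receiver is tuned to nu_obs (MHz)."""
    return NU_REST / nu_obs - 1.0

print(observed_frequency(9))            # ~142 MHz, as in the text
print(redshift(50.0), redshift(200.0))  # the low-frequency band reaches z ~ 27
```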

To this goal, luckily, a second point comes to our rescue. The signal described by eq. (2) is popularly called the “global” 21 cm signal; more technically, it represents the monopole term of the signal. The 21 cm sky, however, contains far more information than that carried by the monopole alone. Extra information is stored in the fluctuations around the global signal. This is in almost perfect analogy with the more familiar case of the CMB radiation, which, on top of the mean intensity given by the 3 K black body radiation, shows small angular fluctuations on a wide range of scales. These fluctuations encode information on the physical state of matter at the recombination epoch. Similarly, the 21 cm fluctuations, once observed, will tell us about the physical state of hydrogen, largely representative of all baryons, in the Dark Ages and in the Epoch of Reionization. But there is more than that. When performing tomographic studies, the brightness temperature of the contaminating foregrounds varies very slowly with frequency. On the contrary, on the small spatial scales relevant to fluctuations in the signal, the frequency dependence is very strong. This is because the transitions from neutral to ionized regions (the bubbles) are very sharp in physical space, translating into an analogously strong gradient with frequency when performing tomography. Such different behavior can be efficiently used to separate the signal from smoothly varying astrophysical foregrounds.

Nevertheless, the milli-kelvin intensity of the signal will limit our ability to produce detailed tomographic images of the 21 cm sky. This is because, even with SKA, the telescope noise is comparable to or exceeds the signal except on rather large scales. For this reason, the current trend has been to concentrate on statistical quantities, readily extractable from low signal-to-noise maps, to constrain the IGM properties. The prime example is the power spectrum of the fractional perturbation of the (zero-mean) brightness temperature field, δ21 = (Tb − 〈Tb〉)/〈Tb〉, where 〈Tb〉 represents the global mean signal. The power spectrum, P21(k), as a function of the wavenumber k is obtained by Fourier transforming δ21. More often, though, the non-dimensional power spectrum, Δ²21(k) = T0² k³ P21(k)/(2π²), T0 being the mean global signal (eq. (2)), is used.
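The quantity k³P(k)/(2π²) can be estimated from any simulated δ21 cube by Fourier transforming and averaging in spherical shells of k. A sketch, applied here to a white-noise toy field rather than a real reionization simulation; multiplying the result by the mean signal squared would give the mK² units used below:

```python
import numpy as np

def dimensionless_power_spectrum(delta21, box_size):
    """Spherically averaged Delta^2(k) = k^3 P(k)/(2 pi^2) of a 3D field.
    delta21: zero-mean fractional fluctuation field; box_size in Mpc."""
    n = delta21.shape[0]
    vol = box_size**3
    dk = np.fft.fftn(delta21) * (box_size / n)**3     # FT with volume units
    pk3d = np.abs(dk)**2 / vol                        # P(k) estimator, Mpc^3
    freqs = np.fft.fftfreq(n, d=box_size / n) * 2.0 * np.pi
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, 10)
    k_mid, delta2 = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (kmag >= lo) & (kmag < hi)
        if mask.any():
            k = kmag[mask].mean()
            k_mid.append(k)
            delta2.append(k**3 * pk3d[mask].mean() / (2.0 * np.pi**2))
    return np.array(k_mid), np.array(delta2)

# White-noise toy field in a (100 Mpc)^3 box, 32^3 cells
rng = np.random.default_rng(0)
field = rng.normal(size=(32, 32, 32))
k, d2 = dimensionless_power_spectrum(field - field.mean(), box_size=100.0)
print(k, d2)   # for flat (white-noise) P(k), Delta^2 rises as k^3
```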

An approximate expression for the power spectrum can be obtained analytically thanks to the fact that the density fluctuations on large scales are close to linear. However, a number of complicated radiative transfer effects, and the non-linearity associated with the formation of the sources, render such an approach impractical. At the other extreme, one can use fully hydrodynamical cosmological simulations simultaneously following the formation of the sources and the propagation of UV and X-ray ionizing radiation in the IGM on sufficiently large (close to 1 Gpc = 3×10²⁷ cm) scales, where reionization effects are evident and measurable. Such an approach, although in principle feasible, is overwhelmingly computationally expensive even for the largest supercomputers currently available. An excellent compromise has been to perform “semi-numerical” computations. These entail a realization of a model universe using linear theory to determine the locations of luminous sources and ionized bubbles, and an approximate treatment of the radiative transfer, from which both images and the power spectrum can be built. Figure 3 shows the evolution of the 21 cm power spectrum obtained from this type of calculation.

At high redshifts, when all the IGM is essentially neutral, the spectrum closely resembles the underlying dark matter spectrum (black curve in fig. 3). However, as bubbles appear they boost the power on intermediate/large scales corresponding to their own sizes (several Mpc). At the same time small scales are depressed with respect to the dark matter spectrum: this is because within ionized bubbles the ionized fraction is uncorrelated with the small-scale density perturbations. These processes imprint a characteristic S-shape to the (nondimensional) power spectrum with the normalization decreasing as redshift and neutral fraction decrease.

This is the fundamental signal that we are after and that we plan to use to unveil the physics of the Dark Ages and cosmic reionization. This is precisely why we are building the world’s largest radio telescope, i.e. the Square Kilometer Array.

4 The Square Kilometer Array

The Square Kilometer Array (SKA) is an endeavor that began more than 20 years ago, in September 1993. At that time the International Union of Radio Science (URSI) established the Large Telescope Working Group to begin a worldwide effort to develop the scientific goals and technical specifications for a next-generation radio observatory. The project had a long incubation period, partly as a result of the very large costs, which would call for a planetary-scale international collaboration. Moreover, little was known about the possibility of mapping the reionization epoch with the redshifted 21 cm line, and that small hope was confronted with the fact that the most distant galaxies known at the time were located only at redshifts z < 1. Thus, studying the universe at redshifts close to 10 was considered virtually impossible.

However, the situation changed dramatically towards the very end of the last century, when new search techniques, instrumentation advances and powerful numerical simulations combined to push the explored cosmic frontier back in time, close to the end of the reionization epoch. This triggered widespread enthusiasm, eventually providing solid foundations for the motivations behind the SKA project as well. Nevertheless, the cost problem still awaited a solution. The next step took place in the year 2000 when, at the International Astronomical Union meeting in Manchester, UK, a Memorandum of Understanding to establish the International Square Kilometer Array Steering Committee (ISSC) was signed by representatives of eleven countries (Australia, Canada, China, Germany, India, Italy, the Netherlands, Poland, Sweden, the United Kingdom, and the United States). The agreement was subsequently reorganized several times until December 2011, when it was established that the leadership of the project was to be transferred to the SKA Organization, located at the historic Jodrell Bank Observatory in Cheshire, UK; that date can be considered the actual start of the project.

SKA is a highly advanced, new-generation radio interferometer array. The total collecting area of the telescope will be approximately 1 000 000 square meters, i.e. one square kilometer, as the facility name itself recalls. This area is spread among thousands of dishes and up to a million antennas. The dishes are used for the high-frequency bands (0.35–14 GHz); they will be located at a remote, radio-quiet desert site in South Africa (see map in fig. 4). This sub-instrument is called SKA-MID. The antennas (SKA-LOW, see a visual representation of the instrument in fig. 5) will instead be located at a site with similar characteristics in Western Australia (fig. 4), and will work in the frequency range 50–350 MHz. Hence SKA-LOW is the perfect instrument to investigate the Dark Ages and cosmic reionization by looking at the redshifted 21 cm line signal.

So what is the difference between a single-dish radio telescope and an interferometer array like SKA? Why is it preferable to build millions of individual antennas? The answer is simple. In an interferometer, the receiving units (dishes or antennas) are separated by physical distances that can be adjusted for each specific experiment. The time difference between the arrival of a radio signal at each receiver is measured precisely, and computers can then work out how to combine these signals to synthesize the equivalent of a single dish whose size matches the distance between the telescopes.

The advantage is clear. Interferometric techniques can emulate a telescope with a size equal to the maximum separation between the telescopes in the array (or in configurations using only a subset of them). Thus, instead of building a gigantic dish, which would pose tremendous technical and mechanical challenges, one can exploit the flexibility that the interferometric configuration brings: the system can act either as one gigantic telescope, or as multiple smaller telescopes, and any combination in between.
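The synthesized resolution follows the usual diffraction limit θ ≈ λ/B, with B the baseline. A minimal sketch; the baseline values below are illustrative, not actual SKA configurations:

```python
import math

C = 299792458.0  # speed of light, m/s

def angular_resolution_arcsec(freq_mhz, baseline_km):
    """Diffraction-limited resolution theta ~ lambda/B of an interferometer."""
    wavelength = C / (freq_mhz * 1e6)
    theta_rad = wavelength / (baseline_km * 1e3)
    return math.degrees(theta_rad) * 3600.0

# At 142 MHz (21 cm line from z = 9): a single 15 m dish vs. a 100 km baseline
print(angular_resolution_arcsec(142.0, 0.015))   # degrees-scale beam
print(angular_resolution_arcsec(142.0, 100.0))   # arcsecond-scale beam
```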

Of course, nothing comes for free. The computational load required to synthesize the signals from all receivers is tremendous. The supercomputers controlling SKA will have to correlate, very rapidly, signals across thousands of telescopes connected by thousands of kilometers of fiber optic cables. It is estimated that the required computing power will be in excess of 100 petaflops (one hundred thousand million million floating point operations per second) of raw processing power. Such performance is indeed astounding: this computational demand will largely surpass even the already humongous supercomputing power in use at the LHC. At the same time, it clarifies why the SKA could not have been built earlier.

SKA will be one of the four major astrophysical facilities operating in the next decade, together with the James Webb Space Telescope (the successor to Hubble) to be launched in 2018, and the Atacama Large Millimeter Array (ALMA), a very high frequency interferometer located in Chile, which has just entered its scientific production phase. These instruments will be complemented by the European Extremely Large Telescope (E-ELT). Such a 30 m telescope (and others around the world, like the TMT) will allow us to pierce through the most distant universe. All together, these four instruments will certainly revolutionize our view of the universe.

5 The SKA legacy

It would be impossible here to even touch upon the large number of key issues in astrophysics and cosmology on which SKA will shed new light. As we have been mostly concentrating on the Dark Ages and cosmic reionization, studied via the 21 cm line emission, I will restrict myself to briefly discuss what advances SKA will bring in these areas.

Before I do that, it is important to recall that SKA will contribute to many other scientific areas. It will be able to map the neutral hydrogen of more than a billion individual galaxies up to a redshift of about 1. It will clarify how magnetic fields are generated throughout cosmic evolution, and their role in the formation of structures like galaxy clusters. It will perform strong-field tests of gravity with pulsars and black holes. Proto-planetary disks will be imaged at unprecedented (sub-AU) spatial resolution, while studying the amino acids and other complex molecules they contain. It will eventually be used as an ultra-sensitive SETI (Search for ExtraTerrestrial Intelligence) machine. In all these areas the discovery space to which SKA will give us access is enormous.

Figure 6 shows a past light-cone simulation of a volume of the universe extending in depth from redshift z = 5 to z = 86.5 and about 1.6 Gpc wide. The colored strip in the upper panel shows the evolution of the brightness temperature (in mK) of the global signal (eq. (2)). The sequence of phases is clearly visible: the Dark Ages, where the signal appears in absorption (i.e. negative Tb) once the first sources of light provide the Lyα coupling via the Wouthuysen-Field effect; the heating of the IGM, initially caused by X-rays; and finally the completion of reionization by UV photons. All these phases are marked by well-defined features in the power-spectrum amplitude.

This is shown in the bottom panel for wavenumbers easily accessible to SKA, k = 0.1 Mpc⁻¹ (solid curve) and k = 0.5 Mpc⁻¹ (dotted curve).

Such a predicted spectrum can be easily detected by SKA throughout. To fix ideas, for k = 0.1 Mpc⁻¹ and assuming 2000 h of observation time (a relatively modest requirement), SKA will achieve a thermal noise as low as 0.4 mK² at z = 20, and of 0.1 mK² at z = 16. By comparing these figures with the spectrum in fig. 6, we can conclude that SKA will be able to precisely measure the 21 cm signal throughout the reionization epoch and perhaps also provide detailed tomographic images. We will then be able to see cosmic reionization in progress, a tremendous achievement. This in turn will provide fresh and fascinating insights into these remote epochs of the universe, and into the ancestors of present-day galaxies and black holes that inhabited it.