Invited Talks

Session 1: Earth system modeling

M. Klimenko (West Department of Pushkov Institute of Terrestrial Magnetism, Ionosphere and Radio Wave Propagation RAS, Russia)
Entire Atmosphere Global model (EAGLE): development, first version and preliminary results

We have developed the Entire Atmosphere Global model (EAGLE), which combines the chemistry-climate model (CCM) HAMMONIA and the Global Self-Consistent Model of the Thermosphere, Ionosphere and Protonosphere (GSM TIP). The model calculates the atmospheric state from the ground up to 15 Earth radii, including the ionosphere and plasmasphere, interactively simulating the main physical, radiative, chemical, and dynamical processes in the lower, middle and upper atmosphere. The model treats the thermodynamic interaction of charged and neutral components, photochemical ionospheric processes, and the excitation of the dynamo electric field under the influence of tidal winds. It also includes the production of nitrogen oxides in the thermosphere by energetic electron precipitation, and can realistically describe the electric field distribution and other ionospheric parameters close to the geomagnetic equator. Because the vertical model domain starts at the ground, the influence of the lower atmosphere on the thermosphere/ionosphere system can be studied. We apply EAGLE in assimilation mode, using nudging to meteorological data below 60 km, for January 2009. We compare the output of short-term (several days to a month) ensemble model runs against observations to evaluate the model and to estimate the sensitivity of the ionosphere to the introduced lower-atmospheric perturbations. In the future, EAGLE can be used for nowcasting and short-term prediction of the thermosphere/ionosphere system, with implications for planning satellite missions, forecasting space weather and climate, estimating the risks posed by explosive solar events, and the operation of global positioning systems.
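
As a rough illustration of the nudging used below 60 km, the sketch below shows a generic Newtonian relaxation of a model field toward reanalysis; the relaxation time, field names and grid are placeholders, not EAGLE's actual implementation.

```python
import numpy as np

def nudged_tendency(x_model, x_reanalysis, f_model, tau=6 * 3600.0):
    """Newtonian relaxation (nudging): add a term that pulls the model
    state toward reanalysis with relaxation time tau (6 h here, illustrative)."""
    return f_model + (x_reanalysis - x_model) / tau

def step(x, x_obs, f, z, dt=900.0, z_top_nudge=60e3):
    """One explicit time step of a single prognostic field on a vertical column,
    nudged only below ~60 km and free-running above."""
    tend = np.where(z <= z_top_nudge, nudged_tendency(x, x_obs, f), f)
    return x + dt * tend

# toy usage: a 0-150 km column with placeholder model/reanalysis fields
z = np.linspace(0.0, 150e3, 151)
x = np.zeros_like(z)
x_obs = np.sin(z / 20e3)
f = np.zeros_like(z)
x_new = step(x, x_obs, f, z)
```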

E. Mareev (Institute of Applied Physics, Russian Academy of Sciences, Russia)
Global electric circuit

E. Volodin (Institute of Numerical Mathematics RAS, Russia)
Simulation of climate system with climate model of INM RAS

A review of the climate models developed at INM RAS is provided. Participation in the climate model intercomparison project CMIP6 is considered. Special attention is paid to the simulation of the observed climate changes during recent decades with an ensemble of model runs. The role of external forcing and internal variability in the observed climate changes is discussed.

L. Zhou (East China Normal University, China)
A Global Circuit Model with Stratospheric and Tropospheric Aerosol

The atmospheric electric circuit, in which the key parameter, the vertical current Jz, is modulated by external and internal drivers and can affect cloud microphysics, is a key factor for understanding the physical relationship between solar activity and Earth's climate (Tinsley, 2000, 2004; Tinsley and Yu, 2004). The relevant processes are (a) electrically enhanced contact ice nucleation and (b) electrically induced changes in the indirect aerosol effect. Diagram 1 illustrates the role of the electric circuit in the solar activity-Earth climate chain. Cosmic rays modulated by solar activity dominate the ion-pair production rate in the lower atmosphere, except in the air close to the surface, while aerosols in the troposphere and stratosphere determine the ion loss. Together these are the key factors that set the character of the atmospheric electric circuit. Although a few global circuit models have been constructed by Roble and Hays (1979), Makino and Ogawa (1985), and Sapkota and Varshneya (1990), there is a need for a new model with a more accurate treatment of the external effects of energetic particles from space and of the internal effects of the spatial and temporal variation of aerosol concentrations in the troposphere and stratosphere. Moreover, several studies and observations show that at high latitudes sulfate in the downward branch of the Brewer-Dobson circulation can condense, through ion-mediated or homogeneous nucleation, to form an ultrafine aerosol layer that cannot be ignored in the global circuit; yet it has not been included in global circuit models. With our new model, which incorporates these improvements, we discuss the response of the global circuit to changes in the cosmic ray flux due to solar activity, and the variation of the sensitivity of the global circuit to solar activity due to volcanic activity.
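
To make the chain from ion production to the fair-weather current concrete, here is a minimal, self-contained sketch of the standard bookkeeping (steady-state ion balance, conductivity, columnar resistance, ohmic return current); all profiles and coefficients are toy placeholders, not the model presented in the talk.

```python
import numpy as np

# illustrative vertical grid and placeholder profiles (not the actual model)
z = np.linspace(0.0, 60e3, 601)                 # altitude [m]
q = 1e7 * np.exp(z / 8e3)                       # ion-pair production [m^-3 s^-1], toy
alpha = 1.6e-12                                 # ion-ion recombination [m^3 s^-1]
beta_N = 1e-3 * np.exp(-z / 3e3)                # ion-aerosol attachment rate [s^-1], toy
mu = 1.5e-4 * np.exp(z / 10e3)                  # ion mobility [m^2 V^-1 s^-1], toy
e = 1.6e-19                                     # elementary charge [C]

# steady-state ion balance q = alpha*n^2 + beta_N*n, solved for n
n = (-beta_N + np.sqrt(beta_N**2 + 4.0 * alpha * q)) / (2.0 * alpha)

sigma = 2.0 * e * n * mu                        # conductivity of positive + negative ions
R_col = np.trapz(1.0 / sigma, z)                # columnar resistance [Ohm m^2]
V_ionosphere = 250e3                            # ionospheric potential [V], typical value
Jz = V_ionosphere / R_col                       # fair-weather current density [A m^-2]
print(f"columnar resistance ~ {R_col:.2e} Ohm m^2, Jz ~ {Jz:.2e} A/m^2")
```

Increasing the aerosol attachment rate (the beta_N profile) lowers n and sigma, raises the columnar resistance, and so reduces Jz, which is the qualitative mechanism discussed in the abstract.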


Session 2: Modeling and prediction of geophysical extremes

T. Bodai (University of Reading, UK)
Predictability of fat-tailed extremes

We conjecture, for a linear stochastic differential equation, that the predictability of threshold exceedances (I) improves with the event magnitude when the noise is a so-called correlated additive-multiplicative (CAM) noise, no matter the nature of the stochastic innovations; (II) also improves when the noise is purely additive and obeys a distribution that decays fast, i.e., faster than a power law; and (III) deteriorates only when the additive noise distribution follows a power law. Predictability is measured by a summary index of the receiver operating characteristic (ROC) curve. We provide support for our conjecture, complementing reports in the existing literature on (II), by a set of case studies. Calculations of the prediction skill are conducted in some cases by a direct, numerical, time-series-data-driven approach, and in other cases by an analytical or semi-analytical approach that we have developed. Our results may be relevant to atmospheric dynamics, where CAM noise arises as a result of stochastic parametrization.
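
A minimal numerical sketch of the kind of experiment described, under assumed parameters: integrate a linear SDE with CAM-type noise, define exceedance events above a high threshold, use the current state as the predictor at a fixed lead time, and score it with the area under the ROC curve.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100_000, dt=0.01, E=0.3, sigma_add=1.0):
    """Euler-Maruyama for dx = -x dt + (E*x + sigma_add) dW, a linear SDE
    with CAM-type noise (all parameters illustrative)."""
    x = np.zeros(n)
    for i in range(1, n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[i] = x[i - 1] - x[i - 1] * dt + (E * x[i - 1] + sigma_add) * dW
    return x

def roc_auc(predictor, event):
    """Rank-based ROC area: probability that a randomly chosen event precursor
    outranks a randomly chosen non-event precursor."""
    order = np.argsort(predictor)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(predictor) + 1)
    n1, n0 = event.sum(), (~event).sum()
    return (ranks[event].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

x = simulate()
lead = 50                                 # lead time in steps
threshold = np.quantile(x, 0.99)          # "extreme" exceedance level
event = x[lead:] > threshold              # does an exceedance occur lead steps later?
predictor = x[:-lead]                     # predict from the current state
print("ROC AUC:", roc_auc(predictor, event))
```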

B. Goswami (Potsdam Institute for Climate Impact Research, Germany)
Abrupt transitions in the Indian Summer Monsoon during the last glacial

The detection of abrupt transitions is an important problem in paleoclimate studies. The spatiotemporal propagation of such transition events, involving so-called 'leads' or 'lags', critically informs our understanding of teleconnections between different climatic subsystems. However, paleoclimate data sets pose several challenges in fixing the timing of past events and inferring such teleconnection patterns, particularly because of the errors in dating the climate signal in the paleoclimate archive. Here, we present a novel, 'uncertainty-aware' framework to detect transitions in paleoclimate records. This approach utilises a new representation of time series as a sequence of probability density functions and thereafter uses the modular structure of the probabilities of recurrence to infer dynamical transitions. We apply our method to a new data set from northeastern India which records the intensity of the Indian Summer Monsoon (ISM) during part of the last glacial. A subset of the detected transitions coincides with known events in Greenland, providing clear evidence of a teleconnection between North Atlantic climate and the ISM. Additional detected events are possibly due to further factors influencing the ISM and need to be explored in detail as a next step.
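
As a much-simplified illustration of the recurrence idea only (leaving out the uncertainty-aware, density-based representation that is the core of the talk), the sketch below builds a recurrence matrix for a scalar series and tracks the recurrence probability in sliding windows, whose changes flag candidate transitions; the threshold and window length are arbitrary choices.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 if states i and j are closer than eps (scalar series case)."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_probability(R, window):
    """Fraction of recurrent pairs inside sliding windows along the diagonal;
    drops in this quantity are one simple indicator of a dynamical transition."""
    n = R.shape[0]
    return np.array([R[i:i + window, i:i + window].mean()
                     for i in range(n - window)])

# toy series with an abrupt shift in the mean at t = 500
x = np.concatenate([np.random.default_rng(1).normal(0.0, 1.0, 500),
                    np.random.default_rng(2).normal(3.0, 1.0, 500)])
R = recurrence_matrix(x, eps=0.5)
p_rec = recurrence_probability(R, window=100)   # dips around the shift
```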


Session 3: Global climate variability at different time scales

P. Ditlevsen (Niels Bohr Institute, University of Copenhagen, Denmark)
Predictability, waiting times and tipping points in the climate

It is taken for granted that the limited predictability of the initial value problem, i.e., weather prediction, and the predictability of the statistics are two distinct problems. Predictability of the first kind in a chaotic dynamical system is limited due to the critical dependence on initial conditions. Predictability of the second kind is possible in an ergodic system, where either the dynamics is known and the phase space attractor can be characterized by simulation, or the system can be observed for such long times that the statistics can be obtained from temporal averaging, assuming that the attractor does not change in time. For the climate system the distinction between predictability of the first and the second kind is fuzzy. On the one hand, the horizon of weather prediction is not related to the inverse of the Lyapunov exponent of the system, which is determined by the much shorter time scales in the turbulent boundary layer; these time scales are effectively averaged out on the time scales of the flow in the free atmosphere. On the other hand, turning to climate change predictions, the time scale on which the system is considered quasi-stationary, such that the statistics can be predicted as a function of an external parameter, say atmospheric CO2, is still short in comparison with the slow oceanic dynamics; on these time scales the state of the slow variables still depends on the initial conditions. This fuzzy distinction between predictability of the first and of the second kind is related to the lack of scale separation between fast and slow components of the climate system. The non-linear nature of the problem furthermore opens the possibility of multiple attractors, or multiple quasi-steady states. As the paleoclimatic record shows, the climate has been jumping between different quasi-stationary climates. The question is: can such tipping points be predicted? This is a new kind of predictability (the third kind). The Dansgaard-Oeschger climate events observed in ice core records are analyzed in order to answer some of these questions. The result of the analysis points to a fundamental limitation in predictability of the third kind.

S. Kravtsov (University of Wisconsin-Milwaukee, USA)
Global-Scale Multidecadal Variability Missing in State-of-the-Art Climate Models

The reliability of future global warming projections depends on how well climate models reproduce the observed climate change over the twentieth century. In this regard, deviations of the model-simulated climate change from observations, such as the recent “pause” in global warming, have received considerable attention. Here we use a new objective filtering method to show that such discrepancies between observed and simulated climate trends on decadal and longer time scales are ubiquitous throughout the twentieth century, and have a coherent spatiotemporal structure suggestive of a pronounced global multidecadal oscillation altogether absent from model simulations. We argue that climate model development efforts should strive to alleviate the present substantial mismatch between model predictions and observed multidecadal climate variability.

S. Lovejoy (McGill University, Canada)
Using scaling for macroweather predictions and climate projections

It was recently found that the accepted picture of atmospheric variability was in error by a large factor. Rather than being dominated by a series of narrow-scale-range quasi-oscillatory processes over an unimportant quasi-white-noise “background”, the variance turns out to be dominated by a few wide-range scaling processes, occasionally interspersed with superposed quasi-oscillatory processes. Although the classical picture implied that successive million-year global temperature averages would differ by mere micro-Kelvins, this implausibility had not been noticed. The new picture inverts the roles of background and foreground and involves four or five wide-range scaling regimes. Exploiting the scaling symmetries leads to numerous applications; in this talk we review recent developments for macroweather forecasting (the regime of decreasing fluctuations from about 10 days to 20 years) and for climate projections.

Macroweather forecasting: temporal scaling implies a huge memory in the system that can be exploited for macroweather forecasts. When coupled with a space-time symmetry (space-time statistical factorization), the two imply that if a long enough time series is available at a specific place then, irrespective of the spatial correlations, data from other locations cannot be used to improve the forecast there: the correlation information from, for example, teleconnections is already contained in the past values of the series. The result is the Stochastic Seasonal and Interannual Prediction System (StocSIPS), which exploits these symmetries to perform long-term forecasts. Compared with traditional global circulation models (GCMs), it has the advantage of forcing predictions to converge to the real-world climate rather than the model climate. It extracts the internal variability (weather noise) directly from past data and does not suffer from model drift or poor model seasonality. Other practical advantages include much lower computational cost, no need for downscaling, and no ad hoc post-processing.

Climate projections: GCM projections for the next century suffer from a wide model-to-model dispersion and hence uncertainty. In addition, when compared with the historical record, the multimodel mean of 32 CMIP5 simulations has a warm bias of about 15%. We discuss a historically based method that exploits the scaling symmetry of the dynamics and its near linearity to make projections with much smaller uncertainties. When these empirically determined Scaling Climate Response Functions (SCRFs) are applied to the CMIP5 models, the method is generally quite accurate but nevertheless has small scenario-dependent biases. We also show how a hybrid GCM-SCRF method that combines the CMIP5 GCMs with the scaling historical method can improve both: in effect, the historical data correct the GCM biases while the future GCM projections correct the SCRF biases. The overall result has both lower uncertainty and significantly reduced biases. Following the Representative Concentration Pathway scenarios, this hybrid approach allows us to make improved global and regional surface temperature projections up to the year 2100.
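
A toy sketch of a scaling (power-law) response function convolved with a forcing history, in the spirit of the SCRF approach; the exponent, the forcing ramp and the normalization are illustrative assumptions, not the calibrated values used by the authors.

```python
import numpy as np

def scaling_response(forcing, H=0.4, lam=0.5, dt=1.0):
    """Toy scaling response: T(t) = lam * sum_{s<=t} G(t-s) F(s) dt with a
    power-law kernel G(u) ~ u**(H-1); H and lam are placeholder values."""
    t = np.arange(1, len(forcing) + 1) * dt
    G = t**(H - 1.0)
    G /= G.sum() * dt                                  # normalize truncated kernel
    return lam * np.convolve(forcing, G)[:len(forcing)] * dt

# placeholder forcing: a linear ramp reaching ~4 W/m^2 by 2100
years = np.arange(1880, 2101)
forcing = np.clip((years - 1880) / 220.0, 0, None) * 4.0
T = scaling_response(forcing)                          # toy temperature response
```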


Session 4: Mathematics of geophysical flows

T. Bodai (University of Reading, UK)
The forced response of the climate system: Three applications

We frame the forced response of the climate system in terms of an ensemble that represents the so-called snapshot/pullback attractor, and explore the implications and power of this approach. 1. We expose the hysteresis of extremes under an annual cycle in a toy model, the Lorenz 84 model. 2. In an intermediate-complexity model, the Planet Simulator, we apply response theory, predicting the ensemble mean, to a geoengineering problem; this also permits a precise definition of the side effects of geoengineering, and we diagnose some significant side effects. 3. Teleconnections as cross-correlations can also be redefined in the ensemble-based framework by evaluating the correlations over ensemble members. As a specific example, we studied the teleconnection between the El Nino-Southern Oscillation (ENSO) and the Indian summer monsoon in ensemble simulations from state-of-the-art climate models, the Max Planck Institute Earth System Model (MPI-ESM) and the Community Earth System Model (CESM). We detect an increase in the strength of the teleconnection in the MPI-ESM under historical forcing between 1890 and 2005, which is in contrast with the scientific consensus. In the MPI-ESM no similar increase is present between 2006 and 2099 under the Representative Concentration Pathway 8.5 (RCP8.5) or in a 110-year-long 1-percent pure CO2 scenario, nor is any present in the CESM between 1960 and 2100 under historical forcing and RCP8.5. This is also a puzzling result inasmuch as the historical forcing is the weakest. Accordingly, we evaluated that the static susceptibility of the strength of the teleconnection with respect to radiative forcing (assuming an instantaneous and linear response) is at least three times larger in the historical MPI-ESM ensemble than in the others.
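
A minimal sketch of the ensemble-based redefinition of a teleconnection used in point 3: the correlation is taken across ensemble members at each time, rather than along time; the member count and the data here are random placeholders.

```python
import numpy as np

def ensemble_teleconnection(index_a, index_b):
    """Correlation taken across ensemble members, separately for each time.
    Inputs have shape (n_members, n_times); returns one correlation per time."""
    a = index_a - index_a.mean(axis=0)
    b = index_b - index_b.mean(axis=0)
    return (a * b).mean(axis=0) / (a.std(axis=0) * b.std(axis=0))

# toy example: 63 members, 100 years (shapes only; the data are random placeholders)
rng = np.random.default_rng(0)
nino34 = rng.normal(size=(63, 100))
monsoon_rain = -0.5 * nino34 + rng.normal(size=(63, 100))
r_t = ensemble_teleconnection(nino34, monsoon_rain)   # teleconnection strength vs. time
```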

A. Carrassi (Nansen Environmental and Remote Sensing Center, Norway)
Attribution of climatic events using data assimilation

Data assimilation (DA) methods were originally designed for state estimation, but they are increasingly being applied to the model selection and attribution problems as well. Probabilistic event attribution is the problem of assessing the probability of occurrence of an observed episode under different hypotheses (e.g., different models); a notable example is the causal assessment of episodes of extreme weather or unusual climate conditions. Two quantities are computed: (i) the probability of occurrence, p1, referred to as factual, which represents the probability in the real world; and (ii) the counterfactual probability, p0, in an alternative world that might have occurred had the forcing of interest been absent. The so-called fraction of attributable risk (FAR) is then defined as FAR = 1 - p0/p1, i.e., the change in likelihood of an event that is attributable to the external forcing. The approach widely used to compute the FAR [see, e.g., Stone and Allen, 2005] is very costly, as it uses a large ensemble of model simulations unconstrained by the observations, and it is difficult to implement in a timely, systematic way in the aftermath of a climatic episode. We will show, as a proof of concept, that these obstacles are removed or mitigated by estimating the FAR using DA [Hannart et al., 2016], leading to an efficient DA-based approach to the attribution of climate-related events. Carrassi et al. [2017] introduced a contextual formulation of model evidence (CME) that allows the two concurrent probabilities, p0 and p1, needed to compute the FAR to be estimated. In particular, these authors have shown that the CME can be efficiently computed using a hierarchy of ensemble-based DA procedures. In order to extend the theory for estimating CME using an ensemble Kalman filter with localization, a requirement for ensemble-based DA with high-dimensional models, Metref et al. [2018] developed a new formulation of the CME using domain localization, the domain-localized CME (DL-CME). In this talk we will first define the CME and show that it can be computed using state-of-the-art DA methods. We will then provide examples of its application to the model selection and attribution problems using low-dimensional numerical models and the intermediate-complexity global atmospheric model SPEEDY.
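
For concreteness, a tiny worked example of the FAR formula quoted above (the numbers are purely illustrative):

```python
def far(p0, p1):
    """Fraction of attributable risk: change in event likelihood attributable to the forcing."""
    return 1.0 - p0 / p1

# illustrative numbers only: factual probability 0.10, counterfactual 0.04
print(far(p0=0.04, p1=0.10))   # 0.6, i.e. 60% of the risk attributable to the forcing
```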

S. Dubinkina (Centrum Wiskunde & Informatica, Netherlands)
Relevance of conservation laws for an Ensemble Kalman Filter

Data assimilation is broadly used in atmospheric and ocean science to reduce uncertainty and to correct model error by periodically incorporating information from measurements into the mathematical model. The Ensemble Kalman Filter is an ensemble-based data assimilation method that propagates multiple solutions to approximate the evolution of the probability distribution function. A drawback of the Ensemble Kalman Filter is that its update typically destroys physical conservation laws otherwise preserved by the numerical discretization. Nonetheless, it appears that conservative numerical schemes are essential for good estimates obtained with the Ensemble Kalman Filter.
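
A minimal sketch of a stochastic (perturbed-observation) Ensemble Kalman Filter analysis step, which illustrates the point: the linear update generally does not respect nonlinear invariants (mass, energy, enstrophy) that the forecast model's discretization may conserve. Matrix sizes and variable names are generic, not tied to a particular model.

```python
import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    E: (n_state, n_members) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
    n, m = E.shape
    A = E - E.mean(axis=1, keepdims=True)              # ensemble anomalies
    P = A @ A.T / (m - 1)                              # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return E + K @ (Y - H @ E)                         # updated (analysis) ensemble

# note: this linear increment generally breaks invariants of the dynamics,
# which is exactly the issue studied in the talk
```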

D. Kondrashov (University of California, Los Angeles, USA)
Multiscale Stuart-Landau Emulators: Application to Wind-driven Ocean Gyres

The multiscale variability of the ocean circulation due to its nonlinear dynamics remains a big challenge for theoretical understanding and practical ocean modeling. This presentation demonstrates how the recently developed data-adaptive harmonic decomposition (DAHD) and inverse stochastic modeling techniques [Chekroun and Kondrashov, 2017] make it possible to simulate with high fidelity the main statistical properties of multiscale variability in a coarse-grained eddy-resolving ocean flow. This fully data-driven approach relies on the extraction of frequency-ranked time-dependent coefficients describing the evolution of spatio-temporal DAH modes (DAHMs) in the oceanic flow data. In turn, the time series of these coefficients are efficiently modeled by a universal family of low-order stochastic differential equations stacked per frequency, involving a fixed set of predictor functions and a small number of model coefficients. These SDEs take the form of stochastic oscillators, identified as multilayer Stuart-Landau models (MSLMs), and their use is justified by the theory of Ruelle-Pollicott resonances. The good modeling skill shown by the resulting DAH-MSLM emulators demonstrates the feasibility of using a network of stochastic oscillators for the modeling of geophysical turbulence. In a certain sense, Landau's original quasiperiodic view of turbulence, amended to include stochasticity, may be well suited to describe turbulence.
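
As a toy illustration of the basic building block, the sketch below integrates a single stochastic Stuart-Landau oscillator with an Euler-Maruyama scheme; the multilayer, frequency-stacked DAH-MSLM emulators of the talk are considerably richer, and all parameters here are arbitrary.

```python
import numpy as np

def stuart_landau(mu=0.1, gamma=2 * np.pi, beta=0.5, eps=0.05,
                  dt=1e-3, n=100_000, seed=0):
    """Euler-Maruyama for dz = [(mu + i*gamma) z - (1 + i*beta)|z|^2 z] dt + eps dW,
    a single stochastic Stuart-Landau oscillator (toy parameters)."""
    rng = np.random.default_rng(seed)
    z = np.empty(n, dtype=complex)
    z[0] = 0.1 + 0.0j
    for k in range(1, n):
        drift = (mu + 1j * gamma) * z[k - 1] - (1 + 1j * beta) * abs(z[k - 1])**2 * z[k - 1]
        noise = eps * np.sqrt(dt) * (rng.normal() + 1j * rng.normal())
        z[k] = z[k - 1] + drift * dt + noise
    return z

z = stuart_landau()   # a noisy limit-cycle oscillation; a network of such units,
                      # stacked per frequency, is the emulator's building block
```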


Session 5: Advances in analysis of continuous seismic and acoustic wavefields

G. Beroza (Stanford University, USA)
Data Mining Continuous Seismic Wavefields

A. Obermann (Swiss Seismological Service - ETH Zurich, Switzerland)


Session 6: Dynamics of earthquakes and faults

M. Denolle (Harvard University, USA)
Towards the temporal evolution of the earthquake energy budget

Assuming that the Green's function between the source and the receiver is properly removed, the only direct measurement we can make of the earthquake energy budget is the radiated energy. Fault roughness/geometry or heterogeneity in frictional and pre-stress properties induces local variations in rupture velocity and thus excites high-frequency seismic waves. It is therefore desirable to zoom into the source moment-rate function and explore the temporal evolution of radiated energy. Here, we construct the radiated energy rate from source spectrograms and use empirical Green's functions to remove the 3D far-field path effects. Pseudo-dynamic models allow us to confirm that local variations in rupture velocity and in peak slip rate both contribute to a strong radiated energy rate. We find that seismic radiation is not uniform throughout the rupture, in either time or space. Through multiple examples of subduction zone earthquakes, we observe that large moment release does not systematically radiate seismic energy and that the ratio of energy to moment (scaled energy) varies through time. This step is essential for understanding the temporal evolution of the earthquake energy budget.
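
For orientation, one standard far-field expression (in the spirit of Vassiliou and Kanamori) relating radiated energy to the source time function is recalled below as background; the talk's spectrogram-based construction is a time-resolved generalization of this kind of estimate, and the formula is quoted here only as context, not as the authors' method.

```latex
% rho is density, alpha and beta the P- and S-wave speeds,
% and \ddot{M}_0(t) the second time derivative of the moment function;
% the scaled energy is the ratio of radiated energy to seismic moment.
E_R = \left( \frac{1}{15\pi\rho\alpha^{5}} + \frac{1}{10\pi\rho\beta^{5}} \right)
      \int \bigl|\ddot{M}_0(t)\bigr|^{2}\,dt ,
\qquad
\tilde{e} = \frac{E_R}{M_0}
```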

A. Donnellan (Jet Propulsion Laboratory, California Institute of Technology, USA)
Quantitative Determination of Crustal Deformation from Geodetic Imaging Observations

Crustal deformation is nonlinear in time, with spatial variability. Geodetic imaging data in California show deformation superimposed from several processes, including long-term tectonic motions, jumps in station position from earthquakes, and transient decaying deformation from postseismic relaxation and fault afterslip. Other processes, such as groundwater discharge and recharge in aquifers, add further complexity to the data. More than two decades of continuous GPS data provide the opportunity to explore the data for time-dependent patterns of deformation that could illuminate how fault segments in California respond to earthquakes, interact, and transfer stress. The data also provide the opportunity to search for aseismic fault processes and to remove non-tectonic signals. We are experimenting with clustering analysis of GPS velocities in California to understand the partitioning of crustal deformation across faults. By choosing different numbers of clusters for the data we can determine the relative activity between faults, which is important for resolving relative hazard. Large earthquakes and subsequent postseismic deformation provide large enough variations in rates of crustal deformation to also apply clustering methods to different time frames and search for transient changes in relative fault activity. UAVSAR, NASA's airborne synthetic aperture radar platform, can be used to fill in the spatial variation between the GPS stations, which are spaced 10 km or more apart.
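
A minimal sketch of one way such a clustering analysis might look, assuming k-means on horizontal velocity components with scikit-learn; the data, cluster count and library choice are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_velocities(east_mm_yr, north_mm_yr, n_clusters=6, seed=0):
    """Group stations by their horizontal velocity vectors; stations moving
    coherently fall into the same cluster, and cluster boundaries tend to
    follow the faults that accommodate the relative motion."""
    V = np.column_stack([east_mm_yr, north_mm_yr])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(V)
    return km.labels_, km.cluster_centers_

# toy usage with random placeholder velocities for 300 stations
rng = np.random.default_rng(0)
labels, centers = cluster_velocities(rng.normal(0, 20, 300), rng.normal(0, 20, 300))
# varying n_clusters, or repeating the clustering for different time windows,
# is one way to rank the relative activity of fault segments
```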

J. Weiss (University Grenoble Alps, France)
Faulting of quasi-brittle materials: New insights from lab experiments and progressive damage models

When subjected to compressive loading, quasi-brittle materials such as rocks, concrete, or ice fail through the development of faults inclined with respect to the principal compressive stress. Coulomb's theory of failure, a 250-year-old concept, remains to this day the classical framework for interpreting faulting, separating the material strength into two components, an intrinsic shear strength (cohesion) and a pressure-dependent frictional resistance. It says nothing, however, about the faulting process itself. Nevertheless, it has long been known, at least qualitatively, that in a previously un-faulted material, faulting involves the nucleation, interaction, propagation and coalescence of many microcracks. In other words, faulting is preceded by precursors, with obvious consequences in terms of failure forecasting.

New experiments performed on rocks and other quasi-brittle materials (e.g. concrete) have made it possible to track this faulting process, either from microseismic data (acoustic emission) or from X-ray microtomography. For low-porosity materials, these results show that several observables, such as the rate of microcracking events, the size of the largest crack, the global damage, or the seismic energy release rate, all increase non-linearly during the faulting process and diverge following specific scaling laws as macroscopic failure is approached. These scaling laws call for an interpretation of faulting as a critical phase transition between an “intact” and a “faulted” state, in agreement with theoretical developments and numerical progressive damage models. In highly porous materials, such as sandstone, the faulting process can exhibit different dynamics, owing to the interaction between dilatant damage (microcracking) and pore collapse.

The implications of these results in terms of earthquake and faulting physics and of failure forecasting (e.g. from the evolution of seismic wave velocities, or from foreshock activity) will also be discussed.
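
As a small illustration of how the reported divergence of precursory observables can be exploited for forecasting, the sketch below fits a power-law time-to-failure acceleration, rate(t) = A (t_f - t)^(-p), to a synthetic event-rate series; the data, parameters and bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def ttf_rate(t, A, t_f, p):
    """Power-law acceleration toward failure: rate(t) = A * (t_f - t)**(-p)."""
    return A * np.power(t_f - t, -p)

# synthetic acoustic-emission rate accelerating toward failure at t_f = 100 (arbitrary units)
t = np.linspace(0.0, 95.0, 200)
rate = ttf_rate(t, A=5.0, t_f=100.0, p=0.8) * np.random.default_rng(0).lognormal(0, 0.1, t.size)

popt, _ = curve_fit(ttf_rate, t, rate, p0=[1.0, 110.0, 1.0],
                    bounds=([0.0, 96.0, 0.0], [np.inf, 200.0, 5.0]))
A_hat, tf_hat, p_hat = popt    # tf_hat is the failure time estimated from precursors
```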

H. Zhang (University of Science and Technology of China, China)
Structural control on earthquake behaviors revealed by high-resolution seismic imaging of fault zones

Faults can slip in different modes, including slow slip, non-volcanic tremor, steady creep, microseismicity and large, dangerous earthquakes, but our understanding of the underlying physical mechanisms is still very limited. To address this issue, we have developed a suite of advanced seismic imaging methods to better determine fault zone properties, including double-difference seismic tomography, joint inversion of seismic body-wave and surface-wave data, and high-resolution Vp/Vs imaging. We have studied several fault zones, including the San Andreas Fault around Parkfield, California, the Gofar transform fault on the East Pacific Rise, the Longmenshan Fault in Sichuan, China, and the North Anatolian Fault in Turkey. We have found that fault zone properties are closely related to the generation of large earthquakes on specific segments, the propagation of mainshock ruptures, and the spatial distribution of small earthquakes and tremor. The results suggest that structural variations along a fault zone exert control on different earthquake behaviors.
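
For reference, the double-difference datum that underlies such tomography (after Waldhauser and Ellsworth, and Zhang and Thurber) is recalled below as background; this is the standard definition rather than a detail specific to the talk.

```latex
% For an event pair (i, j) recorded at a common station k, the residual of the
% differential travel time is inverted for relative event locations and for the
% velocity structure near the source region:
dr_k^{ij} = \left( t_k^{i} - t_k^{j} \right)^{\mathrm{obs}}
          - \left( t_k^{i} - t_k^{j} \right)^{\mathrm{cal}}
```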


Session 7: Computational Seismology and Geodynamics

Y. Capdeville (CNRS, Universite de Nantes, France)
Homogenization, effective geological media and full waveform inversion

Finite-frequency elastic waves propagating in the Earth only see an effective version of the true Earth. This effective version can be computed thanks to the recently developed homogenization technique, which is valid for non-periodic, deterministic media with no scale separation, such as geological media. The technique is very useful in the forward-modelling context to optimize the numerical cost, but also for the inverse problem embedded in seismic imaging techniques. After presenting the method, we will discuss its implications for imaging results and their interpretation in a multi-scale context.
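
A one-dimensional flavour of what an effective medium means here, as a hedged toy example: for wavelengths much longer than the layering, propagation perpendicular to a finely layered stack sees the harmonic mean of the elastic moduli and the arithmetic mean of the density (a Backus-type average); the non-periodic 3D homogenization discussed in the talk is far more general.

```python
import numpy as np

def effective_vertical_velocity(thickness, rho, v):
    """Long-wavelength (Backus-type) effective medium for a stack of fine layers,
    for propagation perpendicular to the layering: harmonic average of the
    modulus rho*v**2 and arithmetic average of the density."""
    w = thickness / thickness.sum()
    M_eff = 1.0 / np.sum(w / (rho * v**2))    # harmonic mean of the moduli
    rho_eff = np.sum(w * rho)                 # arithmetic mean of the density
    return np.sqrt(M_eff / rho_eff)

# toy binary laminate: the effective velocity differs from both layer velocities
thickness = np.tile([10.0, 10.0], 50)
rho = np.tile([2400.0, 2700.0], 50)
v = np.tile([3000.0, 5000.0], 50)
print(effective_vertical_velocity(thickness, rho, v))
```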

M. Faccenda (University of Padova, Italy)
Subduction zones dynamics and structure from coupled geodynamic and seismological modelling

The present-day structure of subduction settings is mainly determined by means of seismological methods. The interpretation of seismological data (e.g., isotropic and anisotropic velocity anomalies) is, however, non-unique, as different processes occurring simultaneously at subduction zones can be invoked to explain the observations. A further complication arises when regional tomographic seismic models ignore seismic anisotropy, in which case apparent seismic anomalies due to non-uniform sampling of anisotropic areas will appear. In order to decrease the uncertainties related to the interpretation of seismological observations, geodynamic modelling can be exploited to reproduce the micro- and macro-scale dynamics and structure of subduction settings, yielding a valuable first-order approximation of the isotropic and anisotropic elastic properties of the rocks. The model output can subsequently be tested against observations by computing seismological synthetics (e.g., SKS splitting, travel-time tomography, receiver functions, azimuthal and radial anisotropy). When the misfit between the modelled and measured seismic parameters is low, the geodynamic model likely provides a good approximation of the recent dynamics and present-day structure of the subduction setting. Such a model can then be used to give a more robust and thermomechanically based interpretation of the observables and/or to further improve the seismological model by providing a priori information for subsequent inversions. The methodology is still in its infancy, but we envisage that future developments could substantially improve seismological models and, overall, our understanding of complex subduction settings.

P. Martyshko (Ural Branch of Russian Academy of Science, Russia)

T. Nissen-Meyer (University of Oxford, UK)
Dark Earth matters: Complexity-driven wave propagation, resolution limits and inferring from invisibility

Seismic waveforms recorded along the surface encode information about the heterogeneous multi-scale structure of Earth's interior. This structure-wave relation is given by the physics of wave propagation. Tomography strives to invert this relation, and comprehensive numerical forward solvers deliver synthetic waveforms for misfit criteria and gradients (sensitivity kernels) for this purpose. This is necessary and sufficient for full-waveform inversion, but 1) it is still prohibitively expensive even on large supercomputers, 2) it does not deliver uncertainty quantification, and 3) it disregards a wealth of complementary information that can possibly be obtained from the complexity of the full 4D wavefield. In this talk, we focus on the complexity of the seismic wavefield and its bearing on the structural inference problem. In particular, we examine the imprint of multi-scale 3D heterogeneities on observed surface waveforms and the corresponding resolution limits imposed by wave physics, by computing wavefields across various scattering regimes. In doing so, we introduce AxiSEM3D, a new wave propagation method that accurately solves wave propagation for visco-elastic, anisotropic 3D structures with boundary topography at high resolution and with order-of-magnitude speedups compared to conventional methods. The approach exploits intrinsic azimuthal wavefield smoothness by means of a coupled pseudo-spectral/spectral-element approach. AxiSEM3D links computational cost to structural complexity and is thus well suited for exploring the relation between waveforms and structures. "Wavefield learning", sensitivity kernels and differential wavefields will be used to examine wavefield complexity and to discriminate between scattering regimes that can reliably be seen by surface measurements. We will relate the lack of resolution, due to intrinsic wavefield invisibility in conjunction with existing seismic methods, to geophysically relevant but seismically undiscovered structures in the Earth's interior. We thereby attempt to offer complementary insight into why tomographic images sustain little consensus for smaller-scale heterogeneities despite compelling advances in data resolution and inverse methods. This should help in better understanding the relation of tomographic images to the actual Earth's interior, and should lead towards quantitative illumination criteria for selecting seismic data prior to inversion.