
Official Journal of the Asia Oceania Geosciences Society (AOGS)

  • Research Letter
  • Open access

Geophysical model generation with generative adversarial networks


With the rapid development of deep learning technologies, data-driven methods have become one of the main research focuses in geophysical inversion. Applications of various neural network architectures to the inversion of seismic, electromagnetic, gravity and other types of data confirm the potential of these methods for real-time parameter estimation without dependence on a starting subsurface model. At the same time, deep learning methods require large training datasets, which are often difficult to acquire. In this paper, we present a generator of 2D subsurface models based on deep generative adversarial networks. Several networks are trained separately on realistic density and stratigraphy models to reach a sufficient degree of accuracy in generating new, highly detailed and varied models in real time. This allows large synthetic training datasets to be created in a cost-effective manner, thus facilitating the development of better deep learning algorithms for real-time inversion and interpretation.


Over the past few years, methods based on deep neural networks (DNN) have received significant attention in the geoscientific community. They have been widely applied to various problems such as seismic data processing and interpretation (Zhu et al. 2019), earthquake and tsunami prediction (Fauzi and Mizutani 2020; Mulia et al. 2020) and many others. Of particular interest is the application of modern deep learning (DL) methods to inverse problems in geophysics, i.e. the estimation of subsurface parameters from measurements by minimizing a misfit between observed and simulated data. DL inversion methods have been extensively developed in recent years for the inversion of seismic (Araya-Polo et al. 2018; Yang and Ma 2019; Wu and Lin 2019; Li et al. 2020), electromagnetic (Puzyrev 2019; Oh and Byun 2021) and gravity (Yang et al. 2021) data. In contrast to conventional gradient-based methods, which are commonly applied in full waveform inversion (FWI) but are highly sensitive to the starting model, DL inversion does not require specifying a particular model in advance. Instead, it allows for direct estimation of subsurface properties from observed data by exploiting different layers of abstraction to detect low-level and high-level features in the data and "learning" the nonlinear dependencies that link the data and the underlying physical model. The majority of modern DL inversion methods employ convolutional neural networks (CNN), which are very efficient at handling spatial data. Another advantage of the approach is that its most computationally expensive parts, data preparation and network training, can be performed offline. Once the network is properly trained (i.e. it reaches sufficiently low errors during validation), estimation of the unknown parameters from new data can be done online and takes between a fraction of a second and a few seconds depending on the model complexity. This allows for inversion in real time.

Despite these first promising results, DL inversion suffers from the same drawback as other DL methods, namely the need for a large number of labelled data samples, which in turn requires significant manual labour and human expert involvement. Training a modern DNN-based inversion network typically requires hundreds of thousands of complex velocity models (Ren et al. 2021). The quality of the training data is extremely important, since neural networks can effectively learn to recognize real geological structures in field data only if trained on similar (real or high-quality synthetic) examples (Puzyrev and Swidinsky 2021). This motivates the development of tools for the automatic generation of diverse subsurface models, which can later be used as training data in DL inversion.

Existing approaches to velocity model generation are largely based on the generation of models with common geological structures, such as folded layers, faults and salt bodies (Wu et al. 2020; Ren et al. 2021; de la Varga et al. 2019; Ao et al. 2020). These methods typically start with a simple initial model and sequentially add geological structures using randomly chosen parameters (Liu et al. 2021). Machine learning methods can address the task of model generation as well. For example, Ovcharenko et al. (2019) used a CNN-based style transfer approach to produce realistically textured subsurface models from synthetic prior models. Another alternative, widely exploited in other fields but so far applied in the earth sciences only to geological facies modelling (Zhang et al. 2019; Song et al. 2021), is to use neural-network-based generative models.

The task of generating artificial data has become very common in the DL field in recent years, especially in image processing applications. Two classes of deep neural networks, namely Generative Adversarial Networks (GAN) and Variational Autoencoders (VAE), have achieved great success in generative modelling. GANs, originally proposed by Goodfellow et al. (2014), quickly became one of the most important developments in machine learning and computer vision of the 2010s. Training of a GAN is an adversarial process involving a pair of networks: a generative model G that captures the data distribution, and a discriminative model D that distinguishes between samples generated by G and those coming from the training data. The resolution and quality of images produced by GANs improved rapidly, from rather simple \(32^2\) and \(48^2\) pixel images in 2014 to realistic high-quality \(1024^2\) images in 2019. Significant progress has also been made in improving the variety and diversity of the generated samples. GAN-generated samples offer a novel method for data augmentation, which allows significant improvement in various applied tasks (Sandfort et al. 2019).
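The adversarial objective above can be made concrete with a short sketch. Assuming sigmoid outputs from the discriminator, the standard discriminator loss and the non-saturating generator loss of Goodfellow et al. (2014) look as follows; the function names are illustrative, not taken from any particular implementation:

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on sigmoid outputs, clipped for numerical stability
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    # D is trained to output 1 on real samples and 0 on generated ones
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake):
    # G is trained to make D output 1 on its samples (non-saturating loss)
    return bce(d_fake, np.ones_like(d_fake))
```

Minimizing these two losses in alternation drives the generated distribution towards the data distribution.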

In this paper, we present a new approach to geophysical model building based on GANs. Two state-of-the-art unconditional generative networks, namely StyleGAN2 (Karras et al. 2020) and its recent extension with adaptive discriminator augmentation, referred to below as StyleGAN2 ADA (Karras et al. 2020), are applied to the creation of 2D density and stratigraphic models. StyleGAN2, while achieving an unprecedented quality of the generated images, requires very large training datasets (of the order of \(10^5\)–\(10^6\) images). Using too little training data for GANs often results in discriminator overfitting, making its feedback to the generator meaningless and causing the training to diverge. The ADA mechanism significantly stabilizes the training of StyleGAN2 ADA when only limited data are available. The data and pre-trained networks are freely available on GitHub and can be used for 2D model generation by other researchers.

Stratigraphic forward modelling

We use the open-source modelling code Badlands (Salles et al. 2018) to build 3D synthetic stratigraphic architectures, from which rich datasets of 2D models can be extracted and used in network training. Badlands is able to simulate both landscape and stratigraphic evolution over space and time induced by sediment erosion, transport and deposition. Among the different capabilities available, for this paper we switch on fluvial incision and hillslope processes, which are described by geomorphic equations and explicitly solved using a finite volume discretization. In our experiments, we assume spatially and temporally uniform soil properties over the region and do not differentiate between regolith and bedrock. Under these assumptions, the continuity of mass is governed by long-term diffusive processes, the detachment-limited stream power law and tectonic forces (U):

$$\begin{aligned} \frac{\partial z}{\partial t} = U + \kappa \nabla ^2 z - \epsilon (PA)^m (\nabla z)^n, \end{aligned}$$

with z the elevation (m), \(\kappa\) the diffusion coefficient for soil creep (Chen et al. 2014) (we choose 0.8 and 1 \(\text {m}^2/\text {yr}\) for terrestrial and marine environments, respectively), m and n dimensionless empirical constants set to 0.5 and 1, respectively, \(\epsilon\) a dimensional coefficient of erodibility of the channel bed (\(2\times 10^{-6}\) \(\text {yr}^{-1}\)), and PA a proxy for water discharge that numerically integrates the total area (A) and precipitation (P) from upstream regions (Salles et al. 2018). Both \(\kappa\) and \(\epsilon\) depend on lithology, precipitation and channel hydraulics, and are scale dependent (Tucker and Hancock 2010).
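To illustrate the governing equation, a single explicit time step can be sketched on a 1-D elevation profile. This is an illustration only: Badlands solves the equation with a finite volume discretization on an irregular mesh, whereas here the upstream-area proxy and the periodic boundary handling are crude simplifications, and the stream-power term is applied with an erosional sign so that incision lowers elevation:

```python
import numpy as np

def evolve_elevation(z, dx, dt, U=0.0, kappa=0.8, eps=2e-6, P=1.0, m=0.5, n=1.0):
    """One explicit-Euler step of the continuity equation on a 1-D elevation
    profile z (m), with grid spacing dx (m) and time step dt (yr)."""
    # Hillslope diffusion (soil creep): kappa * d2z/dx2 (periodic boundaries)
    lap = (np.roll(z, -1) - 2.0 * z + np.roll(z, 1)) / dx**2
    # Detachment-limited stream power: eps * (P*A)^m * slope^n, applied as erosion
    slope = np.abs(np.gradient(z, dx))
    A = np.arange(1, z.size + 1) * dx      # crude upstream-area proxy (illustration only)
    incision = eps * (P * A)**m * slope**n
    return z + dt * (U + kappa * lap - incision)
```

A flat profile with no uplift is in equilibrium under this update, while a uniform uplift rate U raises it by U·dt per step.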

Fig. 1

A Landscape evolution model outputs showing simulated elevation changes over time induced by spatially variable tectonic regimes and climatic (sea-level and precipitation) forcings. B Bottom panel shows recorded stratigraphic architecture after 20 Myr extracted as a 3D volume for the shaded red box defined in A. The volume can be sliced in all directions, here the cross-section A-B is visualized and coloured based on individual layer thicknesses. Contour lines are drawn every 250 kyr

Figure 1 shows an example of the landscape and stratigraphy evolution simulations, run for 20 million years over a continental-scale triangular irregular network of \(822 \times 445\) km\(^2\) with a resolution of \(\sim 2\) km and outputs saved every 50,000 yr. The stratigraphic architecture records the surface evolution history, from sediment production in the continental domain to its transport and deposition in either the terrestrial or marine realm. At each internal time step, the stratigraphic mesh records, for every node, the elevation at the time of deposition and the thickness of the active layers (which can be null in case of erosion or sedimentary hiatus), as well as potential thickness changes in underlying sedimentary layers (due to erosion).

Training data

The 2D models used as training data for our GANs are extracted from five 3D stratigraphic models generated using Badlands. Each density model is 8–10 km in length and 2–2.5 km in depth. Stratigraphic models have lateral dimensions varying between 8 and 16 km and depths between 2 and 4 km (the aspect ratio is 4:1 for all 2D models). To generate density models, we first assign lithological types representing different shale–sand proportions (Bouziat et al. 2019) to the Badlands-produced layers. The resulting density for each cell is calculated as:

$$\begin{aligned} \rho = V_{shale} \cdot \rho _{shale} + (1-V_{shale}) \cdot \rho _{sand}. \end{aligned}$$

Here, \(\rho _{shale}\) and \(\rho _{sand}\) are porosity-dependent combinations of matrix and pore fluid densities. The matrix is either shale with a density of 2.8 \(g/cm^3\) or quartzite with a density of 2.65 \(g/cm^3\). The pore fluid is saltwater with a density of 1.146 \(g/cm^3\). The sand porosity varies between 40% at the surface and 22% at a depth of 2 km, while the shale porosity varies between 70% and 35%.
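Using the matrix, fluid and porosity values quoted above, the per-cell density calculation can be sketched as follows; the linear porosity-depth trend between the quoted end members is our assumption for illustration:

```python
import numpy as np

def bulk_density(v_shale, depth_km):
    """Cell density (g/cm^3) from shale volume fraction and depth.
    Matrix and fluid densities and the porosity end members follow the text;
    the linear porosity-depth trend between them is an assumption."""
    frac = np.clip(np.asarray(depth_km, dtype=float) / 2.0, 0.0, 1.0)
    phi_sand = 0.40 + (0.22 - 0.40) * frac       # 40% at surface -> 22% at 2 km
    phi_shale = 0.70 + (0.35 - 0.70) * frac      # 70% at surface -> 35% at 2 km
    rho_fluid = 1.146                            # saltwater
    rho_sand = (1.0 - phi_sand) * 2.65 + phi_sand * rho_fluid     # quartzite matrix
    rho_shale = (1.0 - phi_shale) * 2.80 + phi_shale * rho_fluid  # shale matrix
    return v_shale * rho_shale + (1.0 - v_shale) * rho_sand
```

For example, a pure-sand cell at the surface evaluates to \(0.6 \cdot 2.65 + 0.4 \cdot 1.146 \approx 2.05\) \(g/cm^3\).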

Fig. 2

Sample models from the training datasets. A Density models. B Stratigraphic models

Figure 2 shows several representative density and stratigraphy models from the training dataset. We can observe several realistic features, including faults and prograding sedimentary packages. Density increases with depth; however, the rate of this increase varies considerably. The density and stratigraphic training sets each consist of 5,000 different 2D models, which are passed to the GANs as single-channel grayscale images.
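The conversion of a 2D property model to a single-channel grayscale image can be sketched as below; the linear min-max scaling to 8-bit values is an assumption about the preprocessing, not a statement of the exact pipeline used:

```python
import numpy as np

def model_to_grayscale(model, vmin=None, vmax=None):
    """Map a 2D property model (e.g. density in g/cm^3) to a single-channel
    uint8 grayscale image; the linear min-max scaling is an assumption."""
    model = np.asarray(model, dtype=float)
    vmin = model.min() if vmin is None else vmin
    vmax = model.max() if vmax is None else vmax
    scaled = (model - vmin) / (vmax - vmin)
    return np.round(255.0 * np.clip(scaled, 0.0, 1.0)).astype(np.uint8)
```

Fixing `vmin`/`vmax` across the whole dataset keeps the grayscale values physically comparable between models.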

GAN setup

The training of a GAN involves training the generator and discriminator models simultaneously and in competition with each other (Goodfellow et al. 2014). The generator gradually learns to generate more realistic-looking samples that could deceive the discriminator, while the latter learns to distinguish these better generated samples from the real ones. For the StyleGAN2 architecture, we use the predefined config-e configuration, which offers a compromise between generator quality and the computational effort needed for training (Karras et al. 2020). The generator and discriminator networks have 24.85 and 24.03 million trainable parameters, respectively. The first layer of the discriminator has the shape \(1 \times 512 \times 512\), which corresponds to a single-channel \(512^2\) image; 32 filters are used in the first convolutional layer. The last convolutional layer of the discriminator has dimensions of \(512 \times 4 \times 4\). Table 1 reports the main training statistics and compares performance on NVIDIA GTX 1080 Ti and V100 GPUs. The length of the training process is measured in "kimg", i.e. thousands of real images shown to the network.

Table 1 Training statistics and final FID scores of our GAN models. The term “kimg” refers to thousands of real images shown to the network and thus defines the length of the training process

As the metric to assess the quality of the generated images, we use the Fréchet inception distance (FID), which compares the distribution of generated images with the distribution of real images using the features from the last 2048-dimensional pooling layer (pool 3) of a pretrained Inception-v3 convolutional neural network (Heusel et al. 2017). A lower FID is better, since it means that the real and generated samples are similar in terms of the distance between their activation distributions. From Table 1, we observe that StyleGAN2 ADA requires significantly fewer training iterations (although ADA iterations take on average twice as long as the original StyleGAN2 iterations) and thus less GPU time, while delivering similar FID scores.
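For reference, once the two feature sets are fitted with Gaussians, the FID reduces to a closed-form expression that can be sketched as follows. In practice the features are Inception-v3 pool3 activations; here any feature matrix works, and the function name is illustrative:

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """Fréchet distance between Gaussian fits to two feature matrices
    (rows = samples, columns = feature dimensions):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 S2)^(1/2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real   # discard tiny imaginary parts from sqrtm
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))
```

Identical feature sets give an FID of (numerically) zero, and shifting one set by a constant increases the FID by the squared shift summed over dimensions.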

Numerical examples

Fig. 3

Density models generated by the StyleGAN2 network. A \(\psi = 0.25\). B \(\psi = 0.5\). C \(\psi = 0.75\)

Figure 3 shows several density models generated by the StyleGAN2 network. Here, we compare models generated using three different values of the \(\psi\) parameter (Karras et al. 2020), whose value determines the deviation of the generated images from the average. This truncation parameter is commonly used to trade off between the quality and variability of the output: \(\psi\) equal to one is equivalent to no truncation, while \(\psi\) values close to zero improve quality at the cost of reduced variety. In this example, we observe a higher degree of similarity among the models shown in Fig. 3a. Larger values of the truncation parameter \(\psi\) clearly lead to higher variability in the generated samples (Fig. 3b, c).
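The truncation trick itself is a one-line operation in StyleGAN's intermediate latent space: each sampled latent is interpolated towards the average latent. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def truncate_latent(w, w_avg, psi):
    """StyleGAN truncation trick: pull a sampled intermediate latent w towards
    the average latent w_avg. psi = 1 means no truncation; psi -> 0 collapses
    every sample onto the average (highest quality, lowest variety)."""
    return np.asarray(w_avg) + psi * (np.asarray(w) - np.asarray(w_avg))
```

This makes the quality-variability trade-off explicit: the generator only ever sees latents within a radius controlled by \(\psi\) around the well-sampled centre of the latent distribution.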

Fig. 4

Stratigraphic models generated by the StyleGAN2 network. A \(\psi = 0.25\). B \(\psi = 0.5\). C \(\psi = 0.75\)

In Fig. 4, we show examples of stratigraphic models generated, independently of the density models, by the StyleGAN2 network. These models closely resemble the training set and include from 5–6 to 30 individual layers with highly varying thicknesses and different degrees of dip. The effect of the truncation parameter \(\psi\) is similar to the previous case. The network successfully generates stratigraphic features such as downlap, toplap, progradation and clinoform geometries, as well as structural features such as folds. Qualitative evaluation of the generated stratigraphic models confirms their visual consistency with the training data.

Fig. 5

Density models generated by the StyleGAN2 ADA network. A \(\psi = 0.25\). B \(\psi = 0.5\). C \(\psi = 0.75\)

Finally, density models generated by StyleGAN2 ADA are shown in Fig. 5. For this network, we use the same training data and values of the \(\psi\) parameter as for the classical StyleGAN2. The quality of the generated samples is similar between the two cases, and so is the effect of \(\psi\). Sample variability at \(\psi =0.75\) (Fig. 5c) is higher than in the case shown in Fig. 3c. The training time required to reach similar FID scores is significantly smaller for ADA (Table 1).

Discussion and conclusions

Methods based on deep learning (DL) have recently captured the attention of the geophysical community and become one of the main focuses of research in geophysical modelling and inversion. Their main advantages include independence from the starting subsurface model and real-time estimation of model parameters from new data using a pretrained network. On the other hand, all DL-based methods for inversion and interpretation of geophysical data require large training datasets, which often limits their practical application. In this paper, we present a generator of 2D subsurface models based on StyleGAN2 and apply it to the generation of synthetic density and stratigraphy models. As a training set, we use a representative set of subsurface models generated using the Badlands modelling code. Once our GANs are trained and reach a sufficient degree of accuracy, they can be used to generate, in real time, detailed and varied artificial geological models with features similar to those of the models used in training. This allows multiple synthetic density and stratigraphy models to be created in a cost-effective manner. A similar approach can be used to create subsurface models with other physical properties, such as velocity models. The proposed method can serve as a useful augmentation tool for training sets in DL inversion, thus facilitating the development of more advanced tools for real-time estimation of subsurface parameters from collected data. The pretrained networks and sample sets of 2D subsurface models used in this paper are available online at

Finally, we note that the GAN framework has been extended to the conditional setting (Mirza and Osindero 2014). A conditional GAN has both the generator and the discriminator conditioned on additional latent variables, e.g., a class label, which allows the generation of samples belonging to a specific class. This opens possibilities for the generation of models with predefined characteristics, which might find further applications in interpretation problems.

Availability of data and materials

The datasets generated and analysed during the current study are available online at


  • Ao Y, Lu W, Jiang B, Monkam P (2020) Seismic structural curvature volume extraction with convolutional neural networks. IEEE Trans Geosci Remote Sens 59(9):7370–7384


  • Araya-Polo M, Jennings J, Adler A, Dahlke T (2018) Deep-learning tomography. Lead Edge 37(1):58–66


  • Bouziat A, Guy N, Frey J, Colombo D, Colin P, Cacas-Stentz M-C, Cornu T (2019) An assessment of stress states in passive margin sediments: iterative hydro-mechanical simulations on basin models and implications for rock failure predictions. Geosciences 9(11):469


  • Chen A, Darbon J, Morel J-M (2014) Landscape evolution models: a review of their fundamental equations. Geomorphology 219:68–86.


  • de la Varga M, Schaaf A, Wellmann F (2019) GemPy 1.0: open-source stochastic geological modeling and inversion. Geosci Model Dev 12(1):1–32


  • Fauzi A, Mizutani N (2020) Potential of deep predictive coding networks for spatiotemporal tsunami wavefield prediction. Geosci Lett 7(1):1–13


  • Fauzi A, Mizutani N (2020) Machine learning algorithms for real-time tsunami inundation forecasting: a case study in Nankai region. Pure Appl Geophys 177(3):1437–1450


  • Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial networks. arXiv preprint arXiv:1406.2661

  • Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. arXiv preprint arXiv:1706.08500

  • Karras T, Aittala M, Hellsten J, Laine S, Lehtinen J, Aila T (2020) Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676

  • Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J, Aila T (2020) Analyzing and improving the image quality of StyleGAN. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 8110–8119

  • Li S, Liu B, Ren Y, Chen Y, Yang S, Wang Y, Jiang P (2020) Deep-learning inversion of seismic data. IEEE Trans Geosci Remote Sens 58(3):2135–2149.


  • Liu B, Yang S, Ren Y, Xu X, Jiang P, Chen Y (2021) Deep-learning seismic full-waveform inversion for realistic structural models. Geophysics 86(1):31–44


  • Mirza M, Osindero S (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784

  • Mulia IE, Gusman AR, Satake K (2020) Applying a deep learning algorithm to tsunami inundation database of megathrust earthquakes. J Geophys Res Solid Earth 125(9):2020–019690


  • Oh S, Byun J (2021) Bayesian uncertainty estimation for deep learning inversion of electromagnetic data. IEEE Geosc Remote Sens Lett 19:1–5


  • Ovcharenko O, Kazei V, Peter D, Alkhalifah T (2019) Style transfer for generation of realistically textured subsurface models. In: SEG technical program expanded abstracts 2019, pp 2393–2397

  • Puzyrev V (2019) Deep learning electromagnetic inversion with convolutional neural networks. Geophys J Int 218(2):817–832


  • Puzyrev V, Swidinsky A (2021) Inversion of 1D frequency-and time-domain electromagnetic data with convolutional neural networks. Comput Geosci 149:104681


  • Ren Y, Nie L, Yang S, Jiang P, Chen Y (2021) Building complex seismic velocity models for deep learning inversion. IEEE Access 9:63767–63778


  • Salles T, Ding X, Webster JM, Vila-Concejo A, Brocard G, Pall J (2018) A unified framework for modelling sediment fate from source to sink and its interactions with reef systems over geological times. Sci Rep 8:5252.


  • Sandfort V, Yan K, Pickhardt PJ, Summers RM (2019) Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks. Sci Rep 9(1):1–9


  • Song S, Mukerji T, Hou J (2021) Geological facies modeling based on progressive growing of generative adversarial networks (GANs). Comput Geosci 25(3):1251–1273


  • Tucker GE, Hancock GR (2010) Modelling landscape evolution. Earth Surf Proc Land 35(1):28–50.


  • Wu Y, Lin Y (2019) InversionNet: an efficient and accurate data-driven full waveform inversion. IEEE Trans Comput Imaging 6:419–433


  • Wu X, Geng Z, Shi Y, Pham N, Fomel S, Caumon G (2020) Building realistic structure models to train convolutional neural networks for seismic structural interpretation. Geophysics 85(4):27–39


  • Yang F, Ma J (2019) Deep-learning inversion: a next-generation seismic velocity model building method. Geophysics 84(4):583–599


  • Yang Q, Hu X, Liu S, Jie Q, Wang H, Chen Q (2021) 3-D gravity inversion based on deep convolution neural networks. IEEE Geosci Remote Sens Lett 19:1–5


  • Zhang T, Tilke P, Dupont E, Zhu L, Liang L, Bailey W (2019) Generating geologically realistic 3D reservoir facies models using deep learning of sedimentary architecture with generative adversarial networks. In: International petroleum technology conference. OnePetro

  • Zhu W, Mousavi SM, Beroza GC (2019) Seismic signal denoising and decomposition using deep neural networks. IEEE Trans Geosci Remote Sens 57(11):9476–9488




Acknowledgements

This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia. VP and CE acknowledge support from the Curtin University Oil and Gas Innovation Centre (CUOGIC) and the Institute for Geoscience Research (TIGeR).

Author information


VP and CE conceived and designed the methodology. TS carried out the Badlands simulations and generated the data. VP and GS developed the computational framework and performed numerical simulations. All authors contributed to the interpretation of the results. VP took the lead in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Vladimir Puzyrev.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Puzyrev, V., Salles, T., Surma, G. et al. Geophysical model generation with generative adversarial networks. Geosci. Lett. 9, 32 (2022).

