Learning UQLab by example

 

We made a significant effort to provide UQLab users with a number of examples that can be used to gradually learn all the features of the software.

 

Each example is especially prepared to provide the user with enough experience and background to develop their own applications in no time. The examples are divided into different categories, as explained below.

Contents

Bayesian inversion
UQLink
Support vector machines
Probabilistic input
Basic modelling
Polynomial chaos expansions
Kriging
Polynomial Chaos-Kriging
Low-rank approximations
Sensitivity analysis
Reliability analysis

Bayesian inversion (NEW!)

 

The Bayesian inversion module offers tools for the Bayesian calibration of model parameters from measurement data (inverse problems), using Markov chain Monte Carlo (MCMC) samplers to explore the posterior distribution.

Description of the 1st Bayesian example.

Description of the 2nd Bayesian example.

Description of the 3rd Bayesian example.

 

UQLink

UQLab allows one to easily define computational models involving third-party software through a built-in universal "code wrapper".

After the wrapper has been configured through input/output text files and ad-hoc text marking and parsing, the corresponding computational model can be seamlessly integrated with any of the techniques available in the other UQLab modules to create complex analyses in no time.
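
To give a flavour of how such a wrapper is set up, here is a minimal sketch. The option names (Command, Template, Output.FileName, Output.Parser) reflect our reading of the UQLink user manual, and the executable, file names and parser function are placeholders; check them against the manual shipped with your UQLab version.

% Minimal UQLink wrapper sketch (assumption: option names as documented in the
% UQLink manual; 'myBeamSolver', the file names and the parser are placeholders).
uqlab                                              % start a UQLab session

ModelOpts.Type            = 'UQLink';
ModelOpts.Name            = 'beamWrapper';
ModelOpts.Command         = 'myBeamSolver beam_input.inp';  % call to the third-party executable
ModelOpts.Template        = 'beam_input.inp.tpl';  % templated input file with markers for the inputs
ModelOpts.Output.FileName = 'beam_output.out';     % file written by the solver
ModelOpts.Output.Parser   = 'uq_readBeamOutput';   % user-supplied m-function that extracts the response

myWrappedModel = uq_createModel(ModelOpts);
% Once created, the wrapped model behaves like any other UQLab MODEL object,
% e.g. Y = uq_evalModel(myWrappedModel, X) for a sample X of the input variables.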

Link UQLab to a C code implementing a simply supported beam, then carry out a reliability analysis using AK-MCS.

Link UQLab to an Abaqus model of a ten-bar truss, then carry out sensitivity and reliability analyses.

Link UQLab to OpenSees for a pushover analysis of a two-story, one-bay structure.

 

Support vector machines

Support vector machines (SVM) belong to a class of machine learning techniques developed since the mid-1990s. In the context of uncertainty quantification, support vector machines for classification (SVC) can be used to build a classifier given an experimental design. They can be used for reliability analysis (a.k.a. rare event estimation). Support vector machines for regression (SVR) can be used as a metamodelling tool to approximate a black-box or expensive-to-evaluate computational model.

 

Classification (SVC)

UQLab offers a straightforward parametrization of an SVC model to be fitted on the data set at hand (e.g., a design of computer experiments): linear and quadratic penalization schemes, separable and elliptic kernels based on classical SVM kernels (linear, polynomial, sigmoid) and others (Gaussian, exponential, Matérn, user-defined). Different techniques, including the span leave-one-out and the cross-validation error estimation methods, as well as various optimization algorithms, are available to estimate the hyperparameters.
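
As a rough illustration, the sketch below fits an SVC classifier directly to a small synthetic labelled data set. The field names (Kernel.Family, EstimMethod, Optim.Method, ExpDesign.X/Y) follow our understanding of the SVC user manual and should be verified against your UQLab release.

% Minimal SVC sketch on a synthetic labelled data set (assumption: option
% names as documented in the SVC user manual).
uqlab
rng(100)                                          % reproducible synthetic data
Xtrain = randn(100, 2);                           % two-dimensional inputs
Ytrain = sign(Xtrain(:,1) + Xtrain(:,2));         % class labels in {-1, +1}

MetaOpts.Type          = 'Metamodel';
MetaOpts.MetaType      = 'SVC';
MetaOpts.Kernel.Family = 'Gaussian';              % Gaussian kernel
MetaOpts.EstimMethod   = 'SpanLOO';               % span estimate of the leave-one-out error
MetaOpts.Optim.Method  = 'CMAES';                 % global optimizer for the hyperparameters
MetaOpts.ExpDesign.X   = Xtrain;                  % training inputs
MetaOpts.ExpDesign.Y   = Ytrain;                  % training labels

mySVC = uq_createModel(MetaOpts);
Ypred = uq_evalModel(mySVC, randn(10, 2));        % classify 10 new points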

Learn how to select different kernel families through an introductory SVC example based on Fisher's iris dataset.

Learn how to use the span estimate of the leave-one-out error or K-fold cross-validation as well as various optimization strategies.

Create an SVC model from existing data (Breast cancer dataset).

Regression (SVR)

 

UQLab offers a straightforward parametrization of an SVR model to be fitted on the data set at hand (e.g., a design of computer experiments): L1- and L2-penalization schemes, separable and elliptic kernels based on classical SVM kernels (linear, polynomial, sigmoid) and others (Gaussian, exponential, Matérn, user-defined). Different techniques, including the span leave-one-out and the cross-validation error estimation methods, as well as various optimization algorithms, are available to estimate the hyperparameters.
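
For orientation, a minimal SVR configuration might look as follows. The option names (Loss, Kernel.Family, EstimMethod, Optim.Method) follow our reading of the SVR user manual and the data set is synthetic, so treat the snippet as a sketch rather than a reference.

% Minimal SVR sketch on a synthetic 1D data set (assumption: option names as
% documented in the SVR user manual).
uqlab
Xtrain = linspace(-3, 3, 50)';                    % one-dimensional inputs
Ytrain = Xtrain.*sin(Xtrain) + 0.1*randn(50, 1);  % noisy observations

MetaOpts.Type          = 'Metamodel';
MetaOpts.MetaType      = 'SVR';
MetaOpts.Loss          = 'l1-eps';                % L1 (epsilon-insensitive) loss
MetaOpts.Kernel.Family = 'Gaussian';              % Gaussian kernel
MetaOpts.EstimMethod   = 'CV';                    % cross-validation error estimate
MetaOpts.Optim.Method  = 'CMAES';                 % global optimizer for the hyperparameters
MetaOpts.ExpDesign.X   = Xtrain;
MetaOpts.ExpDesign.Y   = Ytrain;

mySVR = uq_createModel(MetaOpts);
Ypred = uq_evalModel(mySVR, linspace(-3, 3, 200)');  % predictions on a fine grid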

Learn how to select different kernel families through an introductory SVR example.

Learn how to use the span estimate of the leave-one-out error or K-fold cross-validation as well as various optimization strategies.

Apply SVR to a computational model with multiple outputs.

 

Create an SVR model from existing data (Boston housing dataset).

 

Probabilistic input

Whether the uncertainty sources are simple independent uniform variables or complex combinations of marginal distributions coupled by a copula, UQLab provides simple methods to define, sample, and transform your input distributions.
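
As a minimal sketch of the workflow (the field names follow the standard UQLab INPUT conventions, e.g. Marginals(i).Type/.Parameters and Copula.Type), a two-dimensional input with a Gaussian copula can be defined and sampled as follows:

% Define a 2D probabilistic input with dependent marginals and sample it.
uqlab

InputOpts.Marginals(1).Type = 'Gaussian';
InputOpts.Marginals(1).Moments = [10 2];          % mean and standard deviation
InputOpts.Marginals(2).Type = 'Uniform';
InputOpts.Marginals(2).Parameters = [1 3];        % lower and upper bounds
InputOpts.Copula.Type = 'Gaussian';               % Gaussian copula...
InputOpts.Copula.Parameters = [1 0.7; 0.7 1];     % ...with linear correlation 0.7

myInput = uq_createInput(InputOpts);
X = uq_getSample(1000, 'LHS');                    % 1000 points by Latin hypercube sampling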

Learn how to specify a random vector and draw samples using various sampling strategies.

Learn how to specify marginal distributions for the elements of a random vector and a Gaussian copula.

 

Basic modelling

UQLab allows one to easily define new computational models based on existing m-code files, function handles, or simple text strings.
After a computational model has been configured, it can be seamlessly integrated with any other technique to create complex analyses in no time.
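
For instance, a simple analytical model can be declared from a text string in a few lines. This is only a minimal sketch; the mFile and mHandle alternatives shown in the comments follow the MODEL user manual.

% Define a computational model from a simple text string and evaluate it.
uqlab

ModelOpts.mString = 'X(:,1).*sin(X(:,2)) + 0.5*X(:,2)';  % model given as a string
ModelOpts.isVectorized = true;            % the expression accepts N-by-M sample matrices
myModel = uq_createModel(ModelOpts);

% Alternatives (one at a time):
% ModelOpts.mFile   = 'uq_myModel';          % existing m-file on the MATLAB path
% ModelOpts.mHandle = @(X) my_function(X);   % function handle

Y = uq_evalModel(myModel, [1 2; 3 4]);    % evaluate the model on a small sample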

Learn different ways to define a computational model. 

Learn how to pass parameters to a computational model.

Learn how to handle models that return vector outputs instead of scalars. 

 

Polynomial chaos expansions

Modern computational models are often expensive to evaluate. In the context of uncertainty quantification, where repeated runs are required, only a limited budget of simulations is usually affordable. We consider a "reasonable budget" to be on the order of 50-500 runs.

 

With this limited number of runs, a polynomial surrogate may be built that mimics the behavior of the true model at a close-to-zero computational cost. Polynomial chaos expansions are a particularly powerful technique that makes use of polynomials orthogonal with respect to the input probability distributions.
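
A minimal sparse-PCE workflow, using the Ishigami function as a stand-in computational model, might look like the sketch below (option names per the PCE user manual; the degree range and experimental design size are arbitrary illustration values):

% Sparse PCE sketch on the Ishigami test function.
uqlab

% Probabilistic input: three independent uniform variables on [-pi, pi]
for ii = 1:3
    InputOpts.Marginals(ii).Type = 'Uniform';
    InputOpts.Marginals(ii).Parameters = [-pi pi];
end
myInput = uq_createInput(InputOpts);

% Computational model given as a string
ModelOpts.mString = 'sin(X(:,1)) + 7*sin(X(:,2)).^2 + 0.1*X(:,3).^4.*sin(X(:,1))';
ModelOpts.isVectorized = true;
myModel = uq_createModel(ModelOpts);

% Sparse PCE by least-angle regression (LARS) on a 150-point experimental design
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCE';
MetaOpts.Method = 'LARS';
MetaOpts.Degree = 2:10;                   % adaptive selection of the maximum degree
MetaOpts.ExpDesign.NSamples = 150;        % budget of model evaluations
myPCE = uq_createModel(MetaOpts);

YPC = uq_evalModel(myPCE, uq_getSample(1e4));   % cheap surrogate predictions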

Learn how to deploy various strategies to compute PCE coefficients.

Apply sparse PCE to surrogate a model with multiple outputs.

Try out various methods to generate an experimental design (DOE) and compute PCE coefficients with least-squares minimization.

Build a sparse PCE surrogate model from existing data

Apply sparse PCE for the estimation of the mid-span deflection of a simply supported beam subject to a uniform random load.

Build a sparse PCE using polynomials orthogonal to arbitrary distributions

 

Kriging

Kriging (a.k.a. Gaussian process modelling) considers the computational model as a realization of a Gaussian process indexed by the parameters in the input space.

 

UQLab offers a straightforward parametrization of the Gaussian process to be fitted to the experimental design points: constant, linear, polynomial or arbitrary trends, separable and elliptic kernels based on different one-dimensional families (Gaussian, exponential, Matérn, user-defined).

 

The hyperparameters can be estimated using either the maximum-likelihood or the cross-validation method, with various optimization techniques (local and global).
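
A minimal Kriging sketch on a 1D toy function is shown below; the option names (Trend.Type, Corr.Family, EstimMethod, Optim.Method) follow the Kriging user manual as we understand it, and the specific choices are only illustrative.

% Kriging sketch on a 1D toy function.
uqlab

InputOpts.Marginals(1).Type = 'Uniform';
InputOpts.Marginals(1).Parameters = [0 10];
myInput = uq_createInput(InputOpts);

ModelOpts.mString = 'X.*sin(X)';
ModelOpts.isVectorized = true;
myModel = uq_createModel(ModelOpts);

MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'Kriging';
MetaOpts.Trend.Type = 'linear';           % linear trend
MetaOpts.Corr.Family = 'Matern-5_2';      % Matern 5/2 correlation family
MetaOpts.EstimMethod = 'CV';              % leave-one-out cross-validation ('ML' for maximum likelihood)
MetaOpts.Optim.Method = 'HGA';            % hybrid genetic algorithm (assumed option name)
MetaOpts.ExpDesign.NSamples = 20;         % 20-point experimental design
myKriging = uq_createModel(MetaOpts);

% Kriging returns both a mean prediction and a prediction variance
[Ymu, Yvar] = uq_evalModel(myKriging, linspace(0, 10, 100)');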

Learn how to select different correlation families through an introductory Kriging example.

Create a Kriging surrogate with multiple input variables

Learn how to use maximum likelihood or leave-one-out cross-validation as well as various optimization strategies.

Create a Kriging surrogate from existing data (truss structure dataset)

Apply Kriging to a computational model with multiple outputs.

Learn how to specify various trend types.

Create a Kriging surrogate from existing data (Boston housing dataset)

 

Polynomial Chaos-Kriging

Polynomial Chaos-Kriging (PC-Kriging) combines features from Polynomial Chaos expansions and Kriging in a single efficient surrogate.

A PC-Kriging model is effectively a universal Kriging model with a sophisticated trend based on orthogonal polynomials. By adopting advanced sparse polynomial chaos expansion techniques based on compressive sensing, PC-Kriging can effectively reproduce the global approximation behaviour typical of PCE, while at the same time retaining the interpolatory characteristics of Kriging.

The PC-Kriging implementation in UQLab leverages the PCE and Kriging modules to provide a fully configurable tool that is seamlessly integrated with the whole metamodelling environment offered by the platform.
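
Since PC-Kriging reuses the PCE and Kriging option structures, its configuration is equally compact. Here is a minimal sketch; the Mode option and its values 'sequential'/'optimal' are taken from the PC-Kriging user manual as we recall it, so double-check them.

% PC-Kriging sketch on a 1D toy function.
uqlab

InputOpts.Marginals(1).Type = 'Uniform';
InputOpts.Marginals(1).Parameters = [0 10];
myInput = uq_createInput(InputOpts);

ModelOpts.mString = 'X.*sin(X)';
ModelOpts.isVectorized = true;
myModel = uq_createModel(ModelOpts);

MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCK';
MetaOpts.Mode = 'sequential';             % sequential PC-Kriging ('optimal' is the alternative)
MetaOpts.ExpDesign.NSamples = 30;         % experimental design size (illustrative)
myPCK = uq_createModel(MetaOpts);

YPCK = uq_evalModel(myPCK, linspace(0, 10, 100)');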

Learn how to create a PC-Kriging surrogate of a simple 1D function

Learn how to create a PC-Kriging surrogate of a multi-dimensional function

Apply PC-Kriging to a computational model with multiple outputs.

Create a PC-Kriging surrogate from existing data

 

Low-rank approximations

Canonical low-rank polynomial approximations (LRA), also known as separated representations, have recently been introduced in the field of uncertainty quantification as a promising tool for effectively dealing with high-dimensional model inputs. The key idea is to approximate a response quantity of interest with a sum of a small number of appropriate rank-one tensors, which are products of univariate polynomial functions.

An important property of LRA is that the number of unknowns increases only linearly with the input dimension, hence making them a powerful tool to tackle high-dimensional problems.

The LRA module in UQLab capitalizes on the PCE module to create low-rank representations, hence offering similar flexibility in terms of supported polynomial types.
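
A minimal LRA sketch on a moderately high-dimensional toy function is given below (the Rank and Degree ranges follow the LRA user manual as we understand it; the test function and design size are arbitrary):

% Canonical low-rank approximation sketch in 20 dimensions.
uqlab

for ii = 1:20                              % 20 independent uniform inputs
    InputOpts.Marginals(ii).Type = 'Uniform';
    InputOpts.Marginals(ii).Parameters = [-1 1];
end
myInput = uq_createInput(InputOpts);

ModelOpts.mString = 'sum(X, 2).^2 + sum(X.^3, 2)';  % simple high-dimensional test function
ModelOpts.isVectorized = true;
myModel = uq_createModel(ModelOpts);

MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'LRA';
MetaOpts.Rank = 1:10;                      % candidate ranks (number of rank-one terms)
MetaOpts.Degree = 1:10;                    % candidate univariate polynomial degrees
MetaOpts.ExpDesign.NSamples = 200;         % budget of model evaluations
myLRA = uq_createModel(MetaOpts);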

Create a canonical low-rank tensor approximation of a simple engineering model

Create a canonical low-rank tensor approximation of a highly non-linear function

Create a canonical low-rank approximation of a difficult-to-surrogate function

Learn how to create a low-rank approximation of a model with multiple outputs

Learn how to create a low-rank approximation from existing data

 

Sensitivity analysis

Sensitivity analysis is a powerful tool to identify which variables contribute the most to the variability of the response of a computational model.

 

UQLab offers a wide selection of sensitivity analysis tools, ranging from linearization methods (e.g., perturbation analysis, standard regression coefficients) and screening methods (e.g., Morris' elementary effects and Cotter indices) to more advanced ANOVA (ANalysis Of VAriance) measures (e.g., Sobol' indices).

 

The various methods can take advantage of the latest sampling-based algorithms, as well as surrogate-model-specific techniques that extract the most accurate information from a relatively small sample size (e.g., polynomial chaos expansion- and low-rank-approximation-based Sobol' indices).
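
A minimal Sobol' analysis of the Ishigami function could look like the sketch below (the Sobol.SampleSize and Sobol.Order options follow the sensitivity user manual as we understand it):

% Sampling-based Sobol' indices for the Ishigami function.
uqlab

for ii = 1:3
    InputOpts.Marginals(ii).Type = 'Uniform';
    InputOpts.Marginals(ii).Parameters = [-pi pi];
end
myInput = uq_createInput(InputOpts);

ModelOpts.mString = 'sin(X(:,1)) + 7*sin(X(:,2)).^2 + 0.1*X(:,3).^4.*sin(X(:,1))';
ModelOpts.isVectorized = true;
myModel = uq_createModel(ModelOpts);

SobolOpts.Type = 'Sensitivity';
SobolOpts.Method = 'Sobol';
SobolOpts.Sobol.SampleSize = 1e4;          % Monte Carlo sample size per index
SobolOpts.Sobol.Order = 2;                 % compute indices up to the second order
mySobol = uq_createAnalysis(SobolOpts);

uq_print(mySobol)                          % tabulated first-order and total indices
uq_display(mySobol)                        % bar plots of the indices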

Apply various sensitivity analysis techniques to a benchmark problem (borehole function).

Discover the amazing efficiency of PCE-based Sobol' analysis on a 100-dimensional example.

Learn how to obtain the Sobol' indices using either the sampling-based or the PCE/LRA-based methods. 

Apply sensitivity analysis techniques to models with multiple outputs.

 

Reliability analysis

The reliability analysis (a.k.a. rare event estimation) module provides a set of tools to determine the probability that some criterion on the model response is met, e.g. the probability of exceedance of a prescribed admissible threshold (the failure probability).

 

The reliability analysis module in UQLab comprises a number of classical techniques (e.g. the FORM and SORM approximation methods, classical Monte Carlo sampling (MCS), importance sampling and subset simulation) as well as more recent, surrogate-modelling-based methods (e.g. adaptive Kriging Monte Carlo simulation, AK-MCS).

 

All the available algorithms can take advantage of the high interoperability with UQLab's other modules, hence making their deployment time-efficient.
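
As a minimal sketch, the classical R-S problem (resistance minus demand, with failure when the limit state g(X) <= 0, which is the UQLab convention) can be solved with crude Monte Carlo simulation as follows; the Simulation.* option names follow the reliability user manual as we understand it.

% Crude Monte Carlo reliability sketch for the R-S limit state g = R - S.
uqlab

InputOpts.Marginals(1).Type = 'Gaussian';  % resistance R
InputOpts.Marginals(1).Moments = [5 0.8];
InputOpts.Marginals(2).Type = 'Gaussian';  % demand S
InputOpts.Marginals(2).Moments = [2 0.6];
myInput = uq_createInput(InputOpts);

ModelOpts.mString = 'X(:,1) - X(:,2)';     % limit-state function, failure when g <= 0
ModelOpts.isVectorized = true;
myLimitState = uq_createModel(ModelOpts);

MCSOpts.Type = 'Reliability';
MCSOpts.Method = 'MCS';                    % crude Monte Carlo simulation
MCSOpts.Simulation.MaxSampleSize = 1e6;    % sample budget (assumed option name)
MCSOpts.Simulation.TargetCoV = 0.05;       % stop once the CoV of the estimate falls below 5%
myMCS = uq_createAnalysis(MCSOpts);

uq_print(myMCS)                            % failure probability and reliability index
% Swapping MCSOpts.Method for 'FORM', 'IS', 'Subset' or 'AKMCS' switches the algorithm.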

Importance Sampling
Subset Simulation

See how different simulation-based methods perform on a highly non-linear limit-state function.

Structural reliability: damped oscillator

Apply several different algorithms in a high-dimensional, strongly non-linear test case.

Reliability of a parallel system

Perform reliability analysis on a parallel system with the FORM method.

Time-variant reliability

Use UQLab to compute the outcrossing rate in a non-stationary time-variant reliability problem using the PHI2 method.