Learning UQLab by example
We made a significant effort to provide UQLab users with a number of examples that can be used to gradually learn all the features of the software.
Each example is specially prepared to provide users with enough experience and background to develop their own applications in no time. The examples are divided into different categories, as explained below.
High-performance computing (HPC) dispatcher
Distributed computing resources allow users to scale up, speed up, or offload UQLab computations that would otherwise run on their personal computers. However, the typical workflow for using such resources, from submitting a computation to retrieving its results, can be challenging.
Via the HPC dispatcher module, UQLab offers an interface between users' personal computers and common distributed computing resources (e.g., HPC clusters), so that parallel computation jobs can be seamlessly set up, submitted, and retrieved directly from within a UQLab session.
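As an illustration, a remote machine might be declared and used along the following lines. This is only a minimal sketch: the profile file name is a placeholder, the MODEL object myModel and the sample X are assumed to exist, and the exact dispatcher options and the 'HPC' evaluation flag should be checked against the HPC dispatcher user manual.

% Minimal sketch: create a dispatcher object from a profile file that stores
% the credentials and scheduler settings of the remote machine (assumed setup)
DispatcherOpts.Profile = 'myHPCProfile';      % placeholder profile file
myDispatcher = uq_createDispatcher(DispatcherOpts);

% Dispatch the evaluation of a previously defined MODEL object to the remote machine
Y = uq_evalModel(myModel, X, 'HPC');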
Reliability-based design optimization
Reliability-based design optimization (RBDO) is a powerful tool for the design of structures under uncertainty. UQLab offers an intuitive way to set up and solve RBDO problems using either state-of-the-art algorithms or custom solution schemes, which combine various reliability, optimization, and surrogate modeling techniques.
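As a rough sketch only, an RBDO analysis follows the same uq_createAnalysis pattern as the other UQLab modules. The field names and values below are recalled from the RBDO user manual and should be verified there; the cost function, design variable, and bounds are purely illustrative, and myModel is an assumed existing MODEL object.

% Heavily simplified RBDO sketch; field names are indicative, see the RBDO manual
RBDOOpts.Type = 'RBDO';
RBDOOpts.Method = 'SORA';                   % assumed: sequential optimization and reliability assessment
RBDOOpts.TargetBeta = 2;                    % target reliability index for the constraints
RBDOOpts.Cost.mString = 'X(:,1)';           % illustrative cost function of the design variable
RBDOOpts.LimitState.Model = myModel;        % constraint evaluated by an existing MODEL object
RBDOOpts.Input.DesVar(1).Name = 'd1';       % design variable with scatter around its nominal value
RBDOOpts.Input.DesVar(1).Type = 'Gaussian';
RBDOOpts.Input.DesVar(1).CoV = 0.05;
RBDOOpts.Optim.Bounds = [0.5; 5];           % lower/upper bounds of the design space
myRBDO = uq_createAnalysis(RBDOOpts);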
Bayesian inversion is a powerful tool for probabilistic model calibration and validation. UQLab offers users an intuitive way to set up Bayesian inverse problems with customizable likelihood functions and discrepancy model options, and to solve them with state-of-the-art Markov Chain Monte Carlo (MCMC) algorithms.
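A minimal sketch of such a problem definition is given below, assuming a prior INPUT object myPriorInput, a forward MODEL object myForwardModel, and a vector of measurements y already exist; the exact field names, in particular for the forward model and the discrepancy options, are documented in the Bayesian inversion user manual.

% Minimal Bayesian inversion sketch (discrepancy options left at their defaults)
BayesOpts.Type = 'Inversion';
BayesOpts.Prior = myPriorInput;              % prior distribution of the model parameters
BayesOpts.ForwardModel.Model = myForwardModel;
BayesOpts.Data.y = y;                        % measurements to calibrate against
BayesOpts.Solver.Type = 'MCMC';              % sample the posterior with MCMC
BayesOpts.Solver.MCMC.Sampler = 'AIES';      % e.g., affine-invariant ensemble sampler
myBayesian = uq_createAnalysis(BayesOpts);
uq_print(myBayesian)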
UQLab allows one to easily define computational models involving third-party software through a built-in universal "code wrapper".
After the wrapper has been configured through input/output text files and ad-hoc text marking and parsing, the corresponding computational model can be seamlessly integrated with any techniques available in other UQLab modules to create complex analyses in no time.
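A minimal sketch of such a wrapper definition (the UQLink module) is shown below; the solver command, template, output file, and parser names are placeholders for illustration only.

% Sketch of a third-party code wrapper; all file and function names are placeholders
ModelOpts.Type = 'UQLink';
ModelOpts.Name = 'myExternalSolver';
ModelOpts.Command = 'mySolver.exe myInput.inp';   % command line that runs the external code
ModelOpts.Template = 'myInput.inp.tpl';           % templated input file with markers for the inputs
ModelOpts.Output.FileName = 'myOutput.out';       % output file written by the external code
ModelOpts.Output.Parser = 'myOutputParser';       % user-provided MATLAB function that reads the output
myWrappedModel = uq_createModel(ModelOpts);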
Support vector machines
Support vector machines (SVM) belong to a class of machine learning techniques developed since the mid-'90s. In the context of uncertainty quantification, support vector machines for classification (SVC) can be used to build a classifier given an experimental design. They can be used for reliability analysis (a.k.a. rare events estimation). Support vector machines for regression (SVR) can be used as a metamodeling tool to approximate a black-box or expensive-to-evaluate computational model.
UQLab offers a straightforward parametrization of an SVC model to be fitted on the data set at hand (e.g., a design of computer experiments): linear and quadratic penalization schemes, separable and elliptic kernels based on classical SVM kernels (linear, polynomial, sigmoid) and others (Gaussian, exponential, Matérn, user-defined). Different techniques, including the span leave-one-out and the cross-validation error estimation methods, as well as various optimization algorithms, are available to estimate the hyperparameters.
UQLab offers a straightforward parametrization of an SVR model to be fitted on the data set at hand (e.g., an experimental design):
L1- and L2-penalization schemes, separable and elliptic kernels based on classical SVM kernels (linear, polynomial, sigmoid) and others (Gaussian, exponential, Matérn, user-defined). Different techniques, including the span leave-one-out and the cross-validation error estimation methods, as well as various optimization algorithms, are available to estimate the hyperparameters.
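The sketch below illustrates this parametrization for both SVR and SVC, assuming an existing experimental design X with continuous responses Y (regression) or class labels Ylabel (classification); the option values are illustrative and the full lists are given in the SVR and SVC user manuals.

% SVR metamodel on an existing experimental design (illustrative options)
SVROpts.Type = 'Metamodel';
SVROpts.MetaType = 'SVR';
SVROpts.ExpDesign.X = X;
SVROpts.ExpDesign.Y = Y;
SVROpts.Kernel.Family = 'Matern-5_2';    % kernel family
SVROpts.EstimMethod = 'SpanLOO';         % span leave-one-out error estimate
SVROpts.Optim.Method = 'CMAES';          % global optimizer for the hyperparameters
mySVR = uq_createModel(SVROpts);

% SVC classifier on labelled data (the penalization option name is assumed)
SVCOpts.Type = 'Metamodel';
SVCOpts.MetaType = 'SVC';
SVCOpts.ExpDesign.X = X;
SVCOpts.ExpDesign.Y = Ylabel;
SVCOpts.Penalization = 'linear';         % linear or quadratic penalization scheme
mySVC = uq_createModel(SVCOpts);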
Whether the uncertainty sources are simple independent uniform variables or complex combinations of fancy marginals glued together by a copula, UQLab provides simple methods to define, sample, or transform your input distributions. If the theoretical marginals and/or copulas are unknown and only data are available, UQLab also makes it possible to infer the unknown distributions from the data in a simple way.
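For instance, a two-dimensional input with a Gaussian copula can be defined and sampled in a few lines (a minimal sketch with illustrative distribution parameters):

% Probabilistic input definition: two marginals coupled by a Gaussian copula
uqlab                                             % start a UQLab session
InputOpts.Marginals(1).Type = 'Gaussian';
InputOpts.Marginals(1).Parameters = [0 1];        % mean and standard deviation
InputOpts.Marginals(2).Type = 'Uniform';
InputOpts.Marginals(2).Parameters = [1 3];        % lower and upper bounds
InputOpts.Copula.Type = 'Gaussian';
InputOpts.Copula.Parameters = [1 0.6; 0.6 1];     % linear correlation matrix
myInput = uq_createInput(InputOpts);
X = uq_getSample(myInput, 1000, 'LHS');           % 1,000 Latin hypercube samples
% Marginals and copulas can also be inferred from data (see the inference manual)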
UQLab allows one to easily define new computational models based on existing m-code files, function handles, or simple text strings.
After a computational model has been configured, it can be seamlessly integrated with any other technique to create complex analyses in no time.
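For example, a model can be created from a simple MATLAB expression or from a function handle, and then evaluated on a previously drawn input sample X; the expressions below are purely illustrative.

% Computational model from a text string (vectorized over the rows of X)
ModelOpts1.mString = 'X(:,1).*sin(X(:,2)) + 0.1*X(:,1).^2';
ModelOpts1.isVectorized = true;
myModel1 = uq_createModel(ModelOpts1);

% Computational model from a MATLAB function handle
ModelOpts2.mHandle = @(X) sum(X.^2, 2);
myModel2 = uq_createModel(ModelOpts2);

Y = uq_evalModel(myModel1, X);   % evaluate the model on an input sample X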
Polynomial chaos expansions
Modern computational models are often expensive to evaluate. In the context of uncertainty quantification, where repeated runs are required, only a limited budget of simulations is usually affordable. We consider a “reasonable budget” to be on the order of 50-500 runs.
With this limited number of runs, a polynomial surrogate can be built that mimics the behavior of the true model at a close-to-zero computational cost. Polynomial chaos expansions are a particularly powerful technique that relies on polynomials which are orthogonal with respect to the input probability distributions.
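A minimal sketch of such a surrogate, assuming an INPUT and a (costly) MODEL object have already been defined in the current UQLab session, reads:

% Sparse, degree-adaptive PCE built from a limited experimental design
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCE';
MetaOpts.Method = 'LARS';             % sparse regression by least-angle regression
MetaOpts.Degree = 2:10;               % adaptive selection of the polynomial degree
MetaOpts.ExpDesign.NSamples = 150;    % limited budget of true model runs
myPCE = uq_createModel(MetaOpts);
uq_print(myPCE)                       % report, including the leave-one-out error estimate
YPC = uq_evalModel(myPCE, X);         % near-instantaneous surrogate predictions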
Kriging (Gaussian process modeling) considers the computational model as a realization of a Gaussian process indexed by the parameters
in the input space.
UQLab offers a straightforward parametrization of the Gaussian process to be fitted to the experimental design points: constant, linear, polynomial, or arbitrary trends, with separable and elliptic kernels based on different one-dimensional families (Gaussian, exponential, Matérn, or user-defined).
The hyperparameters can be estimated with either the maximum likelihood or the cross-validation method, using various local and global optimization techniques. UQLab supports both interpolation and regression (Kriging with noisy model responses) modes.
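A minimal sketch of such a configuration, again assuming existing INPUT and MODEL objects in the current session, is:

% Kriging metamodel with a linear trend and a Matern 5/2 correlation family
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'Kriging';
MetaOpts.Trend.Type = 'linear';
MetaOpts.Corr.Family = 'Matern-5_2';
MetaOpts.EstimMethod = 'CV';          % cross-validation ('ML' for maximum likelihood)
MetaOpts.Optim.Method = 'HGA';        % hybrid genetic algorithm (global + local)
MetaOpts.ExpDesign.NSamples = 100;
myKriging = uq_createModel(MetaOpts);
[YMean, YVar] = uq_evalModel(myKriging, X);   % prediction mean and variance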
Polynomial Chaos-Kriging (PC-Kriging) combines features from polynomial chaos expansions and Kriging in a single efficient surrogate.
A PC-Kriging model is effectively a universal Kriging model with a sophisticated trend based on orthogonal polynomials. By adopting advanced sparse polynomial chaos expansion techniques based on compressive sensing, PC-Kriging can effectively reproduce the global approximation behavior typical of PCE, while at the same time retaining the interpolatory characteristics of Kriging.
The PC-Kriging implementation in UQLab builds on the PCE and Kriging modules to provide a fully configurable tool that is seamlessly integrated with the other metamodeling tools offered by the platform.
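A minimal PC-Kriging sketch, with the PCE trend and the Kriging correlation options set through their respective sub-structures, might look as follows (option values are illustrative):

% PC-Kriging: sparse PCE trend combined with a Gaussian-process correlation part
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCK';
MetaOpts.Mode = 'sequential';               % or 'optimal'
MetaOpts.PCE.Degree = 1:10;                 % candidate degrees for the polynomial trend
MetaOpts.Kriging.Corr.Family = 'Gaussian';  % correlation family of the Kriging part
MetaOpts.ExpDesign.NSamples = 100;
myPCK = uq_createModel(MetaOpts);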
Canonical low-rank polynomial approximations (LRA), also known as separated representations, have recently been introduced in the field of uncertainty quantification as a promising tool for effectively dealing with high-dimensional model inputs. The key idea is to approximate a response quantity of interest with a sum of a small number of appropriate rank-one tensors, which are products of univariate polynomial functions.
An important property of LRA is that the number of unknowns increases only linearly with the input dimension, making them a powerful tool to tackle high-dimensional problems.
The LRA module in UQLab capitalizes on the PCE module to create low-rank representations, hence offering similar flexibility in terms of supported polynomial types.
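A minimal LRA sketch, with adaptive selection of the rank and of the univariate polynomial degree over user-given ranges, reads (assuming existing INPUT and MODEL objects):

% Canonical low-rank approximation with adaptive rank and degree selection
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'LRA';
MetaOpts.Rank = 1:10;                  % candidate ranks
MetaOpts.Degree = 1:10;                % candidate univariate polynomial degrees
MetaOpts.ExpDesign.NSamples = 200;
myLRA = uq_createModel(MetaOpts);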
Sensitivity analysis is a powerful tool to identify which variables contribute the most to the variability of the response of a computational model.
UQLab offers a wide selection of sensitivity analysis tools, ranging from sample-based methods (e.g., input/output correlation), linearization methods (e.g., perturbation analysis), screening methods (e.g., Morris' elementary effects), and moment-independent methods (e.g., Borgonovo indices), to more advanced ANOVA (ANalysis Of VAriance) measures for independent (e.g., Sobol' indices) and dependent input parameters (e.g., ANCOVA indices).
The algorithms take advantage of the latest sampling-based techniques as well as metamodel-specific ones to extract the most accurate information from relatively small sample sizes (e.g., polynomial chaos expansion- and low-rank-approximation-based Sobol' indices).
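As an example, a Sobol' sensitivity analysis up to order 2 on the currently selected MODEL and INPUT objects can be set up as follows (a minimal sketch with an illustrative sample size):

% Sobol' indices up to order 2 (sampling-based; computed analytically from the
% coefficients if the current MODEL is a PCE or LRA metamodel)
SobolOpts.Type = 'Sensitivity';
SobolOpts.Method = 'Sobol';
SobolOpts.Sobol.Order = 2;             % first- and second-order indices
SobolOpts.Sobol.SampleSize = 1e4;      % Monte Carlo sample size per index
mySobol = uq_createAnalysis(SobolOpts);
uq_print(mySobol)
uq_display(mySobol)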
The reliability analysis (rare events estimation) module provides a set of tools to determine the probability that some criterion on the model response is satisfied, e.g., the probability of exceedance of some prescribed admissible threshold.
The reliability analysis module in UQLab comprises a number of classical techniques (e.g., FORM and SORM approximation methods, classical Monte Carlo simulation (MCS), Importance Sampling, and Subset Simulation) as well as more recent surrogate-modeling-based methods (e.g., Adaptive Kriging Monte Carlo Simulation, AK-MCS).
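For instance, a FORM analysis and a crude Monte Carlo estimation of the failure probability, using the default limit state g(X) <= 0 on the current MODEL and INPUT objects, can be set up as follows (a minimal sketch with an illustrative simulation budget):

% First-order reliability method (FORM)
FORMOpts.Type = 'Reliability';
FORMOpts.Method = 'FORM';
myFORM = uq_createAnalysis(FORMOpts);

% Crude Monte Carlo simulation with a maximum budget of 1e6 model evaluations
MCOpts.Type = 'Reliability';
MCOpts.Method = 'MCS';
MCOpts.Simulation.MaxSampleSize = 1e6;
myMCS = uq_createAnalysis(MCOpts);
uq_print(myMCS)                        % failure probability and reliability index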
The reliability analysis module in UQLab also offers a modular framework that allows users to easily build custom active learning solution schemes by combining a selection of surrogate models, reliability algorithms, learning functions, and stopping criteria.
All the available algorithms take advantage of the high interoperability with the other modules of UQLab, making their deployment fast and straightforward.
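As a rough sketch only, such a custom scheme can be assembled by picking a surrogate model, a reliability solver, and a learning function; the option names below are recalled from the active learning reliability documentation and should be verified there before use.

% Custom active learning reliability scheme (field names are indicative)
ALROpts.Type = 'Reliability';
ALROpts.Method = 'ALR';                 % active learning reliability framework
ALROpts.ALR.Metamodel = 'PCK';          % surrogate model: PC-Kriging
ALROpts.ALR.Reliability = 'Subset';     % reliability solver run on the surrogate
ALROpts.ALR.LearningFunction = 'U';     % enrichment criterion for new model evaluations
myALR = uq_createAnalysis(ALROpts);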