Component Reliability: Part 2#
This assignment builds on the first one on component reliability and is based on the assigned reading in Sections 6.6, 6.7 and 6.8. The reliability problem is not restated here, as it is identical to the one in Homework 3 (HW3); refer to that assignment if needed.
Task 1: Derive the analytic expressions for the sensitivity vectors introduced in the Chapter 6 reading of the ADK book: the sensitivity to the limit-state function parameters \(a\) and \(b\), as well as the sensitivity to the distribution parameters (there are four: two for each random variable).
Task 2: Use the results of the FORM analysis to calculate the sensitivity vectors.
Task 3: Interpret the results.
The solutions are in blocks like this.
First: import packages, define the limit-state function, marginal distributions and correlation structure.
import numpy as np
import scipy.stats as st
import openturns as ot
# Limit state function parameters:
a = 1
b = -2
# Distribution parameters X1
mu_1 = 10
sigma_1 = 2
X1 = st.norm(mu_1, sigma_1)
# Distribution parameters X2
mu_2 = 20
sigma_2 = 5
X2 = st.norm(mu_2, sigma_2)
rho = 0.5
def cond_param(X1, X2, x1):
    ''' Computes the distribution parameters of X2|X1=x1.
    Arguments:
    X1, X2: random variables (scipy.stats frozen distributions).
    x1: float, value of X1 conditioned upon.
    Note: uses the global correlation coefficient rho.
    '''
    sigma = X2.std()*np.sqrt(1-rho**2)
    mu = X2.mean() + rho*X2.std()*(x1-X1.mean())/X1.std()
    return mu, sigma
def myLSF(x):
    '''
    Limit state function, in the format expected by ot.PythonFunction.
    Arguments:
    x: sequence of length 2. First coordinate corresponds to the value of X1, second to the value of X2.
    Returns [g(x)] with g(x) = a*x1**2 + b*x2.
    '''
    return [a*x[0]**2 + b*x[1]]
def grad_myLSF(x):
    '''
    Computes the gradient of the limit state function.
    Arguments:
    x: array of length 2. First coordinate corresponds to the value of X1, second to the value of X2.
    Returns a (1,2) array: [dg/dx1, dg/dx2] = [2*a*x1, b].
    '''
    return np.array([[2*a*x[0], b]])
# Definition of random variables
x1 = ot.Normal(mu_1, sigma_1) # X1 already assigned to normal RV defined with scipy.stats
x2 = ot.Normal(mu_2, sigma_2) # Idem for X2
# Definition of the dependence structure: here, bivariate normal with correlation coefficient rho.
R = ot.CorrelationMatrix(2)
R[0,1] = rho
R[1,0] = rho
bivar_copula = ot.NormalCopula(R)
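As a quick, optional sanity check (not part of the assignment), we can verify that `cond_param` behaves as expected at \(x_1=\mu_1\) and compare `grad_myLSF` against a central finite-difference approximation:

# Optional sanity checks of the setup above.
# 1) At x1 = mu_1, the conditional parameters reduce to (mu_2, sigma_2*sqrt(1-rho^2)).
print(cond_param(X1, X2, mu_1))    # expect (20.0, ~4.33)
# 2) Central finite differences should reproduce grad_myLSF.
x0 = np.array([10.0, 20.0])
h = 1e-6
fd = [(myLSF(x0 + h*np.eye(2)[i])[0] - myLSF(x0 - h*np.eye(2)[i])[0])/(2*h)
      for i in range(2)]
print(fd, grad_myLSF(x0))          # expect [20, -2] from both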
The analysis is set up and completed using OpenTURNS.
inputDistribution = ot.ComposedDistribution((x1,x2), bivar_copula)
inputDistribution.setDescription(["x1", "x2"])
inputRandomVector = ot.RandomVector(inputDistribution)
myfunction = ot.PythonFunction(2, 1, myLSF)
# Vector obtained by applying limit state function to X1 and X2
outputvector = ot.CompositeRandomVector(myfunction, inputRandomVector)
# Define failure event: here when the limit state function takes negative values
failureevent = ot.ThresholdEvent(outputvector, ot.Less(), 0)
failureevent.setName('LSF inferior to 0')
optimAlgo = ot.Cobyla()
optimAlgo.setMaximumEvaluationNumber(1000)
optimAlgo.setMaximumAbsoluteError(1.0e-10)
optimAlgo.setMaximumRelativeError(1.0e-10)
optimAlgo.setMaximumResidualError(1.0e-10)
optimAlgo.setMaximumConstraintError(1.0e-10)
algo = ot.FORM(optimAlgo, failureevent, inputDistribution.getMean())
algo.run()
result = algo.getResult()
x_star = result.getPhysicalSpaceDesignPoint()
u_star = result.getStandardSpaceDesignPoint()
pf = result.getEventProbability()
print('FORM result, pf = {:.4f}'.format(pf))
print('FORM result, beta = {:.3f}\n'.format(result.getHasoferReliabilityIndex()))
print('Design point in the u space:',u_star)
print('Design point in the x space:',x_star)
FORM result, pf = 0.0175
FORM result, beta = 2.108
Design point in the u space: [-1.92808,0.852995]
Design point in the x space: [6.14384,18.8734]
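As a consistency check, FORM gives \(p_f = \Phi(-\beta)\), so the two printed results can be compared directly:

# Consistency check: in FORM, pf = Phi(-beta).
beta = result.getHasoferReliabilityIndex()
print(st.norm.cdf(-beta))   # ~0.0175, matching the pf printed above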
Sensitivity analysis#
To understand the sensitivity of the FORM result to variations in its parameters, read Section 6.6 of ADK. In particular, this part of the assignment determines the sensitivity with respect to the limit-state parameters \(\theta_g=(a,b)\), followed by the sensitivity to the distribution parameters \(\theta_f = (\mu_1, \sigma_1, \mu_2, \sigma_2)\).
Sensitivity to LSF parameters#
In this sub-section, we determine the sensitivity of the reliability index \(\beta\) with respect to the LSF parameters \(a\) and \(b\), i.e. \(\nabla_{\theta_g}\beta\). Refer to Section 6.6 for the theory.
We know that:

$$ \nabla_{\theta_g}\beta = \frac{1}{\|\nabla_u G(u^*,\theta_g)\|}\,\nabla_{\theta_g}\,g(x^*, \theta_g) $$

where \(\nabla_u G(u^*,\theta_g)=\nabla_x g(x^*, \theta_g)\,J_{x,u}\) can be retrieved from the FORM analysis in HW03, and \(\nabla_{\theta_g}g(x^*, \theta_g) = (\partial g/\partial a,\; \partial g/\partial b) = ({x_1^*}^2,\; x_2^*)\) follows directly from the definition of the LSF.
The solution is in this code cell:
def grad_theta(x, a, b):
    ''' Computes the gradient of g with respect to the LSF parameters (a, b):
    [dg/da, dg/db] = [x1**2, x2]. '''
    return np.array([x[0]**2, x[1]])
Before evaluating this function, a hint:
print('The type of x_star is:', type(x_star))
x_s = np.array([value for value in x_star])
print('Use list comprehension and np.array to create x_s:', x_s)
print('The type of x_s is:',type(x_s))
print('The type of an element of x_s (x_s[0]) is:',type(x_s[0]))
The type of x_star is: <class 'openturns.typ.Point'>
Use list comprehension and np.array to create x_s: [ 6.14383802 18.8733728 ]
The type of x_s is: <class 'numpy.ndarray'>
The type of an element of x_s (x_s[0]) is: <class 'numpy.float64'>
Now we can easily evaluate the gradient, using the Jacobian from HW3:
The solution is in this code cell:
Jxu = np.array([[2, 0],
[2.5, 4.33]]) # Obtained using results from HW03
grad_U = np.dot(grad_myLSF(x_s), Jxu)
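As a cross-check on the hard-coded values, the Jacobian from HW03 can be reproduced analytically: since \(x_1 = \mu_1 + \sigma_1 u_1\) and \(x_2 = \mu_2 + \rho\sigma_2 u_1 + \sigma_2\sqrt{1-\rho^2}\,u_2\), the Jacobian is constant.

# Analytic Jacobian of x = T^{-1}(u) for the bivariate normal case.
Jxu_check = np.array([[sigma_1, 0],
                      [rho*sigma_2, sigma_2*np.sqrt(1-rho**2)]])
print(Jxu_check)   # [[2, 0], [2.5, 4.3301...]] -- matches the HW03 values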
With these two pieces, we can compute \(\nabla_{\theta_g}\beta\):
The solution is in this code cell:
sensi_lsf = grad_theta(x_s, a, b)/np.linalg.norm(grad_U)
print(sensi_lsf)
[1.76342384 0.88171192]
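If you want to verify this result numerically, a finite-difference sketch is given below. It perturbs the global parameter `a`, re-runs FORM with the objects defined above, and compares the resulting \(\partial\beta/\partial a\) to `sensi_lsf[0]` (the helper `beta_form` is our own, not part of the assignment):

def beta_form():
    ''' Re-runs the FORM analysis above and returns beta (reads the globals a, b). '''
    f = ot.PythonFunction(2, 1, myLSF)
    out = ot.CompositeRandomVector(f, inputRandomVector)
    event = ot.ThresholdEvent(out, ot.Less(), 0)
    alg = ot.FORM(optimAlgo, event, inputDistribution.getMean())
    alg.run()
    return alg.getResult().getHasoferReliabilityIndex()

h = 1e-4
a_0 = a
a = a_0 + h; beta_plus = beta_form()
a = a_0 - h; beta_minus = beta_form()
a = a_0   # restore the original value
print((beta_plus - beta_minus)/(2*h))   # expect a value close to sensi_lsf[0] (~1.76)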
Sensitivity to distribution parameters#
Similarly to the previous sub-section, the theory is presented in Section 6.6 of ADK. The first step is to compute \(J_{T,\theta_f}(x^*, \theta_f)\), the Jacobian of the transformation with respect to the distribution parameters, as defined in ADK, equation (6.54). We write \(J_{T,\theta_f}=(a_{i,j})\) with \(i \in [\![0,1]\!]\) and \(j \in [\![0,3]\!]\) (rows for \(u_1, u_2\); columns for \(\mu_1, \sigma_1, \mu_2, \sigma_2\)).
We have seen in HW03 that \(X_1\) and \(X_2\) are both normal, and therefore \(X_2|X_1\) is normal as well. Therefore, the transformation to the standard-normal space can be written:

$$ u_1 = \frac{x_1-\mu_1}{\sigma_1}, \qquad u_2 = \frac{x_2-\mu}{\sigma} $$

with \( \mu=\mu_2 + \rho \sigma_2 \left(\frac{x_1 - \mu_1}{\sigma_1}\right) \) and \( \sigma = \sigma_2 \sqrt{1-\rho^2}\).
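Before differentiating, a quick check that this transformation is consistent with the FORM output: applying it to the design point \(x^*\) should recover \(u^*\).

# Map the design point x* to the standard-normal space; should recover u*.
mu_c, sigma_c = cond_param(X1, X2, x_star[0])
print((x_star[0] - mu_1)/sigma_1, (x_star[1] - mu_c)/sigma_c)
# expect approximately -1.92808 and 0.852995, i.e. u_star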
The solution is in this code cell:
Jt = np.zeros((2,4))   # Rows: u1, u2. Columns: mu_1, sigma_1, mu_2, sigma_2.
z1 = (x_s[0]-mu_1)/sigma_1 # Clearer for implementation in long expressions
mu, sigma = cond_param(X1, X2, x_s[0])
Jt[0,0] = -1/sigma_1   # du1/dmu_1
Jt[1,0] = rho*sigma_2/(sigma*sigma_1)   # du2/dmu_1
Jt[0,1] = -z1/sigma_1   # du1/dsigma_1
Jt[1,1] = (rho*sigma_2*(x_s[0]-mu_1))/(sigma*sigma_1**2)   # du2/dsigma_1
Jt[1,2] = -1/sigma   # du2/dmu_2 (du1/dmu_2 = 0)
Jt[1,3] = (-rho*z1*sigma_2*np.sqrt(1-rho**2) - (x_s[1]-mu)*np.sqrt(1-rho**2))/((1-rho**2)*sigma_2**2)   # du2/dsigma_2 (du1/dsigma_2 = 0)
Then, we can compute \( \nabla_{\theta_f}\beta = \hat{\alpha} J_{T,\theta_f}(x^*, \theta_f) \) using the \(\hat{\alpha}\) from HW3:
The solution is in this code cell:
alpha = np.array([-0.9145, 0.4549])
sensi_dist = np.matmul(alpha, Jt)
print(sensi_dist)
[ 0.58856832 -1.13480739 -0.10505465 0.02367149]
From the results printed above:
- The distribution parameters of X1 have the most impact; the standard deviation more than the mean.
- The term for the standard deviation of X1 is negative, which means that increasing the uncertainty in X1 reduces the reliability index: a logical result!
- The standard deviation of X2 has very little impact and, oddly, its sensitivity is positive: this would mean that increasing the uncertainty increases the reliability, which is counter-intuitive. However, since the magnitude is nearly 0, we can ignore this result.
- Similar observations could be made about many of the sensitivity factors. You will see examples of them as we go through other homework and workshop assignments in this unit. Remember, in the end they all pretty much boil down to partial derivatives (a "local" sensitivity measure), as the sketch below illustrates!
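Since these sensitivities are just partial derivatives, one of them can be confirmed by finite differences. The sketch below (our own helper, not part of the assignment) re-runs FORM with a perturbed \(\mu_1\) and approximates \(\partial\beta/\partial\mu_1\); expect a value close to `sensi_dist[0]`, with small differences attributable to the rounded \(\hat{\alpha}\) and Jacobian taken from HW03.

def beta_form_mu1(m1):
    ''' Re-runs the FORM analysis with X1 ~ N(m1, sigma_1); returns beta. '''
    z1_rv = ot.Normal(m1, sigma_1)
    dist = ot.ComposedDistribution((z1_rv, x2), bivar_copula)
    f = ot.PythonFunction(2, 1, myLSF)
    out = ot.CompositeRandomVector(f, ot.RandomVector(dist))
    event = ot.ThresholdEvent(out, ot.Less(), 0)
    alg = ot.FORM(optimAlgo, event, dist.getMean())
    alg.run()
    return alg.getResult().getHasoferReliabilityIndex()

h = 1e-4
print((beta_form_mu1(mu_1 + h) - beta_form_mu1(mu_1 - h))/(2*h))
# expect a value close to sensi_dist[0] (~0.59)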