# Calculations for Functional Safety Quantities, Formulas and Methods

#### D Importances

Importances indicate the influence of each basic event on a system parameter. The literature contains a whole series of importances, which are often defined differently and almost always without naming the system parameter for which they were defined. Thus, for importances too, one often speaks only of "failure probabilities".

Importances for the system failure rate $$h_{\mathrm {sys}}$$ (called PFH in [IEC 61508]) are sought in vain in the literature. This is understandable insofar as importances are almost always defined in connection with fault trees, and the computation of the system failure rate with fault trees is itself only rarely treated (e. g. in [NUREG]). Some importances can be applied directly to the failure rate, some analogously, and some cannot be meaningfully defined for the failure rate at all.

Although importances are mostly defined for use with fault trees, some of them can also be applied to other models such as Markov models.

##### D.1 General Notes

In sections 5 to 7 it was shown that a transient (time-dependent) calculation is often necessary to obtain correct values. With importances one can often do without this: on the one hand, many importances are by definition relative quantities, so inaccuracies in numerator and denominator cancel out; on the other hand, the sole purpose of importances is to prioritize basic events or minimal cuts, for which only ratios or orders of magnitude matter, not exact numerical values.

To keep the formulas short and memorable, the dependence on the system lifetime $$T$$ and the averaging are not written out in the following: $$F$$ is written instead of $$F(T)$$, $$Q$$ instead of $$\overline {Q}(T)$$, and $$h$$ instead of $$\overline {h}(T)$$.

##### D.2 Partial Derivative (PD) and Birnbaum Importance (BI)

An immediately obvious measure of the importance of an individual basic event is the partial derivative (PD) of the system quantity $$Q$$, $$F$$ or $$h$$.

The partial derivatives of system unavailability $$Q$$ and system unreliability $$F$$ are also called Birnbaum importance.²⁵

²⁵ No source is known that refers to a partial derivative of the system failure rate as "Birnbaum importance".

###### D.2.1 Partial derivative for system unavailability

The derivative of the system unavailability $$Q_{\mathrm {sys}}$$ with respect to the unavailability of each basic event $$Q_x$$ is given by:

\begin{equation} \mathrm {I^{PD}_{Q,x}} = \frac {\partial Q_{\mathrm {sys}}}{\partial Q_x} = \frac {Q_{\mathrm {sys}}(\mathbf {Q} +\partial Q_x) - Q_{\mathrm {sys}}(\mathbf {Q})} {\partial Q_x} \end{equation}

Where $$\mathbf {Q}$$ denotes the vector of (mean values of) unavailabilities of all basic events.

Using the approximation formula (53) for the system unavailability for fault trees, the partial derivative is calculated as

\begin{equation} \mathrm {I^{PD}_{Q,x}} \approx \frac {\partial \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \left ( \prod \limits _{j=1}^{m_{\mathrm {Lit},i}} Q_j(t) \right )}{\partial Q_x} = \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \begin{cases} 0 & \text { if basic event } x \notin \mathrm {MCS}_i \\ \prod \limits _{j=1,j\neq x}^{m_{\mathrm {Lit},i}} Q_{j} & \text { if basic event } x \in \mathrm {MCS}_i \end {cases} \end{equation}

Using BDDs, the partial derivative $$\frac {\partial Q_{\mathrm {sys}}}{\partial Q_x}$$ can easily be determined exactly. If each basic event in turn is moved to the top of the BDD, as shown in Figure 42, this results in

\begin{equation} \mathrm {I^{PD}_{Q,x}} = \frac {\partial \big ( (1-Q_x) \cdot \mathrm {BDD}_0 + Q_x \cdot \mathrm {BDD}_1 \big ) }{\partial Q_x} = \mathrm {BDD}_1 - \mathrm {BDD}_0 \end{equation}

Here $$\mathrm {BDD}_0$$ is the low branch for basic event $$x$$, i. e. the system unavailability in the case that basic event $$x$$ has not failed, and $$\mathrm {BDD}_1$$ is the high branch, i. e. the system unavailability in the case that basic event $$x$$ has failed. Thus one can also write

\begin{equation} \mathrm {I^{PD}_{Q,x}} = \mathrm {BDD}_{x,1} - \mathrm {BDD}_{x,0} = Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(Q_x:=0) \end{equation}

Here $$Q_{\mathrm {sys}}(Q_x:=1)$$ denotes the system unavailability that results if the unavailability of basic event $$x$$ is set to 1 while the unavailabilities of all other basic events are left at their original values.

Since $$\mathrm {BDD}_{x,0}$$ gives the probability that the system is unavailable even though component $$x$$ is OK, and $$\mathrm {BDD}_{x,1}$$ gives the probability that the system is unavailable when component $$x$$ has failed, the difference is the probability that the system is in a state in which component $$x$$ is critical, i. e. in which the failure of component $$x$$ would lead to system failure.
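As a minimal numeric sketch (not from the source), the relation $$\mathrm {I^{PD}_{Q,x}} = Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(Q_x:=0)$$ can be evaluated for a hypothetical fault tree (A AND B) OR C; the tree structure and all numbers are assumed for illustration:

```python
# Birnbaum importance via the two conditional evaluations
# I^PD_{Q,x} = Q_sys(Q_x := 1) - Q_sys(Q_x := 0).
# The fault tree (A AND B) OR C and all unavailabilities are assumed.

def q_sys(qa, qb, qc):
    """Exact system unavailability of the fault tree (A AND B) OR C."""
    q_and = qa * qb
    return q_and + qc - q_and * qc  # inclusion-exclusion for the OR gate

def birnbaum(q, x):
    """Evaluate the system once with Q_x := 1 and once with Q_x := 0."""
    return q_sys(**{**q, x: 1.0}) - q_sys(**{**q, x: 0.0})

q = {"qa": 0.01, "qb": 0.02, "qc": 0.001}
for x in q:
    print(x, birnbaum(q, x))
```

For component A the result is $$Q_B(1-Q_C)$$, exactly the probability that A is critical: B has failed but C has not.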

###### D.2.2 Partial derivative for system unreliability

The partial derivative for system unreliability can also be specified:

\begin{equation} \mathrm {I^{PD}_{F,x}} = \frac {\partial F_{\mathrm {sys}}}{\partial F_x} = \frac {F_{\mathrm {sys}}(\mathbf {F} +\partial F_x) - F_{\mathrm {sys}}(\mathbf {F})} {\partial F_x} \end{equation}

Here $$\mathbf {F}$$ denotes the vector of unreliabilities of all basic events at a given time (usually at the system end-of-life).

• Example D.1 Let the fault tree of a system be BE1 AND BE2. Then:

\begin{equation*} F_{\mathrm {sys}}(T) = F_{\mathrm {BE1}}(T) \cdot F_{\mathrm {BE2}}(T) \end{equation*}

The derivative with respect to $$F_{\mathrm {BE1}}(T)$$ is $$F_{\mathrm {BE2}}(T)$$, and vice versa.
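A quick finite-difference check of this derivative, with assumed numbers:

```python
# Finite-difference check of Example D.1: for F_sys = F_BE1 * F_BE2
# the derivative dF_sys/dF_BE1 should equal F_BE2. All values assumed.

f1, f2 = 0.05, 0.08          # unreliabilities at end-of-life (assumed)

def f_sys(a, b):
    return a * b             # fault tree BE1 AND BE2

eps = 1e-9
d = (f_sys(f1 + eps, f2) - f_sys(f1, f2)) / eps
print(d)                     # close to f2 = 0.08
```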

As explained in section 7, a fault tree for calculating system unreliability can also contain conditions, i. e. basic events that are described by their unavailability $$Q$$. For these basic events one can form the partial derivative $$\mathrm {I^{PD}_{F,x}} = \frac {\partial F_{\mathrm {sys}}}{\partial Q_x}$$; the above formulas, however, do not apply or apply only approximately.

###### D.2.3 Partial derivative for system failure rate

For the system failure rate $$h$$, a partial derivative solely with respect to the occurrence rate $$h_x$$ of a basic event, $$\frac {\partial h_{\mathrm {sys}}}{\partial h_x}$$, makes little sense, since the system failure rate $$h$$ according to formula (65) also depends on the unavailability of each basic event:

\begin{equation} h_{\mathrm {sys}}(t) \lessapprox \sum _{i=1}^{n_{\mathrm {MCS}}} \left ( \sum _{j=1}^{n_{\mathrm {Lit,MCS_i}}} \left ( h_j(t) \cdot \prod _{k=1,k\neq j}^{n_{\mathrm {Lit,MCS_i}}} q_{i,k}(t) \right ) \right ) \end{equation}

Of course, one could use two derivatives $$\mathrm {I^{PD}_{h_h,x}} = \frac {\partial h_{\mathrm {sys}}}{\partial h_x}$$ and $$\mathrm {I^{PD}_{h_Q,x}} = \frac {\partial h_{\mathrm {sys}}}{\partial Q_x}$$. However, for most basic events $$Q_x$$ in turn depends on the failure rate of that same basic event:

\begin{equation} h_{\mathrm {sys}} = f\big (h_x,\; Q_x = g(h_x)\big ) \end{equation}

For regularly tested and repaired components, for example, the mean unavailability is $$\overline {Q} \approx \lambda \cdot (T_{\mathrm {test}}/2+\mathrm {MRT}) = h \cdot (T_{\mathrm {test}}/2+\mathrm {MRT})$$.

Therefore it makes more sense to define the importance $$\mathrm {I^{PD}_{h,x}}$$ as the derivative with respect to the (mean) failure rate $$\lambda _x$$ of the basic event:

\begin{equation} \mathrm {I^{PD}_{h,x}} = \frac {\partial h_{\mathrm {sys}}}{\partial \lambda _x} = \frac {h_{\mathrm {sys}}(\boldsymbol {\lambda } +\partial \lambda _x) - h_{\mathrm {sys}}(\boldsymbol {\lambda })} {\partial \lambda _x} \approx \frac {\partial \left ( \sum \limits _{i=1}^{n_{\mathrm {MCS}}} h_{\mathrm {MCS,i}} \right )}{\partial \lambda _x} = \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \frac {\partial h_{\mathrm {MCS,i}}}{\partial \lambda _x} \end{equation}

Using formula (64) for the failure rate $$h_{\mathrm {MCS,i}}$$ of each minimal cut

\begin{equation} \begin{split} h_{\mathrm {MCS}} & \lessapprox h_{1} \cdot Q_{2} \cdot Q_{3} \cdot \ldots \cdot Q_{m} \\ & + h_{2} \cdot Q_{1} \cdot Q_{3} \cdot \ldots \cdot Q_{m} \\ & + \dots \\ & + h_{m} \cdot Q_{1} \cdot Q_{2} \cdot \ldots \cdot Q_{m-1} \end {split} \end{equation}

results in

\begin{equation} \begin{split} \frac {\partial h_{\mathrm {MCS,i}}}{\partial \lambda _x} &\approx \frac {\partial (h_1 \cdot Q_2 \cdot Q_3 \cdot \ldots \cdot Q_m)}{\partial \lambda _x} \\ & + \frac {\partial (h_2 \cdot Q_1 \cdot Q_3 \cdot \ldots \cdot Q_m)}{\partial \lambda _x}\\ & + \dots \\ & + \frac {\partial (h_m \cdot Q_1 \cdot Q_2 \cdot \ldots \cdot Q_{m-1})}{\partial \lambda _x} \\ &=\sum _{j=1}^{m} \frac {\partial \left ( h_j \cdot \prod \limits _{k=1,k\neq j}^{m} Q_{k} \right )}{\partial \lambda _x} \end {split} \end{equation}

If basic event $$x$$ is not contained in $$\mathrm {MCS}_i$$, this derivative is zero. Otherwise, the summand with $$j=x$$ equals $$\prod \limits _{k=1,k\neq x}^{m} Q_{k}$$ (the unavailabilities in this product are all independent of basic event $$x$$), and each summand with $$j\neq x$$ equals $$h_j \frac {\partial Q_x}{\partial \lambda _x} \prod \limits _{k=1,k\neq j,k\neq x}^{m} Q_{k}$$.

This yields

\begin{equation} \mathrm {I^{PD}_{h,x}} \approx \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \begin{cases} 0 & \text {if BE } x \notin \mathrm {MCS_i} \\ \prod \limits _{k=1,k\neq x}^{m_{\mathrm {Lit},i}} Q_{k} + \frac {\partial Q_x}{\partial \lambda _x} \cdot \sum \limits _{j=1,j\neq x}^{m_{\mathrm {Lit},i}} \left ( h_j \cdot \prod \limits _{k=1,k\neq j,k\neq x}^{m_{\mathrm {Lit},i}} Q_{k} \right ) & \text {if BE } x \in \mathrm {MCS_i} \end {cases} \end{equation}

• Example D.2 Let a system consist of two different components with constant failure rates $$\lambda _1$$ and $$\lambda _2$$, which are regularly tested at different intervals $$T_{\mathrm {Test,i}}$$ and repaired immediately if necessary. The system fails dangerously if one of the components has failed and, while in this state, the second component also fails. The fault tree is thus BE1 AND BE2, and there is only one minimal cut, namely {BE1, BE2}. Then:

\begin{equation*} \overline {h_{\mathrm {sys}}} \lessapprox \lambda _1 \cdot \overline {Q_2} + \lambda _2 \cdot \overline {Q_1} \end{equation*}

For the mean unavailability of each component, $$\overline {Q_x} \approx \lambda _x \cdot T_{\mathrm {test},x}/2$$ holds, and thus for its derivative with respect to $$\lambda _x$$: $$\frac {\partial Q_x}{\partial \lambda _x} \approx T_{\mathrm {test},x}/2$$.

This yields

\begin{equation*} \mathrm {I^{PD}_{h,1}} \approx \overline {Q_2} + \lambda _2 \cdot \frac {T_{\mathrm {Test},1}}{2} = \lambda _2 \cdot \frac {T_{\mathrm {Test},2}}{2} + \lambda _2 \cdot \frac {T_{\mathrm {Test},1}}{2} = \lambda _2\,\frac {T_{\mathrm {Test},1} + T_{\mathrm {Test},2}}{2} \end{equation*}

and

\begin{equation*} \mathrm {I^{PD}_{h,2}} \approx \lambda _1 \cdot \frac {T_{\mathrm {Test},2}}{2} + \overline {Q_1} = \lambda _1 \cdot \frac {T_{\mathrm {Test},2}}{2} + \lambda _1 \cdot \frac {T_{\mathrm {Test},1}}{2} = \lambda _1\,\frac {T_{\mathrm {Test},1} + T_{\mathrm {Test},2}}{2} \end{equation*}
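The closed-form result of Example D.2 can be cross-checked numerically; the failure rates and test intervals below are assumed purely for illustration:

```python
# Example D.2 with assumed numbers: fault tree BE1 AND BE2,
# Q_x ≈ λ_x · T_test,x / 2, single minimal cut {BE1, BE2}.

lam1, lam2 = 1e-6, 2e-6        # failure rates per hour (assumed)
t1, t2 = 8760.0, 4380.0        # test intervals in hours (assumed)

def h_sys(l1, l2):
    """h_sys ≈ λ1·Q2 + λ2·Q1 with Q_x = λ_x·T_test,x/2."""
    return l1 * (l2 * t2 / 2) + l2 * (l1 * t1 / 2)

# closed form from the example: I^PD_{h,1} = λ2 · (T1 + T2) / 2
closed_form = lam2 * (t1 + t2) / 2

# finite-difference cross-check
eps = 1e-12
fd = (h_sys(lam1 + eps, lam2) - h_sys(lam1, lam2)) / eps
print(closed_form, fd)
```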

###### D.2.4 Calculation for Markov models

Due to the property mentioned above, that the partial derivative of unavailability or unreliability equals the probability that the system is in a state from which it enters a failure state when event $$x$$ occurs, the partial derivative with respect to $$Q_x$$ or $$F_x$$ equals the sum of the (mean) residence probabilities of all $$m_x$$ states from which an edge of basic event $$x$$ leads to a failure state:

\begin{equation} \mathrm {I^{PD}_{Q,x}}=\sum \limits _{j=1}^{m_x} \overline {p_j} \end{equation}

##### D.3 Risk-Reduction (RR)

The risk-reduction potential (RR) indicates how much $$\overline {Q}$$, $$F(T)$$ or $$\overline {h}$$ would be reduced if basic event $$\mathrm {BE}_x$$ never occurred, i. e. if component $$x$$ could not fail (at least not in this failure mode):

\begin{equation} I^{\mathrm {RR}}_{Q,x} = Q_{\mathrm {sys}}(\mathbf {Q}) - Q_{\mathrm {sys}}(Q_x:=0) \end{equation}

\begin{equation} I^{\mathrm {RR}}_{F,x} = F_{\mathrm {sys}}(\mathbf {F}) - F_{\mathrm {sys}}(F_x:=0) \end{equation}

The risk-reduction potential can also be applied directly to the system failure rate, because by definition it is irrelevant by which quantity, or combination of quantities, the quality of a basic event is described. However, one must then sensibly set $$h_x=0$$ and $$Q_x=0$$ at the same time:

\begin{equation} I^{\mathrm {RR}}_{h,x} = h_{\mathrm {sys}}(\mathbf {h},\mathbf {Q}) - h_{\mathrm {sys}}(h_x:=0, Q_x:=0) \end{equation}
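As a sketch with an assumed fault tree (A AND B) OR C and assumed unavailabilities, the risk-reduction potential is simply the drop in $$Q_{\mathrm {sys}}$$ when one basic event is switched off:

```python
# Risk-reduction potential I^RR_{Q,x} = Q_sys(Q) - Q_sys(Q_x := 0)
# for the assumed fault tree (A AND B) OR C.

def q_sys(qa, qb, qc):
    q_and = qa * qb
    return q_and + qc - q_and * qc  # inclusion-exclusion for the OR gate

q = {"qa": 0.01, "qb": 0.02, "qc": 0.001}
base = q_sys(**q)
rr = {x: base - q_sys(**{**q, x: 0.0}) for x in q}
print(rr)   # the single-point-of-failure C dominates
```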

##### D.4 Risk-Reduction-Worth (RRW)

The Risk-Reduction-Worth (RRW) indicates by which relative amount $$\overline {Q}$$, $$F(T)$$ or $$\overline {h}$$ would be reduced if component $$x$$ did not fail:

\begin{equation} I^{\mathrm {RRW}}_{Q,x} = \frac {Q_{\mathrm {sys}}(\mathbf {Q})-Q_{\mathrm {sys}}(Q_x:=0)} {Q_{\mathrm {sys}}(Q_x:=0)} = \frac {Q_{\mathrm {sys}}(\mathbf {Q})} {Q_{\mathrm {sys}}(Q_x:=0)}-1 \end{equation}

\begin{equation} I^{\mathrm {RRW}}_{F,x} = \frac {F_{\mathrm {sys}}(\mathbf {F})-F_{\mathrm {sys}}(F_x:=0)} {F_{\mathrm {sys}}(F_x:=0)} = \frac {F_{\mathrm {sys}}(\mathbf {F})} {F_{\mathrm {sys}}(F_x:=0)}-1 \end{equation}

\begin{equation} I^{\mathrm {RRW}}_{h,x} = \frac {h_{\mathrm {sys}}(\mathbf {h},\mathbf {Q})-h_{\mathrm {sys}}(h_x:=0, Q_x:=0)} {h_{\mathrm {sys}}(h_x:=0, Q_x:=0)} = \frac {h_{\mathrm {sys}}(\mathbf {h},\mathbf {Q})} {h_{\mathrm {sys}}(h_x:=0, Q_x:=0)}-1 \end{equation}

The Risk-Reduction-Worth can obviously assume arbitrarily large values: the larger the value, the more effective an improvement of component $$x$$ is. A value of $$\approx 0$$, on the other hand, means that component $$x$$ has practically no influence. Note: the summand $$-1$$ is often omitted.
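Continuing the same assumed fault tree (A AND B) OR C, the RRW is the ratio form of the risk-reduction potential:

```python
# Risk-Reduction-Worth I^RRW_{Q,x} = Q_sys(Q) / Q_sys(Q_x := 0) - 1
# for the assumed fault tree (A AND B) OR C.

def q_sys(qa, qb, qc):
    q_and = qa * qb
    return q_and + qc - q_and * qc

q = {"qa": 0.01, "qb": 0.02, "qc": 0.001}
base = q_sys(**q)
rrw = {x: base / q_sys(**{**q, x: 0.0}) - 1 for x in q}
print(rrw)   # eliminating C would cut the risk by a factor of ~6
```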

##### D.5 Fussell-Vesely-Importance (FV)

Dividing the risk-reduction potential by the original system quantity yields the Fussell-Vesely importance:

\begin{equation} I^{\mathrm {FV}}_{Q,x} = \frac {I^{\mathrm {RR}}_{Q,x}}{Q_{\mathrm {sys}}(\mathbf {Q})} = \frac {Q_{\mathrm {sys}}(\mathbf {Q}) - Q_{\mathrm {sys}}(Q_x:=0)}{Q_{\mathrm {sys}}(\mathbf {Q})} \end{equation}

\begin{equation} I^{\mathrm {FV}}_{F,x} = \frac {I^{\mathrm {RR}}_{F,x}}{F_{\mathrm {sys}}(\mathbf {F})} = \frac {F_{\mathrm {sys}}(\mathbf {F}) - F_{\mathrm {sys}}(F_x:=0)}{F_{\mathrm {sys}}(\mathbf {F})} \end{equation}

\begin{equation} I^{\mathrm {FV}}_{h,x} = \frac {I^{\mathrm {RR}}_{h,x}}{h_{\mathrm {sys}}(\mathbf {h},\mathbf {Q})} = \frac {h_{\mathrm {sys}}(\mathbf {h},\mathbf {Q})-h_{\mathrm {sys}}(h_x:=0, Q_x:=0)}{h_{\mathrm {sys}}(\mathbf {h},\mathbf {Q})} \end{equation}

The Fussell-Vesely importance can be calculated very easily from minimal cuts: $$Q_{\mathrm {sys}}(Q_x:=0)$$ is the fraction of system unavailability contributed by the minimal cuts that do not contain basic event $$x$$. Consequently, $$Q_{\mathrm {sys}}(\mathbf {Q}) - Q_{\mathrm {sys}}(Q_x:=0)$$ is the fraction contributed by the minimal cuts that do contain basic event $$x$$. Thus, approximately (for small $$Q_{\mathrm {MCS}}$$):

\begin{equation} I^{\mathrm {FV}}_{Q,x} \approx \frac { \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \begin{cases} 0 & \text {if } \mathrm {BE}_x \notin \mathrm {MCS}_i \\ Q_{\mathrm {MCS},i} & \text {if } \mathrm {BE}_x \in \mathrm {MCS}_i \end {cases} } {Q_{\mathrm {sys}}(\mathbf {Q})} \end{equation}

The same holds for $$I^{\mathrm {FV}}_{F,x}$$ and $$I^{\mathrm {FV}}_{h,x}$$. The Fussell-Vesely importance is thus the probability that, given that the system has failed, at least one minimal cut containing component $$x$$ has led to the failure.

Alternatively, one can use the Esary-Proschan formula (54)

\begin{equation} Q_{\mathrm {sys}}(t) \lessapprox 1-\prod \limits _{i=1}^{n_{\mathrm {MCS}}} \left ( 1-Q_{\mathrm {MCS},i}(t) \right ) \end{equation}

which yields

\begin{equation} I^{\mathrm {FV}}_{Q,x} \approx \frac {1-\prod \limits _{i=1}^{n_{\mathrm {MCS}}} \begin{cases} 1 & \text {if } \mathrm {BE}_x \notin \mathrm {MCS}_i \\ 1-Q_{\mathrm {MCS},i} & \text {if } \mathrm {BE}_x \in \mathrm {MCS}_i \end {cases} }{Q_{\mathrm {sys}}(\mathbf {Q})} \end{equation}
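A sketch of the minimal-cut evaluation (rare-event approximation) for the assumed fault tree (A AND B) OR C, whose minimal cuts are {A, B} and {C}:

```python
from math import prod

# Fussell-Vesely importance from minimal cuts (rare-event approximation):
# the fraction of Q_sys contributed by the cuts containing basic event x.
# Fault tree (A AND B) OR C, minimal cuts {A, B} and {C}; numbers assumed.

mcs = [("A", "B"), ("C",)]
q = {"A": 0.01, "B": 0.02, "C": 0.001}

def fv(x):
    q_sys = sum(prod(q[e] for e in cut) for cut in mcs)
    with_x = sum(prod(q[e] for e in cut) for cut in mcs if x in cut)
    return with_x / q_sys

print({x: fv(x) for x in q})
```

Since every minimal cut here contains either A or C, $$I^{\mathrm {FV}}_{Q,\mathrm{A}} + I^{\mathrm {FV}}_{Q,\mathrm{C}} = 1$$ in this approximation.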

##### D.6 Risk Achievement (RA)

For unavailability and unreliability, risk achievement (RA) is defined as follows:

\begin{equation} I^{\mathrm {RA}}_{Q,x} = Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(\mathbf {Q}) \end{equation}

\begin{equation} I^{\mathrm {RA}}_{F,x} = F_{\mathrm {sys}}(F_x:=1) - F_{\mathrm {sys}}(\mathbf {F}) \end{equation}

With the previously introduced definitions of the partial derivative (Birnbaum importance) and the risk-reduction potential, it follows immediately that:

\begin{equation} \begin{split} I^{\mathrm {RA}}_{Q,x} + I^{\mathrm {RR}}_{Q,x} &= \left (Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(\mathbf {Q})\right ) + \left (Q_{\mathrm {sys}}(\mathbf {Q}) - Q_{\mathrm {sys}}(Q_x:=0)\right ) \\ &= Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(Q_x:=0) \\ &= I^{\mathrm {PD}}_{Q,x} \end {split} \end{equation}
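This identity can be verified numerically, here for basic event A of the assumed fault tree (A AND B) OR C:

```python
# Numeric check of I^RA + I^RR = I^PD for basic event A
# in the assumed fault tree (A AND B) OR C.

def q_sys(qa, qb, qc):
    q_and = qa * qb
    return q_and + qc - q_and * qc

q = {"qa": 0.01, "qb": 0.02, "qc": 0.001}
base = q_sys(**q)

ra = q_sys(**{**q, "qa": 1.0}) - base                       # risk achievement
rr = base - q_sys(**{**q, "qa": 0.0})                       # risk reduction
pd = q_sys(**{**q, "qa": 1.0}) - q_sys(**{**q, "qa": 0.0})  # Birnbaum

print(ra + rr, pd)   # equal up to rounding
```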

No RA can be specified for the system failure rate $$h$$: the failure rate of a component (more generally, the occurrence rate of an event) is not dimensionless and therefore has no upper bound $$h_{\mathrm {max}}$$, so there is no upper bound $$h_{\mathrm {sys}}(h_{\mathrm {max},x})$$ either.

##### D.7 Risk-Achievement-Worth (RAW)

Putting the RA in relation to the original system quantity gives the factor by which the risk would increase if component $$x$$ had always failed (Risk-Achievement-Worth, RAW):

\begin{equation} I^{\mathrm {RAW}}_{Q,x} = \frac {Q_{\mathrm {sys}}(Q_x:=1) - Q_{\mathrm {sys}}(\mathbf {Q})}{Q_{\mathrm {sys}}(\mathbf {Q})} = \frac {Q_{\mathrm {sys}}(Q_x:=1)}{Q_{\mathrm {sys}}(\mathbf {Q})}-1 \end{equation}

\begin{equation} I^{\mathrm {RAW}}_{F,x} = \frac {F_{\mathrm {sys}}(F_x:=1) - F_{\mathrm {sys}}(\mathbf {F})}{F_{\mathrm {sys}}(\mathbf {F})} = \frac {F_{\mathrm {sys}}(F_x:=1)}{F_{\mathrm {sys}}(\mathbf {F})}-1 \end{equation}

Note: the summand $$-1$$ is often omitted.
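For the same assumed fault tree (A AND B) OR C, the RAW makes the dominance of the single point of failure C obvious:

```python
# Risk-Achievement-Worth I^RAW_{Q,x} = Q_sys(Q_x := 1) / Q_sys(Q) - 1
# for the assumed fault tree (A AND B) OR C.

def q_sys(qa, qb, qc):
    q_and = qa * qb
    return q_and + qc - q_and * qc

q = {"qa": 0.01, "qb": 0.02, "qc": 0.001}
base = q_sys(**q)
raw = {x: q_sys(**{**q, x: 1.0}) / base - 1 for x in q}
print(raw)   # C, alone in its minimal cut, dwarfs A and B
```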

As with the RA, the RAW is not applicable to failure rates, since in general no upper limit exists.

##### D.8 Criticality Importance (CRI)

The Criticality Importance (CRI) is defined as the ratio of the relative change in the system quantity to the relative change in the component quantity:

\begin{equation} I^{\mathrm {CRI}}_{Q,x} =\frac { \frac {\partial Q_{\mathrm {sys}}}{Q_{\mathrm {sys}}} } {\frac {\partial Q_x}{Q_x}} = \frac {Q_{\mathrm {sys}}(\mathbf {Q} +\partial Q_x) - Q_{\mathrm {sys}}(\mathbf {Q})} {Q_{\mathrm {sys}}(\mathbf {Q})} \cdot \frac {Q_x}{\partial Q_x} = I^{\mathrm {PD}}_{Q,x} \cdot \frac {Q_x}{Q_{\mathrm {sys}}(\mathbf {Q})} \end{equation}

\begin{equation} I^{\mathrm {CRI}}_{F,x} = \frac { \frac {\partial F_{\mathrm {sys}}}{F_{\mathrm {sys}}} } {\frac {\partial F_x}{F_x}} = \frac {F_{\mathrm {sys}}(\mathbf {F} +\partial F_x) - F_{\mathrm {sys}}(\mathbf {F})} {F_{\mathrm {sys}}(\mathbf {F})} \cdot \frac {F_x}{\partial F_x} = I^{\mathrm {PD}}_{F,x} \cdot \frac {F_x}{F_{\mathrm {sys}}(\mathbf {F})} \end{equation}

It can be extended to the failure rate by describing the component quantities $$h_x$$ and $$Q_x$$ as functions of the component's failure rate, as for the partial derivative:

\begin{equation} I^{\mathrm {CRI}}_{h,x} = \frac { \frac {\partial h_{\mathrm {sys}}}{h_{\mathrm {sys}}} } {\frac {\partial \lambda _x}{\lambda _x}} = \frac {h_{\mathrm {sys}}(\boldsymbol {\lambda } +\partial \lambda _x) - h_{\mathrm {sys}}(\boldsymbol {\lambda })} {h_{\mathrm {sys}}(\boldsymbol {\lambda })} \cdot \frac {\lambda _x}{\partial \lambda _x} = I^{\mathrm {PD}}_{h,x} \cdot \frac {\lambda _x}{h_{\mathrm {sys}}(\boldsymbol {\lambda })} \end{equation}

It is the probability that component $$x$$ led to the failure, given that the system has failed. It thus indicates where to look first for the fault after a system failure. Put another way: the greater the criticality importance, the stronger the effect of a relative improvement of the component. It is therefore sometimes called upgrading importance.
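A sketch for the assumed fault tree (A AND B) OR C; note how $$I^{\mathrm {CRI}}$$ rescales the Birnbaum importance by $$Q_x/Q_{\mathrm {sys}}$$:

```python
# Criticality importance I^CRI_{Q,x} = I^PD_{Q,x} · Q_x / Q_sys
# for the assumed fault tree (A AND B) OR C.

def q_sys(qa, qb, qc):
    q_and = qa * qb
    return q_and + qc - q_and * qc

q = {"qa": 0.01, "qb": 0.02, "qc": 0.001}
base = q_sys(**q)

def pd(x):                       # Birnbaum importance
    return q_sys(**{**q, x: 1.0}) - q_sys(**{**q, x: 0.0})

cri = {x: pd(x) * q[x] / base for x in q}
print(cri)
```

Both literals of the AND cut receive the same criticality importance, which is consistent with its interpretation as the probability that the component caused the failure: A and B can only cause a system failure together.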

##### D.9 Importances for generic basic events

It is also interesting to ask how much the system property $$Q_{\mathrm {sys}}$$, $$F_{\mathrm {sys}}$$ or $$h_{\mathrm {sys}}$$ changes if one changes a component that is used multiple times. In that case it is not the importance of a single event that is considered, but the importance of all events that refer to the same generic basic event (GBE), including any common-cause factors $$\beta$$. This is included for Example 3 in the following sections.

In particular, the importances $$\mathrm {I^{PD}}$$ and $$\mathrm {I^{CRI}}$$ are important with respect to generic basic events, because they indicate how much the system quantity changes in absolute and relative terms, respectively, if the basic quantity changes, for instance because it is not known exactly.

For fault trees, the partial derivative of the system unavailability with respect to the generic basic event xgen is calculated using the approximation formula (53) as

\begin{equation} \mathrm {I^{PD}_{Q,xgen}} \approx \frac {\partial \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \left ( \prod \limits _{j=1}^{m_{\mathrm {Lit},i}} Q_j(t) \right )}{\partial Q_{\mathrm {xgen}}} = \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \begin{cases} 0 & \text {if GBE } x \notin \mathrm {MCS}_i \\ a Q_{\mathrm {xgen}}^{a-1} \prod \limits _{j=1,j\neq \mathrm {xgen}}^{m_{\mathrm {Lit},i}} Q_{j} & \text {if GBE } x \in \mathrm {MCS}_i \end {cases} \end{equation}

where $$a$$ is the number of basic events in minimal cut $$i$$ that refer to the same generic basic event xgen. The condition $$j\neq \mathrm {xgen}$$ means that all basic events referring to the generic basic event xgen are to be skipped, regardless of their index in the minimal cut.
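The factor $$a\,Q_{\mathrm {xgen}}^{a-1}$$ can be checked with a tiny finite-difference experiment for an AND of $$a=2$$ copies of the same generic event (values assumed):

```python
# Generic-basic-event derivative: for a minimal cut containing a copies
# of the same generic event, d(Q^a)/dQ = a · Q^(a-1). Here a = 2.

q_gen = 0.01
a = 2

analytic = a * q_gen ** (a - 1)          # 2·Q for the AND of two copies

eps = 1e-8
fd = ((q_gen + eps) ** a - q_gen ** a) / eps
print(analytic, fd)
```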

The partial derivative for the system failure rate, calculated from minimal cuts, is

\begin{equation} \mathrm {I^{PD}_{h,xgen}} \approx \sum \limits _{i=1}^{n_{\mathrm {MCS}}} \begin{cases} 0 & \text {if GBE } x \notin \mathrm {MCS_i} \\ a^2\, \lambda _{\mathrm {xgen}}^{a-1} \left ( \dfrac {\partial Q_{\mathrm {xgen}}}{\partial \lambda _{\mathrm {xgen}}} \right )^{a-1} \cdot \displaystyle \prod \limits _{j=1,j\neq \mathrm {xgen}}^{m_{\mathrm {Lit},i}} Q_j & \\ +\, a\, \lambda _{\mathrm {xgen}}^{a-1} \left ( \dfrac {\partial Q_{\mathrm {xgen}}}{\partial \lambda _{\mathrm {xgen}}} \right )^{a} \cdot \displaystyle \sum \limits _{j=1,j\neq \mathrm {xgen}}^{m_{\mathrm {Lit},i}} \left ( h_j \cdot \displaystyle \prod \limits _{k=1,k\neq j,k\neq x}^{m_{\mathrm {Lit},i}} Q_{k} \right ) & \text {if GBE } x \in \mathrm {MCS_i} \end {cases} \end{equation}

##### D.10 Example importances for system unavailability

For some simple architectures, the importances with respect to $$Q_{\mathrm {sys}}$$ are listed in the following table. In Example 3, two similar events A.1 and A.2 are ANDed. Thus, the importances introduced in section D.9 with respect to the underlying generic basic event (A) are also of interest here. These are denoted here by $$I_{\mathrm {Q,genA}}$$, whereas $$I_{\mathrm {Q,A}}$$ denotes the importance of the single event A.1 or A.2, respectively. A common-cause factor between A.1 and A.2 was not assumed ($$\beta _A = 0$$).

Note: The mean values $$\overline {Q_x}$$ were always used in the calculations, i. e. $$\overline {Q_{\mathrm {A.1}}} \cdot \overline {Q_{\mathrm {A.2}}}$$ instead of $$1/T \cdot \int _0^T Q_{\mathrm {A.1}}(t) \cdot Q_{\mathrm {A.2}}(t) \; dt$$. In addition, the approximation formula (41) was used for the unavailabilities of the single events.

##### D.11 Example importances for system failure rate

For some simple architectures, the importances with respect to $$h_{\mathrm {sys}}$$ are listed in the following table. In Example 3, two similar events A.1 and A.2 are ANDed. Thus, the importances introduced in section D.9 with respect to the underlying generic basic event are also of interest here. These are denoted here by $$I_{\mathrm {h,genA}}$$, whereas $$I_{\mathrm {h,A}}$$ denotes the importance of the single event A.1 or A.2, respectively.