# Personalized Risk Calculations with a Generative Bayesian Model: Am I fast enough to react in time?

## Pt 0 - Abstract

**Abstract:** We present a Bayesian modeling and decision procedure to answer the question of whether the reaction speed of a single individual is slower and thus more risky than the speed of a randomly selected individual in a reference population. The behavioral domain under investigation is simple reaction times (SRTs). To do this, we need to consider aspects of Bayesian cognitive modeling, psychometric measurement, person-centered risk calculation, and coding with the Turing-complete functional probabilistic programming language WebPPL. We pursue several goals:

*First*, we lean on the new and paradoxical metaphor of a '*cautious gunslinger*'. We think that a whole range of risky situations can be embedded into this metaphor.

*Second*, the above described *gunslinger metaphor* can be mapped to the framework of *Bayesian decision strategies*. We want to show by way of example that within this framework the research question '*transfer the locus of longitudinal control*' in *Partial Autonomous Driver Assistant Systems* (*PADAS*) can be tackled.

*Third*, evidence-based priors for our generative Bayesian models are obtained by reuse of meta-analytical results. For demonstration purposes we reuse reaction-time interval estimates of Card, Moran, and Newell's (CMN's) meta-analysis, the Model Human Processor (MHP).

*Fourth*, the modification of priors to posterior probability distributions is weighted by a *likelihood function*, which is used to consider the SRT data from a single subject as evidence and to measure how plausibly alternative prior hypotheses generate these data.

*Fifth*, we want to demonstrate the expressiveness and usefulness of WebPPL in computing posterior distributions and personal probabilities of risk.

**Keywords:** Personal Bayesian Risk Calculation, Context-dependent Risk Potential, Single-Case Diagnostics, Cognitive Engineering Model, Reuse of Meta-Analyses as Bayesian Priors, Generative Bayesian Model, Model Human Processor, Single Subject Response Time, Probabilistic Programming Language WebPPL, Bayesian Decision Strategy, Transfer the locus of longitudinal control, Partial Autonomous Driver Assistant System, PADAS.

----------------------------------------------------------------------------------------

this is a DRAFT, please send comments or bug reports to: **claus.moebus@uol.de**

----------------------------------------------------------------------------------------

## Pt 1 - Introduction

### 1.1 Motivation

This is a study in the development of a **Bayesian cognitive engineering model** and *Bayesian psychometric decision procedure*. It is accompanied by the reuse and integration of *psychological meta-analysis data*. All computations are supported by *code* written in the *Turing-complete functional* probabilistic programming language *WebPPL*. We feel we are in the tradition of Westmeyer (1975), Bessiere, Laugier and Siegwart (2008), Pearl (2009), Lee and Wagenmakers (2013), Goodman, Tenenbaum and The ProbMods Contributors (2016), and Levy and Mislevy (2016, 2020). We pursue several goals:

*First*, we lean on the new and paradoxical metaphor of a '*cautious gunslinger*'. We think that a whole range of risky situations (Lefèvre, Vasquez, and Laugier, 2014) can be embedded into this metaphor. The agent has to answer for himself three increasingly complex *counterfactual and metaphorical questions*: 1) "*Can I draw my revolver fast enough, if my opponent needs only \(\tau_*\) milliseconds to do so?*", 2) "*Can I draw my revolver as fast as a randomly selected person of a (younger) reference population, if my opponent needs only \(\tau_*\) milliseconds to do so?*", 3) "*Is the probability of drawing my revolver slower than a randomly selected person of a (younger) reference population at most \(p=0.05\) if my opponent needs only \(\tau_*\) milliseconds?*".

*Second*, the above described *gunslinger metaphor* can be mapped to the framework of *Bayesian decision strategies*. We want to show by way of example that within this framework the research question '*transfer the locus of longitudinal control*' in *Partial Autonomous Driver Assistant Systems (PADAS)* can be tackled.

*Third*, **evidence-based priors** for our generative Bayesian models are obtained by reuse of meta-analytical results. For demonstration purposes we reuse reaction-time interval estimates of Card, Moran, and Newell's (CMN's) meta-analysis, the Model Human Processor (MHP). According to the MHP, total simple reaction times (SRTs) of an arbitrary computer user are composed of three latent time components related to perception, cognition, and motor processes.

*Fourth*, the modification of priors to posterior probability distributions is weighted by a *likelihood function*, which is used to consider the SRT data from a single subject as evidence and to measure how plausibly the alternative prior hypotheses generate these data. *Posteriors* are obtained by runs of the Metropolis-Hastings Markov-Chain-Monte-Carlo *(MH-MCMC) algorithm* provided in Turing-complete, functional WebPPL.

*Fifth*, we want to demonstrate the expressiveness and usefulness of *WebPPL* in computing posterior distributions and personal probabilities of risk. When SRT-specific values-at-risk (Linsmeier & Pearson, 2000; Cottin & Döhler, 2009, p.114ff; Petters & Dong, 2016, p.178) are externally provided, prior risk probabilities can be compared to posterior risk probabilities. It can be checked whether there is a substantial or even striking increase, which we call *risk-excess*. This way it is possible to answer the above mentioned questions. So, *hazardous scenarios* (e.g. traffic scenarios) with only a few behavioral data of a single subject (e.g. a driver) can be mapped to the *paradoxical and counterfactual cautious gunslinger scenario* and to *Bayesian psychometric decision procedures*.

### 1.2 Generation of Simple Reaction Times (SRTs) under MHP Guidance

#### Card, Moran, and Newell's Model Human Processor (MHP)

In their seminal book **The Psychology of Human-Computer Interaction (PHCI)** Card, Moran, and Newell (CMN, 1983) present the solution of several *Human-Computer Interaction (HCI)* design problems. The proposed solutions are based on some basic and abstract human information-processing mechanisms and are summarized in **the Model Human Processor (MHP)**. The **MHP** is a (simplified) **engineering model** and a **static calculation guide** of the human **perceptual-cognitive-motor system**: "*It can be divided into three interacting subsystems: (1) the perceptual system, (2) the motor system, and (3) the cognitive system, each with its own memories and processors*" (CMN, 1983, p.24).

This way, the **MHP** can be used as a simple **static calculation guideline**, e.g. to predict human response times. According to CMN, MHP-guided design activities belong to *applied information-processing psychology*, supporting engineering activities like *task analysis, calculation, and approximation* (CMN, 1983, p.9f; 1986). A more recent meta-analysis can be found in Jastrzembski and Charness (2007).

In the MHP the meta-analytic knowledge is quantified by **time intervals** with interval boundaries and 'typical' values: "*We can define three versions of the model: one in which all the parameters listed are set to give the worst performance* (**Slowman**)*, one in which they are set to give the best performance* (**Fastman**)*, and one set for a nominal performance* (**Middleman**)" (CMN, 1983, p.44).

Interval data from even more recent meta-analyses can easily be integrated into our Bayesian SRT model by substituting the CMN intervals with more recent ones. As an example we refer to Gratzer and Becke (2009, p.126). They present interval data similar to CMN (1983) but for reaction phases in *braking events*. They report intervals for **basic reaction time** (*'Reaktionsgrundzeit'*), **conversion time** (*'Umsetzzeit'*), and **response time** (*'Ansprechzeit'*), which add up to **total reaction time**.

#### MHP-Composition of Processor Cycle Times

The uncertainties in the parameters of the MHP are captured by three subversions of the MHP (CMN, 1983, p.44):

- *Middleman* is the version ... in which all the parameters ... are set to give the *normal* performance.
- *Fastman* is the version ... in which all the parameters ... are set to give the *best* performance.
- *Slowman* is the version ... in which all the parameters ... are set to give the *worst* performance.

Cycle times of hypothetical perceptual (*P*), cognitive (*C*), and motor (*M*) processor are reported according to the interval-template

$$ \tau_X := \tau_{X_{Middleman}} [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \text{ ; } X = P, C, M \qquad(1) $$

and similarly for the *total reaction time* \( T \)

$$\tau_T := \tau_{T_{Middleman}} [\tau_{T_{Fastman}} \sim \tau_{T_{Slowman}}] \qquad(2)$$

According to CMN \(\tau_T\) should be the sum of the *specific* component cycle times \(\tau_{X_{Zman}} \; ; X = P, C, M \; ; Z = Middle, Fast, Slow \) :

$$\tau_{T_{Zman}} = \sum_{X \in \{P,C,M\}} \tau_{X_{Zman}} \qquad(3)$$

The meaning of (1) is that \( \tau_X \) is ranging from \(\tau_{X_{Fastman}}\) to \(\tau_{X_{Slowman}}\) with a 'typical' value \(\tau_{X_{Middleman}}\), and (2) has a similar meaning for the *total reaction time* \(\tau_T\). CMN not only provide quantitative intervals for (1) but also for (2), which obey the constraints (3). But a given left side of (2) can be fulfilled by many more 3-tuples \((\tau_P, \tau_C, \tau_M)\) than considered in (3) on the right-hand side. This is why we introduce for modelling purposes a new, less constrained variable \(\tau_\Sigma\) replacing \(\tau_T\).

$$ \tau_\Sigma := \tau_P + \tau_C + \tau_M = \sum_{X \in \{P,C,M\}} \tau_X . \qquad(4) $$

The semantics of a 'typical value' is not formally specified by CMN (CMN, 1983, p.44f). We attempt various formal interpretations of the ambiguous term 'typical value' through statistics such as mode, median, and mean. These interpretations lead to different *weakly informed prior *belief distributions.

#### MHP-\(\tau_X\)-intervals

Cycle-times \(\tau_X \) and related \(\tau_X \; ; \; X \in \{ P, C, M \}\)-intervals are displayed in Table 1 (CMN, Fig 2.1, p.26):

- \(\tau_P := 100 \; [ \; 50 \sim 200] \text{ msec } \) ; cycle time of *perceptual* processor (CMN, 1983, p.32f)
- \(\tau_C := \; 70 \; [ \; 25 \sim 170] \text{ msec } \) ; cycle time of *cognitive* processor (CMN, 1983, p.42f)
- \(\tau_M := \; 70 \; [ \; 30 \sim 100] \text{ msec } \) ; cycle time of *motor* processor (CMN, 1983, p.34f)
- \(\tau_T := 240 \; [105 \sim 470] \text{ msec } \) ; total reaction time (CMN, 1983, Fig 2.1, p.26, p.66, p.433f)

**(Table 1)**
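The additivity constraint (3) can be checked directly against the Table 1 values. Since WebPPL is embedded in JavaScript, we sketch this as a small plain-JavaScript snippet (the object layout is ours; the numbers are CMN's):

```javascript
// CMN cycle-time intervals in msec, per MHP version (Fastman/Middleman/Slowman)
const tau = {
  P: { fast: 50, middle: 100, slow: 200 },  // perceptual processor
  C: { fast: 25, middle:  70, slow: 170 },  // cognitive processor
  M: { fast: 30, middle:  70, slow: 100 }   // motor processor
};

// constraint (3): total reaction time per MHP version is the component sum
const totals = {};
for (const z of ['fast', 'middle', 'slow']) {
  totals[z] = tau.P[z] + tau.C[z] + tau.M[z];
}
console.log(totals); // reproduces tau_T := 240 [105 ~ 470]
```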

#### Generation of Simple Reaction Times (SRTs) in MHP's Example 9

One of the standard problems in **The Psychology of Human-Computer Interaction** (CMN, 1983) to motivate the use of the *MHP* is **example 9**: "*A user sits before a computer display terminal. Whenever any symbol appears, he is to press the space bar. What is the time between signal and response?*" (CMN, 1983, p.66).

**Solution.** "*Let us follow the course of processing through the Model Human Processor in Figure ... The user is in some state of attention to the display ... When some physical depiction of the letter A (we denote it \(\alpha\)) appears, it is processed by the Perceptual Processor, giving rise to a physically-coded representation of the symbol (we write it \(\alpha'\)) in the Visual Image Store and very shortly thereafter to a visually coded symbol (we write it \(\alpha''\)) in Working Memory ... This process requires one Perceptual Process cycle* \(\tau_P\). *The occurrence of the stimulus is connected with a response ..., requiring one Cognitive Processor cycle,* \(\tau_C\). *The motor system then carries out the actual physical movement to push the key ..., requiring one Motor Processor cycle,* \(\tau_M\). *Total time required is* \(\tau_P + \tau_C + \tau_M\). *Using Middleman values, the total time required is 100 + 70 + 70 = 240 msec. Using Fastman and Slowman values gives a range 105 ~ 470 msec.*" \( \blacksquare \) (CMN, Fig 2.1, p.26, p.66, p.433f)

These are the cycle times in *msec* of the *hypothetical* perceptual processor, the cognitive processor, and the motor processor, respectively.


## Pt 2 - Priors

Cycle times of the hypothetical perceptual (P), cognitive (C), and motor (M) processors are reported according to the definition template

$$ \tau_X := \tau_{X_{Middleman}} [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \;\;; \;\; X = P, C, M. \qquad(1)$$

The meaning of (1) is that \( \tau_X \) is ranging from \(\tau_{X_{Fastman}}\) to \(\tau_{X_{Slowman}}\) with a 'typical' value \(\tau_{X_{Middleman}}\). The formal semantics of a 'typical' value are left unspecified by CMN (CMN, 1983, p.44f). We try various interpretations like *mode* (Pt 2.1, Pt 3.1), *median* (Pt 2.2, Pt 3.2), or *mean* (Pt 2.3-2.4, Pt 3.3-3.4) in our generative model's prior belief pdfs. We expect that the corresponding prior **triangular** pdfs are more left-skewed in that same order because of the well-known '*mode-median-mean inequality*' \(mode < median < mean\) (Arens et al., 2020, p.1372). So we expect that the prior with the interpretation '*typical value* is *mean*' is more left-skewed than the others.

If the interpretation of 'typical value' is the **mode** of the prior pdf, we get along with a *simple* triangular prior (Pt 2.1, 3.1)

$$Triangle_{mode}(a=\tau_{Fastman}, b=\tau_{Slowman}, c=\tau_{Middleman}). \qquad(5)$$

If the interpretation is the **median** of the prior pdf, then we can stick to the triangular prior but with a slightly more complicated parameter \(c\)

$$c = mode_{Triangle} = h_{median_{Triangle}}( \tau_{X_{Fastman}}, \tau_{X_{Slowman}},\tau_{X_{Middleman}}). $$

The function \( h_{median_{Triangle}} \) is a mapping from the *median* of a triangular pdf to its *mode*. In this case the prior pdf is (Pt 2.2, 3.2)

$$Triangle_{median}(a=\tau_{Fastman}, b=\tau_{Slowman}, c=h_{median_{Triangle}}(a, b, median_{Triangle})).\qquad(6)$$

If the interpretation of 'typical value' is the *mean* of the prior, we *cannot* use a *symmetric* pdf like the *Gaussian*, because the MHP-intervals are *asymmetric* around the 'typical value'. For the interpretation '*typical value* is *mean*' we can either reuse the triangular distribution similar to the median interpretation (Pt 2.3, 3.3) or use the Gamma distribution (Luce, 1986) as prior belief pdf (Pt 2.4, 3.4). In the case of reusing the triangular distribution the pdf is

$$Triangle_{mean}(a=\tau_{Fastman}, b=\tau_{Slowman}, c=h_{mean_{Triangle}}(a, b, mean_{Triangle})).\qquad(7)$$

where the function \(h_{mean_{Triangle}}(a, b, mean_{Triangle}) \) is a mapping from the *mean* of a triangular pdf to its *mode*.

Besides its asymmetric shape there is another argument in favour of a Gamma prior. It could be the case that each of the three latent processes is the result of a convolution of several latent components' pdfs. If these components have Gamma latency distributions with a common scale parameter, their sum is also Gamma-distributed: the convolution of such Gammas is a Gamma distribution.


## Pt 2.1 - Triangular Priors for Interval 'Modes' Extracted From CMN's MHP

Because the MHP-intervals provide only scarce information in the form of the 'typical' values, the lower, and the upper bounds, we chose as *weakly informative* priors the *triangular* (Pt. 2.1-2.3) and the \(\Gamma\)-distribution (Pt. 2.4). When using *triangular* priors the 'inspired guesses' are the *mode*, the *median*, and the *mean*; in the case of \(\Gamma\) priors it is again the *mean*.

Here in Pt 2.1 the 'inspired guess' is the **mode** of the triangular prior pdf. In other words we force the MHP-value \( \tau_{X_{Middleman}} \) to be the *mode* of the prior pdf.

The PDF is

$$ f_{Triangle}(x | a, b, c) := \left\{ \begin{array}{clr} 0 & \text{ for } x \lt a, \\ \frac{2(x-a)}{(b-a)(c-a)} & \text{ for } a \leq x \lt c, \\ \frac{2}{(b-a)} & \text{ for } x = c, & \qquad(8) \\ \frac{2(b-x)}{(b-a)(b-c)} & \text{ for } c \lt x \leq b, \\ 0 & \text{ for } b \lt x. \end{array} \right .$$

The generation of triangular-distributed random variates is done by the following mapping. *Given a random variate \(U\) drawn from the uniform distribution in the interval \((0, 1)\), then the variate*

$$ X_{Triangle} | a, b, c := \left\{ \begin{array}{clr} a + \sqrt{U(b-a)(c-a)} & \text{ for } 0 \lt U \lt F(c), & \qquad(9.1) \\ b - \sqrt{(1-U)(b-a)(b-c)} & \text{ for } F(c) \leq U \lt 1 & \qquad(9.2) \end{array} \right.$$

where $$F(c | a, b) := (c - a) / (b - a).$$
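WebPPL has no built-in triangular distribution, so (9.1)/(9.2) can be implemented as an inverse-CDF transform of a uniform variate. A plain-JavaScript sketch (the function name is ours):

```javascript
// Inverse-CDF sampling from Triangle(a, b, c) as in (9.1)/(9.2);
// F(c | a, b) = (c - a) / (b - a) is the CDF value at the mode c.
function triangleSample(a, b, c, u) {               // u ~ Uniform(0, 1)
  const Fc = (c - a) / (b - a);
  return u < Fc
    ? a + Math.sqrt(u * (b - a) * (c - a))          // (9.1): left branch
    : b - Math.sqrt((1 - u) * (b - a) * (b - c));   // (9.2): right branch
}

// sketch: draw tau_P variates from Triangle(a = 50, b = 200, c = 100)
const draws = Array.from({ length: 10000 },
  () => triangleSample(50, 200, 100, Math.random()));
const inRange = draws.every(x => x >= 50 && x <= 200);
console.log('all draws in [50, 200]:', inRange);
```

At \(u = F(c)\) both branches return the mode \(c\), which is a convenient sanity check for the implementation.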

The prior pdfs are displayed in Figs 2.1.01 - 05. In Figs 2.1.01 - 03 the MHP-values \( \tau_{X_{Middleman}} \) were defined to be the **mode** of the corresponding *triangular* priors. The \( \tau_\Sigma\)-distribution in Fig 2.1.04 is the *convolution* of the priors \(\tau_P, \tau_C, \tau_M\) in Figs 2.1.01 - 03. It was obtained by summing the samples of the *latent* components (4).

Fig 2.1.05 displays the prior pdf \(\sigma_{\tau_{\Sigma}} \sim \Gamma(k=4, \theta=20)\) for the standard deviation \(\sigma_{\tau_{\Sigma}}\) of the Gaussian likelihood function (10) for i.i.d. SRT-data

$$ L_N(\tau_\Sigma, \sigma_{\tau_{\Sigma}} |SRTs) := \prod_{i=1}^m N(SRT_i | \tau_\Sigma, \sigma_{\tau_{\Sigma}}) \text{ ; m = #SRTs = nr of SRT-data.} \qquad(10) $$

We chose \(\Gamma(4, 20)\) because we thought that a mean of \( \mu=80\) and a \(\sigma=40\) for the \(\Gamma\)-pdf of \(\sigma_{\tau_{\Sigma}}\) would provide sufficient *unexplained* variation in the Gaussian likelihood, independent of the *true* variation due to \(\tau_\Sigma\). The independence assumption introduced by the product in (10) could be criticized because all SRT-data (Fig 3.1) stem from a *single* subject. But for the moment we stick to this assumption.
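The likelihood (10) is easiest to handle on the log scale, where the product becomes a sum. A plain-JavaScript sketch (the function names are ours; the SRT values are made-up illustration data, not CMN's):

```javascript
// log-density of N(x | mu, sigma)
function gaussianLogPdf(x, mu, sigma) {
  const z = (x - mu) / sigma;
  return -0.5 * z * z - Math.log(sigma * Math.sqrt(2 * Math.PI));
}

// log of the i.i.d. Gaussian likelihood (10): sum_i log N(SRT_i | mu, sigma)
function logLikelihood(srts, mu, sigma) {
  return srts.reduce((acc, x) => acc + gaussianLogPdf(x, mu, sigma), 0);
}

// hypothetical SRT data (msec) of a single subject, for illustration only
const srts = [250, 265, 240, 280, 255];
console.log(logLikelihood(srts, 240, 80));
```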

Three parametrizations of the \(\Gamma\) distribution are known. We chose the one with *shape* \(k\) and *scale* \(\theta\) parameters (with \(k > 0, \; \theta > 0\)). Under this selection *expectation*, *variance*, and *standard deviation* are functions of the *hyperparameters* \(k\) and \(\theta\):

$$E(\sigma_{\tau_{\Sigma}}|k,\theta) = \mu(\sigma_{\tau_{\Sigma}}|k,\theta) = k\theta = 4 \cdot 20 = 80, $$

$$ Var(\sigma_{\tau_{\Sigma}}|k,\theta) = \sigma^2(\sigma_{\tau_{\Sigma}}|k,\theta) = k\theta^2 = 4 \cdot 20^2 = 1600, $$

and $$\sqrt {Var(\sigma_{\tau_{\Sigma}}|k,\theta)} = \sigma(\sigma_{\tau_{\Sigma}}|k,\theta) = \sqrt {k}\theta = \sqrt{4} \cdot 20 = 40.$$
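This hyperparameter arithmetic for \(\Gamma(k=4, \theta=20)\) can be reproduced in a few lines of plain JavaScript (the mode line uses the standard \((k-1)\theta\) formula, valid for \(k \ge 1\)):

```javascript
// shape-scale parametrization of the Gamma distribution:
// E = k*theta, Var = k*theta^2, SD = sqrt(k)*theta, mode = (k-1)*theta
const k = 4, theta = 20;
const mean = k * theta;               // 80
const variance = k * theta * theta;   // 1600
const sd = Math.sqrt(k) * theta;      // 40
const mode = (k - 1) * theta;         // 60
console.log({ mean, variance, sd, mode });
```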


## Pt 2.2 - Triangular Priors for Interval Medians Extracted From CMN's MHP

As a *second alternative for a weakly informative* prior we chose the triangular distribution with the *median* as an "inspired guess" for MHP's 'typical value'. In other words we force the MHP-value \(\tau_{X_{Middleman}}\) to be the *median* of the prior pdf.

The ** median** of a triangular distribution is defined as:

$$ median_{Triangle} = \left\{ \begin{array}{ccc} a + \sqrt{\frac{(b-a)(c-a)}{2}}, & \text{ for } c \geq \frac{a+b}{2}, & \qquad(11.1) \\ b - \sqrt{\frac{(b-a)(b-c)}{2}}, & \text{ for } c \lt \frac{a+b}{2} & \qquad(11.2) \end{array} \right. $$

If we interpret the 'typical' value \(\tau_{X_{Middleman}}\) as the *median* of a \(Triangle(a, b, c)\)-pdf with bounds \(a = \tau_{X_{Fastman}}\) and \(b = \tau_{X_{Slowman}}\) then we need to determine the unknown third parameter \(c = mode_{triangle}\) from the information given by \(median_{Triangle}, \tau_{X_{Fastman}}, \text{ and } \tau_{X_{Slowman}}\). In short, we apply the function \(h_{median_{Triangle}}\) to instantiated \(median_{Triangle}, \tau_{X_{Fastman}}, \text{ and } \tau_{X_{Slowman}}\) statistics. The result of the function application provides the desired \(mode_{Triangle} = c \).

$$c := mode_{Triangle} = h_{median_{Triangle}}( \tau_{X_{Fastman}}, \tau_{X_{Slowman}},\tau_{X_{Middleman}}) \qquad(12)$$

which is

$$ c := mode_{Triangle} = h_{median_{Triangle}}(a, b, median_{Triangle}) \qquad(13)$$

*First*, we derive \(c\) when \( c \geq \frac{a+b}{2}\; \) and \(md \equiv median :\)

$$(md - a) = \sqrt{\frac{(b-a)(c-a)}{2}}$$

$$(md - a)^2 = \frac{(b-a)(c-a)}{2}$$

$$2(md - a)^2 = (b-a)(c-a)$$

$$\frac{2(md - a)^2}{(b-a)} = (c-a)$$

$$c := h_{median_{Triangle}}(a, b, md_{Triangle}) = \frac{2(md - a)^2}{(b-a)} + a \; \qquad(14)$$

*Second*, we derive \(c\) when \( c \lt \frac{a+b}{2}\;:\)

$$(md - b) = - \sqrt{\frac{(b-a)(b-c)}{2}}$$

$$- (md - b) = \sqrt{\frac{(b-a)(b-c)}{2}}$$

$$(md - b)^2 =\frac{(b-a)(b-c)}{2}$$

$$2(md - b)^2 = (b-a)(b-c)$$

$$\frac{2(md - b)^2}{(b-a)} = (b-c)$$

$$\frac{2(md - b)^2}{(b-a)} - b = - c$$

$$c := h_{median_{Triangle}}(a, b, md_{Triangle}) = - \frac{2(md - b)^2}{(b-a)} + b \; \qquad(15)$$
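The two branches (14)/(15) can be combined into a single helper, sketched here in plain JavaScript (the function name is ours). We apply it to the \(\tau_P\) interval, forcing the 'typical' value 100 to be the median, and check the result against the median definition (11.2):

```javascript
// h_medianTriangle: recover the mode c of Triangle(a, b, c) from its median md,
// using (14) when the branch condition c >= (a+b)/2 of (11.1) holds,
// and (15) otherwise.
function hMedianTriangle(a, b, md) {
  const c1 = 2 * (md - a) ** 2 / (b - a) + a;   // candidate from (14)
  if (c1 >= (a + b) / 2) return c1;             // consistent with (11.1)
  return b - 2 * (md - b) ** 2 / (b - a);       // (15)
}

// tau_P: force the 'typical' value 100 to be the median of Triangle(50, 200, c)
const c = hMedianTriangle(50, 200, 100);
// round trip via (11.2): median = b - sqrt((b-a)(b-c)/2) must give 100 back
const md = 200 - Math.sqrt((200 - 50) * (200 - c) / 2);
console.log({ c, md }); // c ≈ 66.67, md ≈ 100
```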

The prior pdfs are displayed in Figs 2.2.01 - 05. We recognize that in Figs 2.2.01 - 03 the **MHP-values** \( \tau_{X_{Middleman}} ; X = P, C, M \) were defined to be the **median** of the corresponding **triangular prior**. The \( \tau_\Sigma\)-distribution in Fig 2.2.04 is the *convolution* of the priors \(\tau_P, \tau_C, \tau_M\) in Figs 2.2.01 - 03. It was obtained by summing the samples of the *latent* components (4).

Fig 2.2.05 displays the prior \(\sigma_{\tau_{\Sigma}} \sim \Gamma(k:=4, \theta:=20)\)-pdf for the standard deviation \(\sigma_{\tau_{\Sigma}}\) of the Gaussian likelihood function (10) for i.i.d. SRT-data.

$$ L(\tau_\Sigma, \sigma_{\tau_{\Sigma}}|SRTs) := \prod_{i=1}^m N(SRT_i | \tau_\Sigma, \sigma_{\tau_{\Sigma}}) \text{ ; m = #SRTs}. \qquad(10) $$

This distribution deviates from Fig.2.1.05 only by sampling errors due to the MH-MCMC sampling process.


## Pt 2.3 - Triangular Priors for Interval Means Extracted From CMN's MHP

As a *third* alternative for a *weakly informative* prior we chose the triangular distribution with the **mean** as an "inspired guess" for MHP's 'typical value'. In other words we force the MHP-value \( \tau_{X_{Middleman}} \) to be the *mean* of the prior pdf.

The ** mean** of a triangular distribution is defined as:

$$ mean_{Triangle} = \frac{a+b+c}{3}\qquad(16)$$

If we interpret the 'typical' value \(\tau_{X_{Middleman}}\) as the *mean* of a \(Triangle(a, b, c)\) pdf with bounds \(a = \tau_{X_{Fastman}}\) and \(b = \tau_{X_{Slowman}}\) we need to obtain the unknown third parameter \(c = mode_{triangle}\) of \(Triangle(a, b, c)\) from the information given by \(mean_{Triangle}, \tau_{X_{Fastman}}, \text{ and } \tau_{X_{Slowman}}\). In short, we apply the function \(h_{mean_{Triangle}}\) to instantiated statistics \(mean_{Triangle}, \tau_{X_{Fastman}}, \text{ and } \tau_{X_{Slowman}}\) to generate \(c\):

$$c := mode_{Triangle} = h_{mean_{Triangle}}( \tau_{X_{Fastman}}, \tau_{X_{Slowman}},\tau_{X_{Middleman}}) \qquad(17)$$

which is

$$ c := mode_{Triangle} = h_{mean_{Triangle}}(a, b, mean_{Triangle}) \qquad(18)$$

where

$$c := h_{mean_{Triangle}}(a, b, mean_{Triangle}) = 3 \cdot mean_{Triangle} - (a + b). \qquad(19)$$
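Equation (19) is simple enough to check numerically. A plain-JavaScript sketch (the function name is ours), using the \(\tau_M\) interval from Table 1 and forcing the 'typical' value 70 to be the mean:

```javascript
// h_meanTriangle: by (16), mean = (a + b + c)/3, so (19) inverts to
// c = 3*mean - (a + b).
function hMeanTriangle(a, b, mean) { return 3 * mean - (a + b); }

// tau_M: force the 'typical' value 70 to be the mean of Triangle(30, 100, c)
const c = hMeanTriangle(30, 100, 70);
console.log(c);                  // 3*70 - 130 = 80
// round-trip check against (16)
console.log((30 + 100 + c) / 3); // 70
```

Note that (19) can also leave the admissible range: for the \(\tau_C\) interval it yields \(3 \cdot 70 - (25 + 170) = 15 < 25 = a\), so the mean interpretation degenerates there.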

The prior pdfs are displayed in Figs 2.3.01 - 05. We recognize that in Figs 2.3.01 - 03 the **MHP-values** \( \tau_{X_{Middleman}} \) were defined to be the **mean** of the corresponding **triangular prior**. The \( \tau_\Sigma\)-distribution in Fig 2.3.04 is the *convolution* of the priors \(\tau_P, \tau_C, \tau_M\) in Figs 2.3.01 - 03. It was obtained by summing the samples of the *latent* components (4).

Fig.2.3.05 displays the prior \(\sigma_{\tau_{\Sigma}} \sim \Gamma(k:=4, \theta:=20)\)-pdf for the standard deviation of the Gaussian likelihood function (10). This distribution deviates from Fig 2.1.05 and Fig 2.2.05 only by sampling errors due to the MH-MCMC sampling process.


## Pt 2.4 - Gamma Priors for Interval Means Extracted From CMN's MHP

As a fourth alternative for a weakly informative prior we chose the Gamma pdf. We treat all process cycle times \(\tau_X \) as *gamma-distributed* waiting times, though other distributions are also popular with cognitive psychologists (Luce, 1986; Wickens, 1982; Van Zandt, 2000).

There are several reasons for this decision. *First*, because the 'typical value' \( \tau_{X_{Middleman}} \) does not lie in the middle of the CMN-interval, it is not wise to use the *mean* of a symmetric distribution like a Gaussian as the 'typical value' of a prior. In contrast, the Gamma pdf can be left-skewed. *Second*, if we assume that the processor cycle times \( \tau_X ; X = P, C, M \) are the results of many independent, identically distributed subprocesses with *exponential* waiting times, then their sum is gamma-distributed: the convolution of such exponential distributions is a Gamma distribution. In the special case when some parameters are equal, a similar result is true for independent Gamma-distributed subprocesses. For the general case with nonidentical parameters no closed-form solution is known. This general case can be handled quite easily within the sampling approach of WebPPL.
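The convolution argument can be illustrated by simulation: summing \(k\) i.i.d. exponential waiting times with scale \(\theta\) yields a \(\Gamma(k, \theta)\) variate, so the sample moments should approach \(k\theta\) and \(k\theta^2\). A plain-JavaScript Monte-Carlo sketch (sample size and the inverse-CDF exponential sampler are our choices):

```javascript
// Sum of k i.i.d. Exponential(scale = theta) waiting times ~ Gamma(k, theta).
const k = 4, theta = 20, n = 200000;

// inverse-CDF sampler for the exponential distribution
function expSample(scale) { return -scale * Math.log(1 - Math.random()); }

let sum = 0, sumSq = 0;
for (let i = 0; i < n; i++) {
  let g = 0;
  for (let j = 0; j < k; j++) g += expSample(theta); // convolution by summing
  sum += g; sumSq += g * g;
}
const mean = sum / n;                      // should approach k*theta = 80
const variance = sumSq / n - mean * mean;  // should approach k*theta^2 = 1600
console.log({ mean, variance });
```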

Here the 'inspired guess' is the *mean* of the Gamma prior pdf. In other words we force the CMN-value \( \tau_{X_{Middleman}} \) to be the mean \( \mu_\Gamma(X) \) of the Gamma prior pdf.

#### Convolution of Waiting Time and Processor Cycle Time Distributions

According to CMN the total cycle time \(\tau_T\) should be the sum of the *specific* component cycle times \(\tau_{X_{Zman}} \; ; X = P, C, M \; ; Z = Middle, Fast, Slow \) :

$$\tau_{T_{Zman}} = \sum_{X \in \{P,C,M\}} \tau_{X_{Zman}} \qquad(3)$$

We decided that the prior pdfs of the *sub*component cycle times \( \tau_X \) should be Gamma-distributed. The convolution of Gammas is known to be a Gamma only in the special case when the scale parameters of the convolved Gammas are identical. In our general case, where all parameters differ, we have to compute the result of the convolution by numerical simulation in WebPPL.

#### Extracting Gamma Distribution Parameters from MHP's Empirical Constraints

There exist three slightly different parametrizations of the gamma distribution. We chose the one with *shape* parameter \(k\) and *scale* parameter \(\theta\) (\(k>0, \theta > 0\)). *Mean*, *variance*, and *standard deviation* of a gamma-distributed random variable \(X\) are functions of these parameters:

$$E_\Gamma(X|k,\theta) = k\theta = \mu_\Gamma(X), \qquad(2.4.20.1)$$

$$Var_\Gamma(X|k,\theta) = k\theta^2 =\sigma_\Gamma^2(X), \qquad(2.4.20.2) $$

and $$\sqrt {Var_\Gamma(X)} = \sqrt {k}\theta\ = \sigma_\Gamma(X). \qquad(2.4.20.3)$$

In the WebPPL scripts we use the substitutions *\(k/a\) *("\(a\) for \(k\)")*, \(\theta/b\) *("\(b\) for \(\theta\)"), \(\mu_\Gamma/m\) ("\(m\) for \(\mu_\Gamma\)"), and \(\sigma_\Gamma/s\) ("\(s\) for \(\sigma_\Gamma\)").

To specify the Gamma parameters \(k, \theta \) we map the *concepts* of CMN's empirical intervals

$$ \tau_X := \tau_{X_{Middleman}} [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \;\;; \;\; X = P, C, M. \qquad(1)$$

to the *parameters *\(\mu_\Gamma\) and \(\sigma_\Gamma\) of the Gamma pdf and the spread parameter \(r\) (the identifiers* \(a, b, m, s, r \)* are symbols in WebPPL scripts):

$$ mode_\Gamma(X|a,b) = (a-1)b \qquad(2.4.21.1)$$

$$\mu_\Gamma(X|a,b) = ab =: m \qquad(2.4.21.2)$$

$$\sigma_\Gamma(X|a,b) = \sqrt{a}b =: s \qquad(2.4.21.3)$$

$$\tau_{X_{Fastman}} \approx (ab - r\sqrt{a}b) =: (m-rs) \qquad(2.4.22.1)$$

and

$$\tau_{X_{Slowman}} \approx (ab + r\sqrt{a}b) =: m+rs,\qquad(2.4.22.2)$$

where \(r\) is the number of standard deviations ('nSigma' in WebPPL) measuring the distance between interval boundaries \(\tau_{X_{Fastman}}\) or \(\tau_{X_{Slowman}}\) and mean \(\mu_\Gamma(X) \).

If we assume that the distance between interval mean \(\mu_\Gamma(X) \) and interval bounds \(\tau_{X_{Fastman}} \) or \(\tau_{X_{Slowman}}\) can be *approximated* by (2.4.22) so that \( \tau_{X_{Fastman}} = (\mu_X - r\sigma_X)\) and \( \tau_{X_{Slowman}} = (\mu_X + r\sigma_X)\) we can use Chebyshev's inequality

$$P(|X-\mu| \ge r\sigma) \le \frac{1}{r^2}\qquad(2.4.23) $$

to make an approximate determination of \(r\) according to:

$$ r\sigma\text{-Intervals Derived from Chebyshev's Inequality}$$

$$\text{Probability of Staying Within and Risk of Leaving Interval} $$

$$\begin{array}{|c|c|c|} \hline \\

r & P(|X-\mu| \le r\sigma) \ge 1 - \frac{1}{r^2} & P(|X-\mu| \ge r\sigma) \le \frac{1}{r^2} \\ \text{ } & P(\text{Within CMN-Interval}) & P(\text{Outside CMN-Interval}) \\ \hline \text{...} & \text{.........} & \text{.........} \\

3 & P(|X-\mu| \le r\sigma) \ge 0.8888 & P(|X-\mu| \ge r\sigma) \le \frac{1}{9} = 0.1111 \\ 4 & P(|X-\mu| \le r\sigma) \ge 0.9375 & P(|X-\mu| \ge r\sigma) \le \frac{1}{16} = 0.0625 \\ \text{...} & \text{.........} & \text{.........} \\ \hline \end{array} \qquad(2.4.24) $$

So the *width* of an *\(r\sigma\)*-interval (\(r = 1,2,3, ... \)) is

$$[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}] \approx 2r\sigma_X = 2r\sqrt{a}b =: 2rs.$$

We chose \(r = 3\) because the risk \(p \le 0.1111\) of observing a random value outside the \(3\sigma\)-interval is sufficiently small.

The *semantics* of the CMN-intervals can now be approximated by the *parameters* of a gamma \( \Gamma(a, b) \)-distribution:

$$\hat \tau_X := \hat \tau_{X_{Middleman}} [\hat \tau_{X_{Fastman}} \sim \hat \tau_{X_{Slowman}}] \approx ab[ab-r\sqrt{a}b \sim ab+r\sqrt{a}b] \qquad(2.4.25).$$

Of course this approximation includes an error, because this interval is symmetric about the mean, whereas this is not true for the CMN-interval and its 'typical value' \(\tau_{X_{Middleman}}\). Later, when we have generated the \( \hat \tau_\Gamma(X) \)-distributions, we can compare our generated \( \hat \tau_\Gamma(X) \)-intervals with those from CMN and compute the probabilities or risks of exceeding the CMN-interval limits. As we shall see, the probability of exceeding the CMN-intervals by a sample from our simulated Gamma-distributions is negligibly small.

Now, we have to identify the parameters or *\(a\)* and *\(b\)*. Solving towards *\(a\)* and* \(b\)* we get:

$$a := \frac{ab}{b}=\frac{m}{b}$$

$$b := \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]}{2r \sqrt{a}}$$

We substitute \(a = \frac{m}{b}\) into the formula for* b*:

$$b=\frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]}{2r \sqrt{\frac{m}{b}}}$$

squaring both sides:

$$b^2=\frac{[\tau_{X_{Slowman}} -\tau_{X_{Fastman}}]^2}{2^2r^2 \left(\frac{m}{b}\right)}=\frac{[\tau_{X_{Slowman}} -\tau_{X_{Fastman}}]^2}{2^2r^2}\frac{b}{m}$$

Cancelling *\(b\)* on both sides:

$$b := \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}{2^2r^2 m} \qquad(2.4.26)$$

Now, \(b\) is totally identified by the estimators or empirical constraints \(\tau_{X_{Slowman}}\), \(\tau_{X_{Fastman}}\), and \( \mu_\Gamma(X) = m \). This is not yet the case for \(a\). So we substitute \(b\) back into the equation for \(a\). This results in:

$$a := \frac{m}{b}=m \cdot \frac{2^2r^2m}{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}=\frac{(2rm)^2}{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}\qquad(2.4.27)$$

We check the results:

$$ab=\frac{(2rm)^2}{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2} * \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}{(2r)^2 m}=\frac{2^2r^2m^2}{2^2r^2m}=m $$
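The identification equations (2.4.26) and (2.4.27) are easy to automate. A Python sketch (the paper's computations are done in WebPPL; the dictionary `cmn` simply transcribes the interval values of (2.4.28)):

```python
# CMN intervals (Middleman m, Fastman, Slowman) in msec, transcribed from (2.4.28)
cmn = {
    "tau_P": (100, 50, 200),
    "tau_C": (70, 25, 170),
    "tau_M": (70, 30, 100),
    "tau_T": (240, 105, 470),
}

def gamma_params(m, fast, slow, r=3):
    """Identify Gamma(a, b) from the mean m and the interval width, eqs (2.4.26)/(2.4.27)."""
    b = (slow - fast) ** 2 / (4 * r**2 * m)    # (2.4.26)
    a = m / b                                  # (2.4.27)
    return a, b

for name, (m, fast, slow) in cmn.items():
    a, b = gamma_params(m, fast, slow)
    mode, mu, sigma = (a - 1) * b, a * b, a**0.5 * b   # (2.4.21.1) - (2.4.21.3)
    assert abs(mu - m) < 1e-9                          # check: a * b = m
    print(f"{name}: a={a:5.2f}  b={b:5.2f}  mode={mode:6.2f}  mu={mu:3.0f}  sigma={sigma:5.2f}")
```

The printed values reproduce the parameter table (2.4.29) below.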

The simulation to specify the shape of the prior Gamma-pdfs starts with two constants \(nTrials\) and \(nSigma\). \(nTrials\) is relevant for the WebPPL-\(Infer\) function, which generates \( nTrials (= 50000) \) samples \( \hat \tau_X \sim \Gamma_X(a_X,b_X) \) with the \(forward\) method. The second constant is \(nSigma\), which determines the interval width \(2 \cdot nSigma \cdot \sigma = 6 \cdot \sigma \). We have chosen \(nSigma = 3\) because sampling a \( \hat \tau_X \) outside the interval should be a rare event. According to Chebyshev's Inequality (s. above) the probability of leaving the \(3\sigma\)-interval is at most \(p \le 0.1111\) for *any* distribution. We shall demonstrate that our generated \(\hat\tau_{\Gamma(X)}\)-priors stay within the CMN-intervals with a probability \(p > 0.98\) and leave the CMN-intervals with \(p < 0.02\).

In the first step of the simulation we generate a data structure containing the \(\tau_X \)-intervals as published by CMN (2.4.28). We added a column \(\tau_{X_{middle}}\). This designates the *middle* of the interval:

$$\tau_{X_{middle}} = \frac{\tau_{X_{Fastman}} + \tau_{X_{Slowman}}} {2}$$

$$\tau_X \text{-CMN-intervals}$$

$$\begin{array}{|c|cccc|} \hline
\tau_X & \tau_{X_{Middleman}} & \tau_{X_{Fastman}} & \tau_{X_{middle}} & \tau_{X_{Slowman}} \\ \hline
\tau_P & 100 & 50 & 125 & 200 \\
\tau_C & 70 & 25 & 97.5 & 170 \\
\tau_M & 70 & 30 & 65 & 100 \\ \hline
\tau_\Sigma = \sum_{X \in \{P, C, M\}} \tau_X & 240 & 105 & 287.5 & 470 \\ \hline
\end{array} \qquad(2.4.28)$$

The last line of the above table is a result of summing the \(\tau_X\)-interval values.

Now, the Gamma-parameters \(a_{\tau_X}\) and \(b_{\tau_X}\) can be obtained by (2.4.27) and (2.4.26). These values are documented in the second and third columns of the following table (2.4.29). From the two parameters \(a_{\tau_X}\) and \(b_{\tau_X}\) we can compute \(mode_{\tau_X}, \mu_{\tau_X}\) and \( \sigma_{\tau_X}\) of the Gamma pdf according to (2.4.21.1), (2.4.21.2), and (2.4.21.3). These parameters can be found in columns 4 - 6 of the following table (2.4.29).

$$\text{ }$$

$$\Gamma_{\tau_X} \text{-parameters} $$

$$\begin{array}{|c|cc|ccc|} \hline
\hat \tau_X & a_{\tau_X} (2.4.27) & b_{\tau_X} (2.4.26) & mode_{\tau_X} (2.4.21.1) & \mu_{\tau_X} (2.4.21.2) & \sigma_{\tau_X} (2.4.21.3) \\ \hline
\hat \tau_P & 16.00 & 6.25 & 93.75 & 100 & 25 \\
\hat \tau_C & 8.39 & 8.34 & 61.65 & 70 & 24.1 \\
\hat \tau_M & 36.00 & 1.94 & 68.05 & 70 & 11.66 \\
\hat \tau_T & 15.56 & 15.42 & 224.58 & 240 & 60.6 \\ \hline
\end{array} \qquad(2.4.29) $$

$$\text{ }$$

Having identified the basic \(\Gamma\)-parameters \(a_X\) (2.4.27) and \(b_X\) (2.4.26), we are able to reconstruct the \( \hat \tau_X\)-intervals according to (2.4.25). These are found in columns 4 - 6 of table (2.4.30) below. As expected, the \( \hat \tau_{X_{Middleman}} \) are nearly identical to CMN's \( \tau_{X_{Middleman}} \), though the lower and upper \( \hat \tau_X\)-interval bounds deviate slightly from their CMN-\( \tau_X\)-counterparts. The best fit is for the motor-processor cycle-time \(\tau_M\).

As can be seen from (2.4.31) and (2.4.32; second to last column in table (2.4.30)), sampling of \( \hat\tau_X \) is done from the gamma distributions parametrized with \(a_{\tau_X}\) (2.4.27) and \(b_{\tau_X}\) (2.4.26). \(a_{\tau_X} \) and \(b_{\tau_X} \) were directly obtained from CMN's interval values. (2.4.32.5; table 2.4.30) is computed in a different way than (2.4.32.4). Whereas (2.4.32.4) is directly computed from CMN's intervals by (2.4.26) and (2.4.27), (2.4.32.5) is the sum of the sampled values (2.4.32.1) - (2.4.32.3). So (2.4.32.5) is in some way a numeric equivalent to a convolution of the distributions (2.4.32.1) - (2.4.32.3). The range of this 'convoluted' interval is a bit smaller than the values (2.4.32.4) obtained directly from CMN's interval. Because this distribution exists only as an array of *nTrials* samples, it has no parameters \( a_{\tau_X} \) and \( b_{\tau_X} \). So we condensed this array of numeric particles into a parametrized Gamma pdf with \( a_{\tau_X} \) and \( b_{\tau_X} \). These parameter estimates are presented in columns 2 - 3 of the last line of table (2.4.30). Only with these two parameters at hand can we derive the interval values (columns 4 - 6 in the last line of table (2.4.30)). The interval values of (2.4.32.5) and (2.4.32.6) are nearly identical.

$$ \hat \tau_X \sim \Gamma_{\tau_X}(a_{\tau_X}, b_{\tau_X}) \; ; X \in \{P, C, M, T\} \qquad(2.4.31) $$

$$\text{Model-generated } \hat \tau_X \text{-intervals (2.4.25) and Sampled Distributions } \hat \tau_X \sim \Gamma_{\tau_X}(a_{\tau_X},b_{\tau_X})$$

$$\begin{array}{|c|cc|ccc|lc|c|} \hline\hat \tau_X & a_{\tau_X} & b_{\tau_X} & \hat \tau_{X_{Middle}} & \hat \tau_{X_{Fast}} & \hat \tau_{X_{Slow}} & \hat \tau_X \sim \Gamma_{\tau_X}(a_{\tau_X}, b_{\tau_X}) & (2.4.32.k) & \text{Fig} \\ \hline \hat \tau_P & 16.0 & 6.3 & 100.0 & 24.7 & 175.2 & \hat \tau_P \sim \Gamma_{\tau_P}(a_P,b_P) & (2.4.32.1) & \text{2.4.1} \\ \hat \tau_C & 8.4 & 8.3 & 69.8 & -2.5 & 142.2 & \hat \tau_C \sim \Gamma_{\tau_C}(a_C,b_C) & (2.4.32.2) & \text{2.4.2} \\ \hat \tau_M & 36.0 & 1.9 & 70.1 & 35.2 & 104.9 & \hat \tau_M \sim \Gamma_{\tau_M}(a_M,b_M) & (2.4.32.3) & \text{2.4.3} \\ \hat \tau_T & 15.6 & 15.4 & 240.0 & 56.9 & 423.1 & \hat \tau_T \sim \Gamma_{\tau_T}(a_T,b_T) & (2.4.32.4) & \text{2.4.4} \\ \hline \hat \tau_\sum & - & - & 239.9 & 129.7 & 350.5 & \hat \tau_\sum = \sum_{X \in {P, C, M}} \hat \tau_X & (2.4.32.5) & \text{2.4.5} \\ \hat \tau_\sum & 42.4 & 5.7 & 240.1 & 129.8 & 350.5 & \hat \tau_\sum \sim \Gamma_\sum(42.4, 5.7) & (2.4.32.6) & \text{2.4.6} \\ \hline \end{array}\\ \qquad(2.4.30)$$
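The 'convolution by sampling' of row (2.4.32.5) and its condensation into a parametrized Gamma pdf (2.4.32.6) can be sketched in Python (an illustrative stand-in for the WebPPL forward sampling; the component parameters are taken from table (2.4.29), and the re-parametrization uses moment matching, \(a = (\mu/\sigma)^2,\; b = \sigma^2/\mu\)):

```python
import random, statistics

random.seed(7)
nTrials = 50_000

# Gamma priors (a, b) for tau_P, tau_C, tau_M, cf. table (2.4.29)
params = [(16.0, 6.25), (8.39, 8.34), (36.0, 1.94)]

# (2.4.32.5): summing componentwise samples ~ numeric convolution of the three pdfs
tau_sum = [sum(random.gammavariate(a, b) for a, b in params) for _ in range(nTrials)]

mu = statistics.fmean(tau_sum)
sigma = statistics.stdev(tau_sum)

# condense the particle array into a parametrized Gamma pdf, cf. (2.4.32.6)
a_hat = (mu / sigma) ** 2
b_hat = sigma ** 2 / mu
print(f"mu = {mu:.1f}, sigma = {sigma:.1f}  ->  a = {a_hat:.1f}, b = {b_hat:.1f}")
```

Up to Monte-Carlo noise, the recovered parameters land close to the \((42.4, 5.7)\) reported in row (2.4.32.6).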

Now, we study the risk of generating \(\Gamma_{\hat\tau_X}\)-random values falling outside the CMN-interval boundaries. These risk-probabilities are compiled in the following table (2.4.33):

$$\text{Probability of a } \Gamma_{\hat\tau_X} \text{-generated Sample Falling Outside the CMN-Interval Boundaries}$$

$$\begin{array}{|c|cc|cc|cc|} \hline
\hat \tau_X & a_{\tau_X} & b_{\tau_X} & \tau_{X_{Fastman}} & \tau_{X_{Slowman}} & P(\hat\tau_X < \tau_{X_{Fastman}}) & P(\hat\tau_X > \tau_{X_{Slowman}}) \\ \hline
\hat \tau_P & 16.0 & 6.3 & 50 & 200 & 0.00878 & 0.00054 \\
\hat \tau_C & 8.4 & 8.3 & 25 & 170 & 0.00828 & 0.00074 \\
\hat \tau_M & 36.0 & 1.9 & 30 & 100 & 0.00000 & 0.00926 \\ \hline
\hat \tau_T & 15.6 & 15.4 & 105 & 470 & 0.00294 & 0.00138 \\
\hat \tau_\sum & 42.4 & 5.7 & 105 & 470 & 0.00000 & 0.00000 \\ \hline
\end{array} \qquad(2.4.33)$$

We see that the risk of a Gamma-generated \(\hat\tau_X\)-sample falling outside a CMN-interval is as small as \(p < 0.009\). This means that our Gamma distributions do not generate values *untypical* for the CMN-intervals. In this respect they are good priors derived from the CMN-intervals. Maybe our Gamma-generated \(3\sigma\)-intervals are even too narrow. We return to this question when visualizing the Gamma distributions.
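The tail risks of table (2.4.33) can be approximated by forward sampling with nothing but the standard library; a Python sketch (the paper computes these probabilities in WebPPL; shown here for \(\hat\tau_P\)):

```python
import random

random.seed(3)

def tail_risks(a, b, fast, slow, n=200_000):
    """Monte-Carlo estimate of P(tau < fast) and P(tau > slow) for Gamma(a, b)."""
    lo = hi = 0
    for _ in range(n):
        x = random.gammavariate(a, b)
        lo += x < fast
        hi += x > slow
    return lo / n, hi / n

# tau_P ~ Gamma(16, 6.25) against the CMN bounds [50, 200], cf. table (2.4.33)
p_fast, p_slow = tail_risks(16.0, 6.25, 50, 200)
print(f"P(tau_P < 50) ~ {p_fast:.5f},  P(tau_P > 200) ~ {p_slow:.5f}")
```

Up to Monte-Carlo noise, both estimates land near the tabulated 0.00878 and 0.00054, i.e. well below one percent.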

----------------------------------------------------------------------------------------

this is a DRAFT, please send comments to: **claus.moebus@uol.de**

----------------------------------------------------------------------------------------

## Pt 3 - The Generative Model

The main idea of Bayesian inference consists in computing the posterior probability from the generative model (MacKay, 2003, p.29; Bishop, 2006, p.22; Lunn et al., 2013, p.36; Lee & Wagenmakers, 2013, p.4; Staton, 2021):

$$ posterior \propto likelihood \times prior. \qquad(20)$$

If all factors are given as unconditional \( f(...) \) or conditional \(f(...|...)\) densities, (20) can be formalized (Lunn et al., 2013, p.35; Robert & Casella, 2004, p.12) as:

$$ f(\mathbf{\theta} | \mathbf{x}) \propto f(\mathbf{x}|\mathbf{\theta}) \cdot f(\mathbf{\theta}) \qquad(21) $$

where \(\mathbf{x}\) = vector or matrix of data (e.g. *SRTs* ) and \( \mathbf{\theta} \) = vector of parameters.

The data in our Bayesian modeling study are \(m=10\) SRTs **(458, 292, 228, 403, 271, 420, 350, 235, 260, 306 msec)** (Fig. 3.1) of a 72-year old car driver gathered within a time span of 30 min by the author in 2018 on the dashboard of the website Human Benchmark.

Under the assumption that the SRTs are i.i.d. we can decompose the *likelihood* and the *priors* into a set of multiplicative factors:

$$ f^*(.) = f_{posterior}(\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs) \propto ... $$

$$ ... \propto L_N(\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs) \prod_{X \in \{\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma}\}} f_{prior}(X). \qquad{(22)} $$

The determination of the *explicit* mathematical form of the posterior conditional density \( f_{posterior}(\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs)\) (left side of (22)) is difficult, if not impossible. But if we are able to compute the densities in the generative model (22) and are satisfied with approximating the posterior in (22) only numerically, then we can use WebPPL's MCMC inference method to determine the shape, characteristic statistics, and relevant probabilities of the posterior pdf \(f^*(.) \).

The abstract idea of MCMC is easy to grasp and communicate: "*Markov chain simulation (also called Markov chain Monte Carlo, or MCMC) is a general method based on drawing values of \(\theta\) from approximate distributions and then correcting those draws to better approximate the target posterior distribution, \(p(\theta|y)\). The sampling is done sequentially, with the distribution of the sampled draws depending on the last value drawn; hence the draws form a Markov chain... The key to the method's success, however, is not the Markov property but rather the approximate distributions are improved at each step in the simulation, in the sense of converging to the target distribution.*" (Gelman et al., 2014, p.275).

In PPLs (e.g. WebPPL) the technical details (e.g. the *proposal* or *jump* distribution, the search strategy, etc.) of MCMC are hidden from the modeler (Goodman & Stuhlmüller, 2020). So the modeler doesn't know the variant of MCMC (Metropolis, Metropolis-Hastings, Gibbs, Hamilton, etc.) which is implemented for the actual model. Instead, s/he has to concentrate on the outline of the generative model. So it is wise to have a subjective mental model of MCMC which is a nonbiased abstraction and which supports writing correct and efficient computer code. This informal intuitive cognitive semantics should not substantially deviate from a formal semantics (e.g. Staton's *programs-as-measures* semantics; Staton, 2021).

The generative model (Fig 3.2) consists of

- *triangular priors* for the latent process cycle times (5)-(7) \(\tau_{X \in \{P, C, M \}} \) (Fig 2.1.01-03; Fig 2.2.01-03; Fig 2.3.01-03); \(\Gamma\)-priors are displayed in Fig 2.4.01-03,
- a *prior* pdf \(\sigma_{\tau_{\Sigma}} \sim \Gamma(k:=4, \theta:=20)\) for the standard deviation \(\sigma_{\tau_{\Sigma}}\) (Fig 2.1.05, 2.2.05, 2.3.05, 2.4.05) of the Gaussian likelihood function (10),
- the Gaussian *likelihood* function (10) (Robert & Casella, 2004, p.6) for independent identically distributed (i.i.d.) *SRT*-data.

In MCMC each parameter vector \(\theta\) is sampled in *one* step in multidimensional parameter space \(\Theta\). Then this sample is proposed for acceptance (Gelman et al., 2014, ch.11.2). The *mental* sampling in the modeler's imagined MCMC is slightly different. First, sampling proposals "\(\sim\)" is done for \(\tau_{X_i}\) and \( \sigma_{\tau_{\Sigma_i}} ;\; X = \tau_P, \tau_C, \) and \(\tau_M; \; i = 1, ... , n_{Trials}\). Then, proposals are accepted dependent on the magnitude of the densities in model equation (22). This is done by comparing the target density \(c \cdot f^*(X=x)\) for the current proposal \(x\) with the target density \(c \cdot f^*(X=x')\) for the previous one \(x'\) (Murphy, 2012, ch. 24). Proposals are accepted according to the rules of various variants of the MH-MCMC-algorithm hidden from the modeler.
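The mental model of MH-MCMC described above can be made concrete in a few lines. The following Python sketch is illustrative only (WebPPL hides all of this behind \(Infer\)): it runs a random-walk Metropolis chain over \((\tau_\Sigma, \sigma_{\tau_\Sigma})\), with the \(\tau_\Sigma\)-prior collapsed to the moment-matched \(\Gamma(42.4, 5.7)\) of (2.4.32.6) (an assumption, not the full componentwise model), the \(\sigma\)-prior \(\Gamma(4, 20)\) from the model, the ten SRTs from the text, and proposal standard deviations (10 and 5) chosen ad hoc:

```python
import math, random

random.seed(11)
SRTs = [458, 292, 228, 403, 271, 420, 350, 235, 260, 306]   # msec, cf. Pt 3

def log_gamma_pdf(x, a, b):          # Gamma with shape a, scale b
    if x <= 0: return -math.inf
    return (a - 1) * math.log(x) - x / b - math.lgamma(a) - a * math.log(b)

def log_normal_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_target(tau_sum, sigma):
    """log(prior x likelihood): the unnormalized target c*f*(.) of model eq. (22)."""
    if sigma <= 0: return -math.inf
    lp = log_gamma_pdf(tau_sum, 42.4, 5.7) + log_gamma_pdf(sigma, 4, 20)
    return lp + sum(log_normal_pdf(srt, tau_sum, sigma) for srt in SRTs)

tau, sig = 240.0, 80.0               # start values
lt = log_target(tau, sig)
chain = []
for _ in range(20_000):
    # propose x' near x, then accept with probability min(1, f*(x')/f*(x))
    tau_p, sig_p = tau + random.gauss(0, 10), sig + random.gauss(0, 5)
    lt_p = log_target(tau_p, sig_p)
    if math.log(random.random()) < lt_p - lt:
        tau, sig, lt = tau_p, sig_p, lt_p
    chain.append((tau, sig))

post_tau = [t for t, _ in chain[2_000:]]   # drop a burn-in period
print(f"posterior mean of tau_Sigma ~ {sum(post_tau) / len(post_tau):.1f} msec")
```

The posterior mean settles between the prior mean (about 240 msec) and the sample mean of the SRTs (about 322 msec), as Bayesian shrinkage predicts.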

Looking at the model's static WebPPL function code (Möbus, 2021) does not help much in understanding the dynamic sampling logic. It is better to think of four '*virtual*' computation steps triggered by the model function:

$$ \text{1. Unconditional sampling of priors for latent components: } $$

$$ \tau_X \sim Prior(...) \; ; X \in \{P, C, M\} $$

$$\text{where: } Prior \in \{Triangle_{mode}, Triangle_{median}, Triangle_{mean}, \Gamma_{mean}(a, b) \} $$

$$ \text{2. Unconditional sampling of prior for standard deviation of Gaussian likelihood: } $$

$$ \sigma_{\tau_{\Sigma}} \sim \Gamma(k=4, \theta=20) $$

$$ \text{3. Computation of deterministic function value sum of priors: } $$

$$ \tau_\Sigma = \sum_{X \in \{P,C,M\}} \tau_X $$

$$ \text{4. Conditional sampling from Gaussian likelihood with variable priors and fixed SRTs: } $$

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs) \sim L_N(\tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs) = \prod_{i=1}^m N(SRT_i | \tau_\Sigma, \sigma_{\tau_{\Sigma}}) $$

The last equation can be rewritten as the last **structural causal equation** in a system of equations of a **Structural Causal Model (SCM)** (Pearl, 2009, ch.1.4; Pearl et al., 2016, ch.1.5; Pearl & MacKenzie, 2018, p.276ff, p.283ff):

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs) := f_{SRT}(m, \tau_\Sigma, U_{SRT}(0, \sigma_{\tau_\Sigma})) \qquad{(23)} $$

where:

- \(m\) = #(i.i.d. data-points); i.i.d. = independent identically distributed,
- \(U_{SRT}(0, \sigma_{\tau_\Sigma})\) means an *unexplained, exogenous*, and *independent* random influence with an expectation \(\mu = 0\) and a standard deviation \(\sigma = \sigma_{\tau_\Sigma}\),
- \(f_{SRT}(...., U_{SRT}(0, ...))\) means a *structural causal equation* generating random samples for SRT.

'*virtual*' means that the modeler should have a cognitive model of the sampling process which is useful for developing correct and efficient code but could deviate from the low-level implementation of the MCMC-process.

To my knowledge there are at least two *probabilistic programming languages (PPLs)*, BUGS (Lunn et al., 2013) and TURING (Ge, Xu & Ghahramani, 2018), which directly use the mathematical "\(\sim\)"-symbol as an operator on the left side of their sampling statements. WebPPL uses a more indirect, nonmathematical '*observation*' syntax for sampling from the likelihood.

(23) seems to be only an academic exercise, but SCMs are simpler to compile into WebPPL-scripts than e.g. Causal Bayes Nets. The samplings of the generative model form a SCM in Pearl's sense.

The full code of the central parameterless modeling WebPPL-function **oneSampleOfModel** is:

```javascript
/**
 * @function oneSampleOfModel - takes o n e sample from the priors
 * @returns {object} posteriorTauT - returns o n e sample of posterior TauT
 */
var oneSampleOfModel = function() {
    /** @variable {number} priorTauSum - a sample from Gamma TauSum-distribution */
    var priorTauP   = oneSampleOf...
    var priorTauC   = oneSampleOf...
    var priorTauM   = oneSampleOf...
    var priorTauSum = priorTauP + priorTauC + priorTauM
    /** @variable {number} priorSigmaTauSum - a sample from SigmaTauSum Gamma distribution */
    var priorSigmaTauSum = oneSampleOfPriorSigmaTauSum()
    map(function(datum) {
        observe(Gaussian({mu: priorTauSum, sigma: priorSigmaTauSum}), datum)
    }, data)
    return {postTauP: priorTauP, postTauC: priorTauC, postTauM: priorTauM,
            postTauSum: priorTauSum, postSigmaTauSum: priorSigmaTauSum}
}
```

The map(...) function computes the likelihood and the **return {...}** statement returns the accepted conditional samples \( (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs) \) (23) of the joint posterior pdf

$$P ( \tau_P, \tau_C, \tau_M,\tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRT). \qquad(24) $$



## Pt 4 - Posteriors

## Pt 4.1 - Posteriors for Interval Modes as 'Typical' Values

Sampling according to our generative model is done with four sampling subprocesses:

$$ \text{1. Unconditional sampling triangular priors for interval modes as 'typical' values: }$$

$$ \tau_P \sim Triangle_{mode}(a=50, b=200, c=100) $$

$$ \tau_C \sim Triangle_{mode}(a=25, b=170, c=70) $$

$$ \tau_M \sim Triangle_{mode}(a=30, b=100, c=70) $$

$$ \text{2. Unconditional sampling priors for the standard deviation in the Gaussian likelihood: } $$

$$ \sigma_{\tau_{\Sigma}} \sim \Gamma(k=4, \theta=20) $$

$$ \text{3. Computing the deterministic function sum of priors: } $$

$$ \tau_\Sigma = \sum_{X \in \{P,C,M\}} \tau_X$$

$$ \text{4. Conditional sampling from Gaussian likelihood with variable priors and fixed SRTs:} $$

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs) \sim L_N( \tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs ) = \prod_{i=1}^m N(SRT_i | \tau_\Sigma, \sigma_{\tau_{\Sigma}}) $$

The sampled values of the priors are summed up to \(\tau_\Sigma \). Then the likelihood of the SRT-data is evaluated *conditional* on the prior latent \(\tau_\Sigma \) and the prior \( \sigma_{\tau_{\Sigma}} \).

Besides the code for function **oneSampleOfModel**, the most central code line in the WebPPL-script is:

**var posterior = Infer({model: oneSampleOfModel, method: 'MCMC', samples: nTrials, burn: myBurnPeriod, lag: myLag})**

The function **Infer** generates the conditional 5-tuple-samples of the joint posterior distribution:

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs) \qquad{(23.12)} $$

with the help of the Markov-Chain Monte-Carlo (MCMC)-method (Robert & Casella, 2004, ch.7; Murphy, 2012, ch.23; Lunn et al, 2014, ch.4) with the parameters \(nTrials = 6E4 = 60 000\), \(myBurnPeriod = nTrials \cdot 0.10\), and \(myLag=10 \).

\(nTrials \) is the number of random trials or samples, \(myBurnPeriod \) is the length of the 'burn-in'-period at the beginning of the MCMC-process, and \(myLag\) is the sampling jump distance of the algorithm to avoid autocorrelation between the samples. A lag of 10 means that only each 10th sample is kept; all others are discarded. Also all samples of the 'burn-in' period are discarded.
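The effect of \(myBurnPeriod\) and \(myLag\) can be illustrated on a plain list standing in for the raw chain; a minimal Python sketch (variable names mirror the WebPPL call; whether WebPPL counts \(nTrials\) before or after thinning is an implementation detail — here it counts raw draws for illustration):

```python
nTrials = 60_000
myBurnPeriod = int(nTrials * 0.10)    # first 6000 raw draws are discarded
myLag = 10                            # afterwards only every 10th draw is kept

chain = list(range(nTrials))          # stand-in for the raw MCMC draws

kept = chain[myBurnPeriod:][::myLag]  # burn-in, then thinning

print(f"{len(kept)} of {nTrials} raw draws retained")
```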

All generated samples (23) constitute the support of the joint posterior pdf

$$P ( \tau_P, \tau_C, \tau_M,\tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRT), \qquad(24) $$

whose posterior marginal densities \(Triangle_{mode}( \tau_X | SRT)\) and \(f( \tau_\Sigma | SRT)\) (Fig 4.1.01-04) are reconstructed from the posterior samples (23) by kernel density estimation methods (Hastie, Tibshirani & Friedman, 2001, ch.6.6.1; Murphy, 2012, ch.14.7.2). In addition we display the posterior pdf of the standard deviation of the Gaussian likelihood in Fig 4.1.05. Our SRT-data are distributed with approximately \( \sigma_{\tau_\Sigma} = 85 \) around the marginal posterior \(\tau_\Sigma | SRT \).
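Kernel density estimation turns a posterior particle array into a smooth marginal pdf. A minimal Gaussian-KDE sketch in Python (not WebPPL's implementation; the particles are a synthetic Gaussian stand-in, and the bandwidth follows Silverman's rule of thumb):

```python
import math, random, statistics

random.seed(5)
# synthetic stand-in for marginal posterior particles of tau_Sigma | SRTs
particles = [random.gauss(310, 24.2) for _ in range(5_000)]

def kde(x, samples, h=None):
    """Gaussian kernel density estimate at x; Silverman bandwidth by default."""
    if h is None:
        h = 1.06 * statistics.stdev(samples) * len(samples) ** -0.2
    k = sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return k / (len(samples) * h * math.sqrt(2 * math.pi))

# the reconstructed density is high near the posterior mode, low in the tails
assert kde(310, particles) > kde(400, particles)
print(f"f(310) ~ {kde(310, particles):.5f} per msec")
```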


## Pt 4.2 - Posteriors for Interval Medians as 'Typical' Values

Sampling according to our generative model is done with four sampling subprocesses:

$$ \text{1. Unconditional sampling triangular priors for interval medians as 'typical' values: } $$

$$ \tau_P \sim Triangle_{median}(a=50, b=200, c=66.7) $$

$$ \tau_C \sim Triangle_{median}(a=25, b=170, c=32.1) $$

$$ \tau_M \sim Triangle_{median}(a=30, b=100, c=75.7) $$

$$ \text{2. Unconditional sampling priors for standard deviation of Gaussian likelihood: } $$

$$ \sigma_{\tau_\Sigma} \sim \Gamma(k=4, \theta=20) $$

$$ \text{3. Computing the deterministic function sum of priors: } $$

$$\tau_\Sigma = \sum_{X \in \{P,C,M\}} \tau_X$$

$$ \text{4. Conditional sampling from Gaussian likelihood with variable priors and fixed SRTs:} $$

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs) \sim L_N( \tau_\Sigma, \sigma_{\tau_\Sigma} ; SRTs ) = \prod_{i=1}^m Gaussian(SRT_i | \tau_\Sigma, \sigma_{\tau_\Sigma}). $$

The function **Infer** generates the conditional 5-tuple-samples of the joint posterior distribution (23)

$$( \tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma}) | SRT $$

by the Markov-Chain Monte-Carlo (MCMC)-method (Robert & Casella, 2004, ch.7; Murphy, 2012, ch.23; Lunn et al, 2014, ch.4) with the parameters \(nTrials = 6E4 = 60 000\), \(myBurnPeriod = nTrials \cdot 0.10\), and \(myLag=10 \).

All generated samples (23) constitute the support of the joint posterior pdf

$$P ( \tau_P, \tau_C, \tau_M,\tau_\Sigma, \sigma_{\tau_\Sigma} | SRT), \qquad(24) $$

whose marginal densities \(Triangle_{median}( \tau_X | SRT)\) and \(f( \tau_\Sigma | SRT)\) (Fig 4.2.01-04) are reconstructed from the posterior samples (23) by kernel density estimation methods (Hastie, Tibshirani & Friedman, 2001, ch.6.6.1; Murphy, 2012, ch.14.7.2). In addition we display the posterior pdf of the standard deviation of the Gaussian likelihood in Fig 4.2.05. Our SRT-data are distributed with approximately \( \sigma_{\tau_\Sigma} = 86 \) around the marginal posterior \(\tau_\Sigma | SRT \).


## Pt 4.3 - Posteriors for Interval Means as 'Typical' Values

Sampling according to our generative model is done with four sampling subprocesses:

$$ \text{1. Unconditional sampling triangular priors for interval means as 'typical' values: } $$

$$ \tau_P \sim Triangle_{mean}(a=50, b=200, c=50.0) $$

$$ \tau_C \sim Triangle_{mean}(a=25, b=170, c=15.0) $$

$$ \tau_M \sim Triangle_{mean}(a=30, b=100, c=80.0) $$

$$ \text{2. Unconditional sampling priors for standard deviation of Gaussian likelihood: } $$

$$ \sigma_{\tau_\Sigma} \sim \Gamma(k=4, \theta=20) $$

$$ \text{3. Computing the deterministic function sum of priors: }$$

$$\tau_\Sigma = \sum_{X \in \{P,C,M\}} \tau_X$$

$$ \text{4. Conditional sampling from Gaussian likelihood with variable priors and fixed SRTs:} $$

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma} | SRTs) \sim L_N( \tau_\Sigma, \sigma_{\tau_\Sigma} ; SRTs ) = \prod_{i=1}^m N(SRT_i | \tau_\Sigma, \sigma_{\tau_\Sigma}). $$

The function **Infer** generates the conditional 5-tuple-samples of the joint posterior distribution (23):

$$( \tau_P, \tau_C, \tau_M, \tau_\Sigma, \sigma_{\tau_\Sigma}) | SRT $$

with the help of the Markov-Chain Monte-Carlo (MCMC)-method (Robert & Casella, 2004, ch.7; Murphy, 2012, ch.23; Lunn et al, 2014, ch.4) with the parameters \(nTrials = 6E4 = 60 000\), \(myBurnPeriod = nTrials \cdot 0.10\), and \(myLag=10 \).

All generated samples (23) constitute the support of the joint posterior pdf

$$P ( \tau_P, \tau_C, \tau_M,\tau_\Sigma, \sigma_{\tau_\Sigma} | SRT), \qquad{(24)} $$

whose marginal densities \(Triangle_{mean}( \tau_X | SRT)\) and \(f( \tau_\Sigma | SRT)\) (Fig 4.3.01-04) are reconstructed from the posterior samples (23) by kernel density estimation methods (Hastie, Tibshirani & Friedman, 2001, ch.6.6.1; Murphy, 2012, ch.14.7.2). In addition we display the posterior pdf of the standard deviation of the Gaussian likelihood in Fig 4.3.05. Our SRT-data are distributed with approximately \( \sigma_{\tau_\Sigma} = 86 \) around the marginal posterior \(\tau_\Sigma | SRT \).


## Pt 4.4 - Posteriors for Generative Bayesian SRT-Model with Gamma Priors for Interval Means as 'Typical' Values

Sampling according to our generative model is done with four sampling subprocesses (44.1 - 44.4):

$$ \text{1. Unconditional sampling Gamma priors for interval means as 'typical' values: } $$

$$ \tau_P \sim \Gamma(a=16, b=6.3) \qquad{(44.1)} $$

$$ \tau_C \sim \Gamma(a=8.4, b=8.3) \qquad{(44.2)} $$

$$ \tau_M \sim \Gamma(a=36, b=1.9) \qquad{(44.3)} $$

$$ \text{2. Unconditional sampling priors for standard deviation of Gaussian likelihood: } $$

$$ \sigma_{\tau_\Sigma} \sim \Gamma(k=4, \theta=20) \qquad{(35.10)=(44.4)} $$

$$ \text{3. Computing the deterministic function sum of priors: } \tau_\Sigma = \sum_{X \in \{P,C,M\}} \tau_X \qquad{(35.11)=(44.5)} $$

$$ \text{4. Conditional sampling from Gaussian likelihood with priors and fixed SRTs:} $$

$$ (\tau_P, \tau_C, \tau_M, \tau_\Sigma | SRTs) \sim L( \tau_\Sigma, \sigma_{\tau_\Sigma} ; SRTs ) = \prod_{i=1}^m Gaussian(SRT_i | \tau_\Sigma, \sigma_{\tau_\Sigma}). \qquad{(35.12)=(44.6)} $$

The function **Infer** generates the conditional 4-tuple-samples of the joint posterior distribution:

$$( \tau_P, \tau_C, \tau_M, \tau_\Sigma) | SRT \qquad(35.12)=(45) $$

with the help of the Markov-Chain Monte-Carlo (MCMC)-method (Robert & Casella, 2004, ch.7; Murphy, 2012, ch.23; Lunn et al, 2014, ch.4) with the parameters \(nTrials = 5E4 = 50 000\), \(myBurnPeriod = nTrials \cdot 0.10\), and \(myLag=10 \).

All generated samples (45) constitute the support of the joint posterior pdf

$$P ( \tau_P, \tau_C, \tau_M,\tau_\Sigma| SRT), \qquad(46) $$

whose marginal densities \(f( \tau_P | SRT)\), \(f( \tau_C | SRT)\), \(f( \tau_M | SRT)\), and \(f( \tau_\Sigma | SRT)\) (Fig 4.4.01-04) are reconstructed from the posterior samples (35.12) by kernel density estimation methods (Hastie, Tibshirani & Friedman, 2001, ch.6.6.1; Murphy, 2012, ch.14.7.2). In addition we display the posterior pdf of the standard deviation of the Gaussian likelihood in Fig 4.4.05. We see that our SRT-data are distributed with approximately \( \sigma_{\tau_\Sigma} = 89 \) around the marginal posterior \(\tau_\Sigma | SRT \).


## Pt 5 - Risk Calculations

Psychometric assessment is always problematic when there are only a few repeated measurements of the same person (Levy and Mislevy, 2016). The same is true for measuring simple reaction times (SRTs) when assessing the fitness of an individual vehicle driver (UbiCar, 2022). Test scores suffer from measurement errors and state fluctuations of the subject due to fatigue, alcohol, drugs, fever, depressive mood, or even heavy food (CogniFit, 2022).

#### 5.1 Questions concerning risks from the ego-perspective

In this situation, we suggest exploiting the diagnostic potential of our generative Bayesian SRT model. For demonstration purposes we present the **ego-perspective** of the new, paradoxical, and metaphorical scenario of a cautious gunslinger. The agent has to answer himself three increasingly complex *counterfactual and metaphoric questions*:

- 1) "*Can I draw my revolver fast enough, if my opponent needs only \(\tau_*\) milliseconds to do so?*"
- 2) "*Can I draw my revolver as fast as a randomly selected person of a (younger) reference population, if my opponent needs only \(\tau_*\) milliseconds to do so?*"
- 3) "*Is the probability of drawing my revolver slower than a randomly selected person of a (younger) reference population at most \(p=0.05\), if my opponent needs only \(\tau_*\) milliseconds?*"

In a first step, we assume that there is only *one fixed* critical SRT threshold (*SRT-at-risk*) \( \tau_*\) that separates safe from risky SRT behavior (Linsmeier & Pearson, 1996). A behavior is safe for the person under study if the subject's SRT is *below* the SRT-at-risk \( \tau_*\). Consequently, a behavior is unsafe or dangerous if the subject's SRT is *above* the SRT-at-risk \( \tau_*\). In this case, harmful consequences occur for the subject, brought about by an **opponent** ('*gunslinger*').

#### 5.2 Answer to question 1

The externally defined SRT-at-risk \( \tau_{\Sigma_*} \) defines a threshold on the *support* of the **prior** \( \tau_{\Sigma} \) so that

$$P(\tau_{\Sigma} > \tau_{\Sigma_*}) = \alpha_{*_{prior}}(\tau_{\Sigma_*}). \qquad{(25)}$$

After testing the subject we have a set of personal SRTs, and we can compute the **personal risk probability** by marginalizing and filtering the *posterior* \( f_{posterior}( \tau_P, \tau_C, \tau_M,\tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs)\):

$$P(\tau_{\Sigma} > \tau_{\Sigma_*} | SRTs) = \alpha_{*_{posterior}}(\tau_{\Sigma_*}) \qquad{(26)} $$

$$ \alpha_{*_{posterior}}(\tau_{\Sigma_*}) = \int_{\tau_{\Sigma} > \tau_{\Sigma_*}} \int_{\tau_P} ... \int_{\sigma_{\tau_{\Sigma}}} \, f_{posterior}( \tau_P, \tau_C, \tau_M,\tau_\Sigma, \sigma_{\tau_{\Sigma}} | SRTs) \; \text{d} \sigma_{\tau_{\Sigma}} ... \text{d} \tau_P \, \text{d} \tau_\Sigma. $$

Now with (26) it is possible to answer question 1 of our gunslinger scenario. The only problem left is to define the vague concept '*fast enough*' by a tolerable probability \(\alpha_{*_{posterior}}(\tau_{\Sigma_*})\).

In WebPPL, (26) is solved numerically by Monte-Carlo methods. First we marginalize the posterior pdf with the Infer-function. Then the marginal posterior is filtered to obtain the particles satisfying \(\tau_{\Sigma} > \tau_{\Sigma_*} | SRTs \). In the last step the probability \(\alpha_{*_{posterior}}(\tau_{\Sigma_*})\) is estimated by the ratio of the length of this filtered array to the length of the unfiltered marginal posterior array.
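The filtering step is a one-liner on the array of posterior particles. A Python sketch (in WebPPL one would filter the support of the marginalized \(Infer\)-object; here the particles are a synthetic Gaussian stand-in using the \(Triangle_{mode}\) posterior summary of Table 2, and the threshold 350 msec is an arbitrary choice):

```python
import random

random.seed(9)
# synthetic stand-in for the marginal posterior particles of tau_Sigma | SRTs
posterior_tau_sum = [random.gauss(310, 24.2) for _ in range(10_000)]

def risk_probability(particles, tau_star):
    """alpha_posterior(tau_star) = #(particles > tau_star) / #(particles), cf. (26)"""
    exceeding = [t for t in particles if t > tau_star]
    return len(exceeding) / len(particles)

alpha = risk_probability(posterior_tau_sum, 350)   # SRT-at-risk threshold in msec
print(f"P(tau_Sigma > 350 | SRTs) ~ {alpha:.3f}")
```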

#### 5.3 Answer to question 2

Next, the psychometric personal **risk-excess calculation** is done by comparing the *posterior* personal risk probability \(\alpha_{*_{posterior}}\) with the *prior* risk \(\alpha_{*_{prior}}\):

$$ P(\tau_{\Sigma} > \tau_{\Sigma_*} | SRTs) - P(\tau_{\Sigma} > \tau_{\Sigma_*}) = \alpha_{*_{posterior}}(\tau_{\Sigma_*}) - \alpha_{*_{prior}}(\tau_{\Sigma_*}) \qquad{(27)} $$

$$ \alpha_{*_{diff}}(\tau_{\Sigma_*}) = \alpha_{*_{posterior}}(\tau_{\Sigma_*}) - \alpha_{*_{prior}}(\tau_{\Sigma_*}) \qquad{(28)} $$

Risk-excess (27, 28) is a monotonically increasing function of the standard deviation of the priors and of the \(\tau_{\Sigma_*}\)-thresholds. The posterior \(3 \sigma_{\tau_\Sigma} \)-intervals from Pt 4.1 - 4.4 are summarized in Table 2.

$$\begin{array}{|c|c|c|c|} \hline
\tau_{P,C,M}\text{-priors} & \text{posterior }\sigma_{\tau_{\Sigma}} & \text{posterior } 3\sigma_{\tau_{\Sigma}} \text{-interval} & \text{interval range} \\ \hline
Triangle_{mode} & 24.2 & 310.0\; [237.5 \sim 382.5] & 145.0\; msec \\
Triangle_{median} & 25.1 & 307.2\; [232.1 \sim 382.8] & 150.7\; msec \\
Triangle_{mean} & 25.6 & 306.6\; [229.7 \sim 383.4] & 153.7\; msec \\
\Gamma & 25.5 & 294.9\; [218.5 \sim 371.3] & 152.8\; msec \\ \hline
\end{array} \qquad(\text{Table } 2)$$

The widest posterior interval is obtained from the \(Triangle_{mean} \)-priors of the \(\tau_\Sigma\)-components. This prior was most strongly influenced by the subject's data. Thus the use of this posterior for risk-excess calculations is most unfavorable for the person under study when the subject is suspected to be slower than a randomly chosen person from the reference population.

Answering question 2 we compute (28) for the range \(\{\tau_{\Sigma_*} | \tau_{Middleman} \le \tau_{\Sigma_*} \le \tau_{Slowman} \} \)

$$\alpha_{*_{posterior}}(\tau_{\Sigma_*}) - \alpha_{*_{prior}}(\tau_{\Sigma_*}) \; ; \quad \tau_{Middleman} \le \tau_{\Sigma_*} \le \tau_{Slowman} \qquad{(29)}$$

Results of (29) are displayed in Fig 5.11 - Fig 5.22. The most important results for \(\tau_{\Sigma_*} \) can be seen in Fig 5.14, Fig 5.18, and Fig 5.22. The maximal risk-excess lies near the threshold \(\tau_{\Sigma_*} = 290 \;msec \). This is the most unfavorable threshold for the subject. Depending on the prior, the person-specific risk-excess ranges from p = 0.25 to p = 0.65. The risk-excess decreases with growing threshold \(\tau_{\Sigma_*} \) until, perhaps surprisingly, it vanishes. This happens for threshold values \(\tau_{\Sigma_*} \gt 340 \; msec \).

Now we have provided a formal answer to question 2 of the counterfactual and metaphorical gunslinger scenario. This is even true for varying thresholds \((\tau_{\Sigma_*})\).

#### 5.4 Answer to question 3

Only if the risk-excess (27, 28) is substantially greater than, e.g., \(p = 0.05\) should the subject's SRT be considered more risky than that of a randomly chosen subject from the reference population. We think that p = 0.05 can be accepted by convention. To answer question 3 we formalize it as

$$\tau_{X_{crit}} = \min \left\{ \tau_{\Sigma_*} \;\middle|\; \tau_{Middleman} \le \tau_{\Sigma_*} \le \tau_{Slowman} \;\wedge\; \alpha_{*_{diff}}(\tau_{\Sigma_*}) \le 0.05 \right\} \qquad{(30)} $$

(30) has two meanings. First, it is the *least upper bound* of risky challenges (= the slowest, least demanding/dangerous challenge with risky consequences). Second, it is the *greatest lower bound* of nonrisky challenges (= the fastest or most demanding/dangerous challenge with nonrisky consequences).
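Assuming a helper `alphaDiff` implementing (28) is available (a hypothetical name), the threshold search of (30) can be sketched in WebPPL on a 1-msec grid; the grid endpoints stand in for \(\tau_{Middleman}\) and \(\tau_{Slowman}\):

```
// Sketch of (30): critical threshold separating risky from nonrisky challenges.
// 'alphaDiff' is a hypothetical helper implementing (28); 240/400 are placeholder bounds.
var thresholds = _.range(240, 401);                           // 1-msec grid
var risky = filter(function(t) { return alphaDiff(t) > 0.05; }, thresholds);
// smallest grid threshold above which alphaDiff stays <= 0.05
var tauCrit = risky.length > 0 ? _.max(risky) + 1 : 240;
```

Because the risk-excess curve falls off monotonically beyond its maximum, taking the largest still-risky grid point plus one step identifies the boundary of the nonrisky region.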

The critical thresholds \(\tau_{X_{crit}}\; msec \; ; \; X \in \{P, C, M, \Sigma \} \) are collected in Table 3. They partition the single subject's SRT-regions into risk-excessive and non-risk-excessive ones:

$$ \begin{array}{|c|c|c|c|c|} \hline
\text{prior} & \tau_{P_{crit}}\; msec & \tau_{C_{crit}}\; msec & \tau_{M_{crit}}\; msec & \tau_{\Sigma_{crit}}\; msec \\ \hline
Triangle_{mode}   & 172 & 144 & 83.8 & 336.6 \\
Triangle_{median} & 178 & 148 & 86.8 & 341.2 \\
Triangle_{mean}   & 178 & 150 & 88.0 & 341.2 \\ \hline
\end{array} \qquad(\text{Table } 3)
$$

The entries of Table 3 can be identified quite easily in Fig 5.11 - Fig 5.22. They have the following meaning. If the opponent's SRT \(\tau_{X_*}\) (in process \(X \in \{P, C, M, \Sigma \} \)) is \(\tau_{X_*} > \tau_{X_{crit}} \) then the probability that the subject is slower than a randomly selected subject from the reference population is below p=0.05. In other words we can be quite certain that the single subject's SRT is no more risky than that of any subject from the reference population when the opponent's SRT \(\tau_{X_*}\) is greater than \(\tau_{X_{crit}}\).

Following the entries of Table 3 we can **answer question 3** when we know the opponent's SRT-value \(\tau_*\). Furthermore we can see that the **choice of prior** is *not* important for certain **risk-avoiding decisions**. Let's concentrate on the \(Triangle_{mode}\)-prior. E.g., if the SRT-value-at-risk is \(\tau_\Sigma < 336.6 \; msec\), the increase in the single subject's prior-to-posterior risk probability (risk-excess (27)) is greater than 0.05 (Fig 5.14). The other way round, if the **SRT-value-at-risk** is \(\tau_\Sigma \ge 336.6 \; msec\), the **increase in the single subject's prior-to-posterior risk probability (risk-excess (27)) is smaller than 0.05**. This means that **for all SRT-values-at-risk greater than 336.6 msec the reaction times of our single subject at the age of 72 years do not carry a significantly higher risk than those of a typical (younger) person of the MHP reference population.** This result also holds for the two other priors with a slightly greater \(\tau_{\Sigma_{crit}}=341.2\) (Fig 5.18 and Fig 5.22).

The results of this kind of individual Bayesian risk calculation are much more precise than general statements such as "... that the chronological age of a driver cannot be a clear indicator of his sensory-motor performance." (Cohen, 2009, p.231). Instead we think that our Bayesian model combines in a near ideal way results of meta-analyses with evidence-based single-case diagnostics.

----------------------------------------------------------------------------------------

this is a DRAFT, please send comments to: **claus.moebus@uol.de**

----------------------------------------------------------------------------------------

## Pt 6 - Transfer the Locus of Longitudinal Control by a Bayesian Decision Strategy

We try to map the answers developed in the scenario of the cautious gunslinger into a **Bayesian decision strategy**. The strategy should provide a solution sketch to the applied engineering problem '*transfer the locus of longitudinal control*' within a partial autonomous driver assistant system (*PADAS*). This problem seems to have been solved satisfactorily in the case of airbags. The airbag takes control and supports the driver only in those situations that are out of his control and in which he can no longer protect himself. We have something similar in mind for longitudinal control. The PADAS agent should only become active if it knows almost for certain from previous driver behavior that the driver cannot avoid a collision. Thus the research question of an in-time take-over of control from the driver to avoid a collision has to be solved.

#### Pt 6.1 Bayesian Decision Agent

The building blocks of an agent following a Bayesian decision strategy are the answers (25) - (29) to questions 1 - 3. A Bayesian decision strategy (Robert, 2007; Kockelkorn, 2012, p.423ff; Murphy, 2012) can best be described by:

- an *agent* with *perception, deliberative, and evaluation abilities* living in an environment
- the *environment* can be described by **a set of states** \(\theta \in \Theta \) which are **hidden** from the agent, a *space of data* \(X\), a *space of strategies* \(\Delta\), and a repertoire or *space of actions* \(A\) accessible by the agent
- the agent possesses a *prior belief distribution* concerning the **states of the environment** with pdf \(f_\Theta(\theta) \)
- the agent has the ability to *evaluate* the *goodness* of a *specific hypothesis* \(\theta \in \Theta \) in **explaining the data** \( x \in X \); the goodness of the hypothesis is described by the **likelihood of the hypothesis** \(f_{X|\Theta=\theta}(X=x) \)
- the agent has the *ability to perceive* aspects of the environment which can be formalized as *data* \(x \in X \)
- the **agent revises his beliefs** with the help of the likelihood \(f_{X|\Theta=\theta}(X=x) \) to a *posterior belief distribution* concerning the **states of the environment** with pdf \(f_{\Theta|X=x}(\theta|X=x) \)
- the agent *chooses* an action \(a \in A \) based on either the prior or the posterior pdf
- the **choice of an action** is guided by a **strategy** \(\delta \in \Delta \), which is a function mapping \(\delta : X \rightarrow A\)
- the agent is able to evaluate the utility or the **prior loss of** an **action** for a **specific state** \(l(a | \theta) \); marginalizing out \(\theta\), the *prior total loss of action* is \(l(a | \Theta) \), which is called the *prior risk of action \(a\)*
- if the agent chose the action according to a strategy \(\delta\) after perceiving \(x\), we have the *posterior loss of* an *action* for a *specific state* \(l(\delta(x) | \theta) \), and the *posterior total loss of action* \(l(\delta(x) | \Theta) \) is called the *posterior risk of action \(a\)* or for short the *risk of action \(a\)*
- similar concepts hold for evaluating the *prior* and *posterior risk of strategy \(\delta \)*
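The loss and risk notions above can be operationalized directly in WebPPL. Here is a hedged sketch of the prior risk of an action; the prior and the loss function are purely illustrative placeholders, not part of the PADAS model:

```
// Sketch: prior risk of action a = E_theta[ l(a | theta) ], marginalizing out theta.
// Prior and loss below are illustrative placeholders.
var prior = Infer({method: 'forward', samples: 5000}, function() {
  return gaussian({mu: 300, sigma: 30});    // belief about the hidden state theta
});
var loss = function(a, theta) {             // hypothetical loss l(a | theta)
  if (a === 'intervene') { return 1; }      // constant cost of intervening
  return theta > 340 ? 0 : 10;              // not intervening is costly in bad states
};
var priorRisk = function(a) {               // prior total loss l(a | Theta)
  return expectation(prior, function(theta) { return loss(a, theta); });
};
```

Replacing `prior` by a posterior obtained after observing data \(x\) yields the posterior risk \(l(\delta(x) | \Theta)\) in the same way.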

#### Pt 6.2 Bayesian Longitudinal PADAS Agent

Our earlier proposals to use Bayesian models in PADAS design (Möbus & Eilers, 2011a, 2011b) were not cast in the framework of Bayesian decision strategies. This will be changed now. With the answers (25) - (30) to questions 1 - 3 as building blocks we are able to propose two strategies \(\delta_1\) and \(\delta_2\) for a Bayesian agent transferring the locus of longitudinal control. Of course priors and likelihood of the generative model have to be modified to fit the new domain of longitudinal control. Priors could be obtained from e.g. Gratzer and Becke (2009). Data to be plugged into the likelihood have to be assessed from the driver of interest.

Using (26), strategy \(\delta_1\) can be formulated with \(\tau_{TTX}\) (**TTX** = time-to-the-last-possible-damage-avoiding-PADAS-intervention) as

$$ \delta_1(\tau_{TTX}, SRTs) := \left\{ \begin{array}{lllr} \alpha_{*_{posterior}}(\tau_{TTX}) > \alpha(loss) & \mapsto & control(PADAS) & \qquad{(31.1)} \\ \alpha_{*_{posterior}}(\tau_{TTX}) \le \alpha(loss) & \mapsto & control(driver). & \qquad{(31.2)} \end{array} \right.$$

Equivalently, using (27) and (28) strategy \(\delta_2\) is

$$ \delta_2(\tau_{TTX}, SRTs) := \left\{ \begin{array}{lllr} \alpha_{*_{diff}}(\tau_{TTX}) > \alpha(loss) & \mapsto & control(PADAS) & \qquad{(32.1)} \\ \alpha_{*_{diff}}(\tau_{TTX}) \le \alpha(loss) & \mapsto & control(driver). & \qquad{(32.2)} \end{array} \right.$$

\(\tau_{TTX} \) is the last possible time intervention point of a PADAS for preventing a collision or damage. \(\alpha(loss)\) is the critical threshold probability for transferring the locus of control between the driver and the PADAS. It should have a very small value, and it depends on the costs in continuous operation, e.g. the cost of PADAS operation and the driver's feeling of discomfort with being monitored by a PADAS.
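As a hedged sketch, the decision rule of \(\delta_1\) in (31.1)/(31.2) could be coded as follows; the already-estimated tail probability \(\alpha_{*_{posterior}}(\tau_{TTX})\) is passed in as an argument, and the numeric threshold is a placeholder:

```javascript
// Sketch of strategy delta_1 (31.1/31.2); alphaLoss is a placeholder value.
var alphaLoss = 0.01;                       // critical risk threshold alpha(loss)
// alphaPostAtTTX = estimated P(tau_Sigma > tau_TTX | SRTs) from the driver's posterior
var delta1 = function(alphaPostAtTTX) {
  return (alphaPostAtTTX > alphaLoss) ?
    'control(PADAS)' :                      // (31.1): PADAS takes over longitudinal control
    'control(driver)';                      // (31.2): control stays with the driver
};
```

Strategy \(\delta_2\) is obtained by passing the risk-excess \(\alpha_{*_{diff}}(\tau_{TTX})\) instead of the posterior tail probability.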


## Pt 7 - WebPPL-Code


All computations and graphics were done within WebPPL.

- WebPPL-code and a simulation run with results for Pt. 2.1, 4.1, and 5 can be obtained here.
- WebPPL-code and a simulation run with results for Pt. 2.2, 4.2, and 5 can be obtained here.
- WebPPL-code and a simulation run with results for Pt. 2.3, 4.3, and 5 can be obtained here.
- WebPPL-code and a simulation run with results for Pt. 2.4, 4.4, and 5 can be obtained here.


## Pt 8: Summary

We developed a Bayesian methodology for studying the risks implied by the timed reactive behavior of single subjects when data of a reference population are at hand. Many hazardous traffic situations can be mapped to this scenario. To communicate the goal of this research we invented the metaphorical and counterfactual scenario of a cautious gunslinger deliberating in a Bayesian way whether s/he can draw his revolver in time, if the opponent needs \(\tau_{\Sigma_c}\) msec. The metaphorical gunslinger asks himself three questions which could be answered by the inference capabilities of our generative Bayesian model.

These questions are: 1) "*Can I draw my revolver fast enough, if my opponent needs only \(\tau_*\) milliseconds to do so ?*", 2) "*Can I draw my revolver as fast as a randomly selected person of a (younger) reference population, if my opponent needs only \(\tau_*\) milliseconds to do so ?*", 3) "*Is the probability of drawing my revolver slower than a randomly selected person of a (younger) reference population at most \(p=0.05\), if my opponent needs only \(\tau_*\) milliseconds ?*".

In our study the data of a meta-analysis are compiled into weakly informative evidence-based triangle prior distributions. The likelihood of the single subject's simple reaction time (SRT) data dependent on the prior hypothesis is formalized by a Gaussian distribution. Then the risk-excess of the single subject's SRT-behavior is calculated for various thresholds by comparing prior against posterior probability distributions (pdfs). It could be demonstrated that for all opponent's challenges \(\tau_c\) longer than a critical threshold \(\tau_{\Sigma_{crit}} \approx 340 \; msec\) the SRT-behavior of a 72-year old BMW-car driver is no more risky than the SRT-behavior of a randomly chosen driver of the (younger) reference population.

The answers to the three questions of our cautious gunslinger agent can be used as building blocks of a Bayesian decision strategy. The strategy controls the transfer of the locus of longitudinal control from the driver to a PADAS and back again. This transfer depends on a \(\tau_{TTX} \)-random variable which can be defined e.g. as the situation-dependent time-to-the-last-possible-damage-avoiding-PADAS-intervention. Furthermore the transfer depends on a risk-probability threshold which is a function of the loss when the human-PADAS system is in operation. This can depend on, among other things, a feeling of discomfort with being monitored by a PADAS.

Though the results are dependent on the data of a single subject, the **process model of our Bayesian psychometric risk diagnosis** is not. We think that our Bayesian model combines in a near ideal way results of meta-analyses with evidence-based single-case diagnostics.


## References


ARENS, T., HETTLICH, F., KARPFINGER, Ch., KOCKELKORN, U., LICHTENEGGER, K., STACHEL, H.: Mathematik. Springer Spektrum. Heidelberg (2018)

BESSIÈRE, P., LAUGIER, Ch., SIEGWART, R. (eds): Probabilistic Reasoning and Decision Making in Sensory-Motor Systems. Springer. Heidelberg (2008). ISBN 978-3-540-79006-8

BISHOP, Ch. M.: Pattern Recognition and Machine Learning. Heidelberg. Springer (2006)

CARD, S.K., MORAN, T.P., NEWELL, A.: *The Model Human Processor: An Engineering Model of Human Performance.* p.1–35. In: BOFF, K. R., KAUFMAN, L., and THOMAS, J. P. (Eds.). Handbook of Perception and Human Performance. Vol. 2: Cognitive Processes and Performance, New York. Wiley (1986), ISBN-13 : 978-0471829577

CARD, S.K., MORAN, T.P., NEWELL, A.: *The Psychology of Human-Computer Interaction*, Lawrence Erlbaum Associates, Inc. Publishers, Hillsdale, N.J. (1983), ISBN 0-89859-243-7

CHEBYSHEV's Inequality, en.wikipedia.org/wiki/Chebyshev%27s_inequality (visited 2020/08/23)

CogniFit, Reaction Time, Cognitive Ability- Neuropsychology, www.cognifit.com/science/cognitive-skills/response-time (visited 2022/08/16)

COHEN, A.S.: Informationsaufnahme beim Kraftfahrer. p.217-250. In: Burg, H., Moser, A. (eds): Handbuch Verkehrsunfallrekonstruktion. 2/e. Vieweg+Teubner. Wiesbaden (2009). ISBN 978-3-8348-0546-1

COTTLIN, C., DÖHLER, S.: Risikoanalyse. 1/e. Vieweg+Teubner (2009). ISBN 978-3-8348-0594-2

Gamma distribution, en.wikipedia.org/wiki/Gamma_distribution (visited 2020/11/18)

GELMAN, A., CARLIN, J.B., STERN, H.S., DUNSON, D.B., VEHTAR, A., RUBIN, D.B., Bayesian Data Analysis, CRC Press, 2013 (3/e)

GOODMAN, N.D., TENENBAUM, J.B., and The ProbMods Contributors: *Probabilistic Models of Cognition* (2nd ed.). 2016. Retrieved 2020/10/27 from probmods.org/

GOODMAN, N.D., and STUHLMÜLLER, A.: The Design and Implementation of Probabilistic Programming Languages. Retrieved 2022/08/06 from dippl.org

GRATZER, W., BECKE, M.: Kinematik. p. 89-169. In: Burg, H., Moser, A. (eds): Handbuch Verkehrsunfallrekonstruktion. 2/e. Vieweg+Teubner. Wiesbaden (2009). ISBN 978-3-8348-0546-1

HASTIE, T., TIBSHIRANI, R. & FRIEDMAN, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, Heidelberg (2001)

Human Benchmark, humanbenchmark.com/tests/reactiontime (last visit 2020/10/21)

JASTRZEMBSKI, T. S., CHARNESS, N.: *The Model Human Processor and the older adult: Parameter estimation and validation within a mobile phone task*. Journal of Experimental Psychology: Applied, 2007 Dec; 13(4): 224–248. doi: 10.1037/1076-898X.13.4.224

KOCKELKORN, U.: Statistik für Anwender, Springer Spektrum. Heidelberg (2012)

LEE, M.D., WAGENMAKERS, E.J.: Bayesian Cognitive Modeling. Cambridge University Press. Cambridge, UK (2013). ISBN 978-1-107-60357-8

LEFÈVRE, St., VASQUEZ, D., LAUGIER, Ch.: A Survey on Motion Prediction and Risk Assessment for Intelligent Vehicles. ROBOMECH Journal, **1**(1), 1--14 (2014). last access 2021/01/04

LEVY, R., MISLEVY, R.L.: Bayesian Psychometric Modeling. CRC Press. Boca Raton, Fl. (2016).

LINSMEIER, Th.J. & PEARSON, N.D.: Value at Risk. Financial Analysts Journal. 47-67, 2000, www.sfu.ca/~poitras/818_r1.pdf, Last access 2022/08/06

LUCE, R.D.: Response Times. Oxford University Press. Oxford, UK (1986)

LUNN, D., JACKSON, Ch., BEST, N., THOMAS, A., SPIEGELHALTER, D.: The BUGS Book - A Practical Introduction to Bayesian Analysis -. CRC Press. Boca Raton, Fl. (2013)

MACKAY, D.J.C.: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, Cambridge, UK (2003)

MOEBUS, C.: Personalized Risk Calculations with a Generative Bayesian Model: Am I fast enough to react in time ?, (2022)

MOEBUS, C. & EILERS, M.: Integrating Anticipatory Competence into a Bayesian Driver Model, In: Cacciabue P.C., Hjälmdahl, M., Luedtke, A., and Riccioli, C. (eds.) Human Modelling in Assisted Transportation: Models, Tools and Risk Methods, pp.225--232. Springer, Heidelberg (2011). \doi{10.1007/978-88-470-1821-1_24}

MOEBUS, C. & EILERS, M.: Prototyping Smart Assistance with Bayesian Autonomous Driver Models, In: Mastrogiovanni, F. \& Chong, N.Y. (eds.) Handbook of Research on Ambient Intelligence and Smart Environments: Trends and Perspectives, pp 460--512, IGI Global Publications (2011).\doi{10.4018/978-1-61692-857-5}

MURPHY, K.P.: Machine Learning - A Probabilistic Perspective -. The MIT Press, Cambridge, Mass.(2012)

PEARL, J.: Causality - Models, Reasoning, and Inference - . 2/e. Cambridge University Press. Cambridge, UK (2009)

PEARL, J., GLYMOUR, M., JEWELL, N.P.: Causal Inference in Statistics - A Primer -, Wiley, Hoboken, N.J. (2016)

PEARL, J., MacKENZIE, D.: The Book of Why -The New Science of Cause and Effect -. Basic Books, New York (2018)

PETTERS, A.O., DONG, X.: An Introduction to Mathematical Finance with Applications, Springer. Heidelberg (2016). ISBN 978-1-4939-3781-3

PISHRO-NIK, H.: Introduction to Probability, Statistics, and Random Processes, Kappa Research LLC, Sunderland, MA (2014).

ROBERT, Ch.P., CASELLA, G.: Monte Carlo Statistical Methods. Springer. Heidelberg (2004)

ROBERT, Ch.P.: The Bayesian Choice - From Decision-Theoretic Foundations to Computational Implementation - . Springer. Heidelberg (2007)

STATON, S.: Probabilistic Programs as Measures. p.43--74. In: BARTHE, G., KATOEN, J.P., SILVA, A. (Eds.). Foundations of Probabilistic Programming, Cambridge, UK, Cambridge University Press (2021), \doi{10.1017/9781108770750}

Triangular Distribution, https://en.wikipedia.org/wiki/Triangular_distribution (visited 2020/09/25)

UbiCar, The Effect of Dangerous Driving Behaviours on Reaction Time, ubicar.com.au/blog/the-effect-of-dangerous-driving-behaviours-on-reaction/ (visited 2022/08/16)

VAN ZANDT, T.: How to fit a response-time distribution. Psychonomic Bulletin & Review, 2000, 7. Jg., Nr. 3, S. 424-465. last access 2021/01/04

WESTMEYER, H.: The diagnostic process as a statistical-causal analysis. Theory and Decision. **6**(1), 57-86 (1975)

WICKENS, T.D.: Models for Behavior: Stochastic Processes in Psychology,. W.H.Freeman and Co. San Francisco (1982). ISBN 0-7167-1353-5