# Latent Components of Single Subject's Simple Reaction Times

## Pt 0 - Abstract: The Bayesian (De-)Composition of Single Subject's Simple Response Times (SRTs) into Latent Components

### A Generative Bayesian Model with Priors Extracted from Card, Moran, and Newell's (CMN's) Meta-analytical Model Human Processor (MHP)

Here we pursue several goals. First, we want to decompose simple reaction times (SRTs) of a single arbitrary computer user into three latent time components related to perception, cognition, and motor processes. This decomposition is only possible by reusing results of a meta-analysis summarized in a static calculation guide, the Model Human Processor (MHP). The reuse of the MHP is demonstrated by a walk-through of example 9, which predicts the expected simple reaction time (SRT) of arbitrary computer users. Second, we reuse the time-interval estimates of the MHP as Bayesian priors and the SRTs of a single computer user (the author) as evidence in the likelihood function of a generative Bayesian model. The latent time components of the single user are obtained as Bayesian posteriors by a numerical simulation. All posteriors are obtained by runs of the Markov-Chain-Monte-Carlo (MCMC) algorithm provided by the probabilistic programming language (PPL) WebPPL. Third, we want to demonstrate the expressiveness and usefulness of WebPPL, a functional domain-specific language (DSL) embedded in JavaScript (JS).

## Pt 1 - Introduction: The Bayesian (De-)Composition of Single Subject's Simple Response Times (SRTs) into Latent Components ...

### ... According to Card, Moran, and Newell's (CMN's) Meta-analytical Model Human Processor (MHP)

In their seminal book The Psychology of Human-Computer Interaction (PHCI), Card, Moran, and Newell (CMN) present the solution of several design problems in the field of Human-Computer Interaction (HCI) based on a few basic and abstract human information-processing mechanisms. These are summarized in the Model Human Processor (MHP). The MHP is a simplified engineering model of the human perceptual-cognitive-motor system. "It can be divided into three interacting subsystems: (1) the perceptual system, (2) the motor system, and (3) the cognitive system, each with its own memories and processors." (CMN, 1983, p.24) According to CMN this view belongs to applied information-processing psychology supporting engineering activities like task analysis, calculation, and approximation (CMN, 1983, p.9f; 1986).

To predict human response times, CMN present the MHP as a simple static calculation guideline. The MHP is, in a sense, a meta-analysis. It condenses the scientific knowledge about human perceptual, cognitive, and motor processes up to the publication date 1983. This knowledge is quantified by time intervals with interval boundaries and typical values. "We can define three versions of the model: one in which all the parameters listed are set to give the worst performance (Slowman), one in which they are set to give the best performance (Fastman), and one set for a nominal performance (Middleman)." (CMN, 1983, p.44)

Data from more recent meta- or meta-meta-analyses (such as JASTRZEMBSKI & CHARNESS, 2007) can easily be integrated into our Bayesian SRT model by replacing the CMN intervals implemented in the domain-specific programming language (DSL) WebPPL with more recent intervals.

### Prognosing Simple Reaction Times under MHP Guidance

#### Processor Cycle Times

The uncertainties in the parameters of the MHP are captured by three versions of the MHP (CMN, 1983, p.44):

• Middleman is the version ... in which all the parameters ... are set to give the nominal performance.
• Fastman is the version ... in which all the parameters ... are set to give the best performance.
• Slowman is the version ... in which all the parameters ... are set to give the worst performance.

Cycle times of Perceptual (P), Cognitive (C), and Motor Processor (M) are reported as the (slightly modified) template $$\tau_X := \tau_{X_{Middleman}} [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \;\;; \;\; X = P, C, M \qquad(1)$$

and accordingly for the total reaction time (TRT) $$\tau := \tau_{TRT} = \tau_{TRT_{Middleman}} [\tau_{TRT_{Fastman}} \sim \tau_{TRT_{Slowman}}] \qquad(2)$$

The meaning of (1) is that $$\tau_X$$ is ranging from $$\tau_{X_{Fastman}}$$ to $$\tau_{X_{Slowman}}$$ with a 'typical' value $$\tau_{X_{Middleman}}$$ and the meaning of (2) is that $$\tau_{TRT}$$ is ranging from $$\tau_{TRT_{Fastman}}$$ to $$\tau_{TRT_{Slowman}}$$ with 'typical' value $$\tau_{TRT_{Middleman}}$$. The semantics of a 'typical' value is left unspecified by CMN (CMN, 1983, p.28). We try various interpretations like mean, mode, or median for $$\tau_{X_{Middleman}}$$ in Bayesian priors.

$$\tau_{TRT_{Middleman}} = \sum_{X \in \{P,C,M\}} \tau_{X_{Middleman}}$$

$$\tau_{TRT_{Fastman}} = \sum_{X \in \{P,C,M\}} \tau_{X_{Fastman}}$$

$$\tau_{TRT_{Slowman}} = \sum_{X \in \{P,C,M\}} \tau_{X_{Slowman}}$$

For modeling purposes this is rather imprecise. So we want to study whether we can do better in constructing informative Bayesian priors.

#### Meta-analysis-based tauX-intervals

The meta-analysis-based processor-time tau-intervals are:

• $$\tau_P := 100 [50 \sim 200]$$ msec ; cycle time of perceptual processor (CMN, 1983, p.32f)
• $$\tau_C := 70 [25 \sim 170]$$ msec ; cycle time of cognitive processor (CMN, 1983, p.42f)
• $$\tau_M := 70 [30 \sim 100]$$ msec ; cycle time of motor processor (CMN, 1983, p.34f)
• $$\tau_T := \tau_{TRT} = \sum_{X \in \{P,C,M\}} \tau_X = \tau_P + \tau_C + \tau_M = 240 [105 \sim 470]$$ msec.

#### Example 9: MHP-Prognosis of Simple Reaction Times

One of the standard problems in PHCI used to motivate the MHP is example 9: "A user sits before a computer display terminal. Whenever any symbol appears, he is to press the space bar. What is the time between signal and response?" (Card, Moran & Newell, 1983, p.66). CMN use their MHP as a guideline to calculate the total user response time (TRT).

Solution. "Let us follow the course of processing through the Model Human Processor ... The user is in some state of attention to the display ... When some physical depiction of the letter A (we denote it alpha) appears, it is processed by the Perceptual Processor, giving rise to a physically-coded representation of the symbol (we write it alpha') in the Visual Image Store and very shortly thereafter to a visually coded symbol (we write it alpha'') in Working Memory ... This process requires one Perceptual Processor cycle $$\tau_P$$. The occurrence of the stimulus is connected with a response ..., requiring one Cognitive Processor cycle, $$\tau_C$$. The motor system then carries out the actual physical movement to push the key ..., requiring one Motor Processor cycle, $$\tau_M$$. Total time required is $$\tau_P + \tau_C + \tau_M$$. Using Middleman values, the total time required is 100 + 70 + 70 = 240 msec. Using Fastman and Slowman values gives a range 105 ~ 470 msec." These are the cycle times in msec of the hypothetical perceptual, cognitive, and motor processors, respectively. $$\blacksquare$$ (CMN, Fig 2.1, p.26, p.66, p.433f)
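The bookkeeping of example 9 can be reproduced in a few lines of plain JavaScript (a sketch, not CMN's text and not our WebPPL script; the object layout and names are ours):

```javascript
// CMN cycle-time intervals in msec: Fastman, Middleman, Slowman values
const tau = {
  P: { fast: 50, mid: 100, slow: 200 },  // perceptual processor
  C: { fast: 25, mid:  70, slow: 170 },  // cognitive processor
  M: { fast: 30, mid:  70, slow: 100 }   // motor processor
};

// Total reaction time = one cycle of each processor, per version
const sum = key =>
  Object.values(tau).reduce((acc, t) => acc + t[key], 0);

const trt = { fast: sum('fast'), mid: sum('mid'), slow: sum('slow') };
console.log(trt);  // { fast: 105, mid: 240, slow: 470 }
```

The Middleman sum reproduces CMN's 240 msec prognosis, the Fastman and Slowman sums the 105 ~ 470 msec range.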

#### References

CARD, St.K., MORAN, Th.P., and NEWELL, A., The Psychology of Human-Computer Interaction, Lawrence Erlbaum Associates, Inc. Publishers, Hillsdale, N.J., 1983, ISBN 0-89859-243-7

CARD, S.K; MORAN, T. P; and NEWELL, A. The Model Human Processor: An Engineering Model of Human Performance. In K. R. BOFF, L. KAUFMAN, & J. P. THOMAS (Eds.), Handbook of Perception and Human Performance. Vol. 2: Cognitive Processes and Performance, 1986, pages 1–35.

JASTRZEMBSKI, T. S. & CHARNESS, N., The Model Human Processor and the older adult: Parameter estimation and validation within a mobile phone task. Journal of Experimental Psychology: Applied, 2007 Dec; 13(4): 224–248.

## Pt 2 - Priors: The Bayesian (De-)Composition of Single Subject's Simple Response Times (SRTs) into Latent Components

Cycle times of Perceptual (P), Cognitive (C), and Motor Processor (M) are reported as the (slightly modified) template $$\tau_X := \tau_{X_{Middleman}} [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \;\;; \;\; X = P, C, M. \qquad(1)$$

The meaning of (1) is that $$\tau_X$$ ranges from $$\tau_{X_{Fastman}}$$ to $$\tau_{X_{Slowman}}$$ with a 'typical' value $$\tau_{X_{Middleman}}$$. The semantics of a 'typical' value is left unspecified by CMN (CMN, 1983, p.28). We try various interpretations for $$\tau_{X_{Middleman}}$$ in Bayesian priors: the mode (Pt 2.1, Pt 4.1) or the median (Pt 2.2, Pt 4.2) of a triangular distribution, and the mean (Pt 2.3, Pt 4.3) of an adapted gamma distribution. We do not interpret the 'typical' $$\tau_{X_{Middleman}}$$-values as means of triangular distributions: the mean of $$Triangle(a, b, c)$$ is $$(a+b+c)/3$$, so fixing it to the CMN 'typical' value would force the mode to $$c = 3\tau_{X_{Middleman}} - a - b$$, which for $$\tau_C$$ ($$c = 3 \cdot 70 - 25 - 170 = 15$$) falls below the lower bound $$a = 25$$. The CMN 'typical' values lie too far from the middle of the CMN-intervals for a mean interpretation.

This is different for gamma distributions, which can be skewed and thus asymmetric. There is another argument in favour of a gamma prior: it could be the case that each of the three latent processes is itself the result of several latent components. If these components have exponentially distributed latencies, their sum is gamma distributed: the convolution of independent exponentials with a common rate is a gamma (Erlang) distribution.

## Pt 2.1 - Triangular Priors for Interval Modes Extracted From CMN's MHP

Because the CMN-intervals provide only scarce information in the form of the 'typical' value and the lower and upper bounds, we choose the 'lack of knowledge' triangular distribution. The triangular distribution is typically used as a subjective description of a population for which there is only limited sample data, especially in cases where the relationship between variables is known but data are scarce (possibly because of the high cost of collection). It is based on knowledge of the minimum and the maximum and an "inspired guess" as to the modal value.

The PDF is

$$f_{Triangle}(x | a, b, c) = \left\{ \begin{array}{cl} 0 & \text{, for } x \lt a, \\ \frac{2(x-a)}{(b-a)(c-a)} & \text{, for } a \leq x \lt c, \\ \frac{2}{(b-a)} & \text{, for } x = c, \\ \frac{2(b-x)}{(b-a)(b-c)} & \text{, for } c \lt x \leq b, \\ 0 & \text{, for } b \lt x. \end{array} \right.$$

The generation of triangular-distributed random variates is done by the following mapping. Given a random variate $$U$$ drawn from the uniform distribution in the interval $$(0, 1)$$, then the variate

$$X_{Triangle} | a, b, c = \left\{ \begin{array}{cl} a + \sqrt{U(b-a)(c-a)} & \text{, for } 0 \lt U \lt F(c), \\ b - \sqrt{(1-U)(b-a)(b-c)} & \text{, for } F(c) \leq U \lt 1 \end{array} \right.$$

where $$F(c | a, b) = (c - a) / (b - a)$$.
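The inverse-CDF mapping above translates directly into plain JavaScript (a sketch outside WebPPL; the function name is ours):

```javascript
// Draw one Triangle(a, b, c) variate by the inverse-CDF transform,
// where a = lower bound, b = upper bound, c = mode.
function sampleTriangle(a, b, c) {
  const Fc = (c - a) / (b - a);  // CDF value F(c | a, b) at the mode
  const u = Math.random();       // U ~ Uniform(0, 1)
  return u < Fc
    ? a + Math.sqrt(u * (b - a) * (c - a))
    : b - Math.sqrt((1 - u) * (b - a) * (b - c));
}

// Example: a tauP-like prior with CMN bounds 50..200 and mode 100
const draws = Array.from({ length: 50000 }, () => sampleTriangle(50, 200, 100));
const mean = draws.reduce((s, x) => s + x, 0) / draws.length;
// The mean of Triangle(a, b, c) is (a + b + c) / 3, here about 116.7
```

Every draw stays inside $$[a, b]$$ by construction, which is exactly the property we want from a prior restricted to a CMN-interval.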

## Pt 2.2 - Triangular Priors for Interval Medians Extracted From CMN's MHP

The median $$md_{Triangle}$$ of a triangular distribution is defined as:

$$md = \left\{ \begin{array}{c} a + \sqrt{\frac{(b-a)(c-a)}{2}} \text{, for } c \geq \frac{a+b}{2} \\ b - \sqrt{\frac{(b-a)(b-c)}{2}} \text{, for } c \lt \frac{a+b}{2} \end{array} \right.$$

If we interpret the 'typical' value $$\tau_{X_{Middleman}}$$ as the median $$md_{Triangle}$$ of a triangular pdf $$Triangle(a, b, c)$$ with bounds $$a = \tau_{X_{Fastman}}$$ and $$b = \tau_{X_{Slowman}}$$, then we have to obtain the unknown third parameter $$c = mode_{Triangle}$$ of $$Triangle(a, b, c)$$ from the given information $$md_{Triangle}, \tau_{X_{Fastman}}, \text{ and } \tau_{X_{Slowman}}$$. In short, we derive the function $$h_{mode_{Triangle}}$$ so that its application to the instantiated arguments $$md_{Triangle}, \tau_{X_{Fastman}}, \text{ and } \tau_{X_{Slowman}}$$ generates $$mode_{Triangle} = c$$:

$$mode_{Triangle} = h_{mode_{Triangle}}( \tau_{X_{Fastman}}, \tau_{X_{Slowman}},\tau_{X_{Middleman}})= h_{mode_{Triangle}}(a, b, md_{Triangle}) = c.$$

First, we derive $$c$$ when $$c \geq \frac{a+b}{2}\;:$$

$$(md - a) = \sqrt{\frac{(b-a)(c-a)}{2}}$$

$$(md - a)^2 = \frac{(b-a)(c-a)}{2}$$

$$2(md - a)^2 = (b-a)(c-a)$$

$$\frac{2(md - a)^2}{(b-a)} = (c-a)$$

$$\boxed{h_{mode_{Triangle}}(a, b, md_{Triangle}) := \frac{2(md - a)^2}{(b-a)} + a = c \;}$$

Second, we derive $$c$$ when $$c \lt \frac{a+b}{2}\;:$$

$$(md - b) = - \sqrt{\frac{(b-a)(b-c)}{2}}$$

$$- (md - b) = \sqrt{\frac{(b-a)(b-c)}{2}}$$

$$(md - b)^2 =\frac{(b-a)(b-c)}{2}$$

$$2(md - b)^2 = (b-a)(b-c)$$

$$\frac{2(md - b)^2}{(b-a)} = (b-c)$$

$$\frac{2(md - b)^2}{(b-a)} - b = - c$$

$$\boxed{ h_{mode_{Triangle}}(a, b, md_{Triangle}) := - \frac{2(md - b)^2}{(b-a)} + b = c \;}$$
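Both branches of $$h_{mode_{Triangle}}$$ can be checked numerically by a round trip through the median formula above. A plain-JavaScript sketch (function names are ours):

```javascript
// Median of Triangle(a, b, c), as defined in Pt 2.2
function medianTriangle(a, b, c) {
  return c >= (a + b) / 2
    ? a + Math.sqrt((b - a) * (c - a) / 2)
    : b - Math.sqrt((b - a) * (b - c) / 2);
}

// Inverse h: recover the mode c from bounds a, b and the median md.
// The branch test may use md itself, since c and md always lie on
// the same side of the interval midpoint (a + b) / 2.
function modeFromMedian(a, b, md) {
  return md >= (a + b) / 2
    ? a + 2 * (md - a) ** 2 / (b - a)      // first boxed formula
    : b - 2 * (md - b) ** 2 / (b - a);     // second boxed formula
}

// Round trip for tauP: a = 50, b = 200, 'typical' value 100 as median
const c = modeFromMedian(50, 200, 100);   // mode, about 66.67
const md = medianTriangle(50, 200, c);    // recovers the median 100
```

So interpreting CMN's 'typical' $$\tau_{P_{Middleman}} = 100$$ as a median yields a triangular prior with mode near 66.7 msec.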

#### References

https://en.wikipedia.org/wiki/Triangular_distribution (visited 2020/09/25)

## Pt 2.3 - Gamma Priors for Interval Means Extracted From CMN's MHP

#### Convolution of Waiting Time and Processor Cycle Time Distributions

We treat all processor cycle times $$\tau_X$$ as gamma-distributed waiting times, though other distributions are popular among cognitive psychologists (Luce, 1986; Wickens, 1982; Van Zandt, 2000). This has various reasons. If we assume that the processor cycle times $$\tau_X$$ are the result of many independent subprocesses with exponential waiting times, then their sum is gamma distributed: the convolution of independent exponential distributions with a common rate is a gamma distribution.

The total response time $$\tau := \sum_{X \in \{P,C,M\}} \tau_X$$ is the sum of the three processor times $$\tau_X$$. Is it possible to obtain the probability distribution of $$\tau$$ by convolving the component gammas $$\tau_X$$? The question is whether the convolution of independent gamma distributions is itself a gamma distribution. The convolution of gammas is known to be a gamma only in the special case where the scale parameters of the gammas to be convolved are identical. So in our general case, where all parameters differ, we have to compute the result of the convolution numerically with our probabilistic DSL WebPPL. We derive the distribution of $$\tau$$ by sampling from a simple generative probabilistic model extracted from the MHP.
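Such a sampling-based 'numerical convolution' can be sketched in plain JavaScript (not our WebPPL script; the Marsaglia-Tsang gamma sampler and all names are our choices, and the shape/scale values anticipate the $$a_X, b_X$$ parameters derived below):

```javascript
// Standard-normal variate via Box-Muller
function randn() {
  const u = 1 - Math.random(), v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Gamma(shape a, scale b) variate, Marsaglia-Tsang rejection sampler
// (valid for a >= 1, which holds for all tauX parameters used here)
function randGamma(a, b) {
  const d = a - 1 / 3, c = 1 / (3 * Math.sqrt(d));
  for (;;) {
    const x = randn(), v = (1 + c * x) ** 3;
    if (v <= 0) continue;
    const u = Math.random();
    if (Math.log(u) < 0.5 * x * x + d - d * v + d * Math.log(v)) return d * v * b;
  }
}

// One forward sample of the generative model: tau = tauP + tauC + tauM
function takeOneSample() {
  return randGamma(16.0, 6.25)    // tauP
       + randGamma(8.39, 8.34)    // tauC
       + randGamma(36.0, 1.94);   // tauM
}

const n = 50000;
let s = 0;
for (let i = 0; i < n; i++) s += takeOneSample();
const meanTau = s / n;  // close to 240 msec, CMN's Middleman TRT
```

Summing one draw per processor and repeating nTrials times approximates the convolved distribution of $$\tau$$ without any closed-form convolution.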

#### Extracting Gamma Distribution Parameters from MHP's Empirical Constraints

There are three parametrizations of the gamma distribution. We choose that one with shape parameter $$k$$ and scale parameter $$\theta$$ (with $$k>0, \theta > 0)$$. Mean, variance, and standard deviations of a gamma-distributed random variable $$X$$ are functions of these parameters:

$$E(X|k,\theta) = k\theta = \mu_X,$$

$$Var(X|k,\theta) = k\theta^2 =\sigma^2_X,$$

and $$\sqrt {Var(X)} = \sqrt {k}\theta\ = \sigma_X .$$

In the WebPPL scripts we use the substitutions $$k/a$$ ("a for k"),  $$\theta/b$$ ("b for θ"), $$\mu/m$$ ("m for $$\mu$$"), and $$\sigma/s$$ ("s for $$\sigma$$").

To specify the gamma parameters we map the concepts of CMN's empirical intervals

$$\tau_X := \mu_X [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \;\;; \;\; X = P, C, M \qquad(1)$$

to the moments of the gamma distribution (the identifiers a, b, m, s, r are symbols in WebPPL scripts):

$$mode_{X|a,b} = (a-1)b \qquad(2)$$

$$\mu_{X|a,b} = ab =: m \qquad(3)$$

$$\sigma_{X|a,b} = \sqrt{a}b =: s \qquad(4)$$

$$\tau_{X_{Fastman}} \approx (ab - r\sqrt{a}b) =: (m-rs)$$

and

$$\tau_{X_{Slowman}} \approx (ab + r\sqrt{a}b) =: m+rs,$$

where $$r$$ is the number of standard deviations ('nSigma' in WebPPL) measuring the distance between interval boundaries $$\tau_{X_{Fastman}}$$ or  $$\tau_{X_{Slowman}}$$ and mean $$\mu_X$$.

If we assume that the distance between interval mean $$\mu_X$$ and interval borders $$\tau_{X_{Fastman}}$$ or $$\tau_{X_{Slowman}}$$ can be approximated by (7) so that $$\tau_{X_{Fastman}} = (\mu_X - r\sigma_X)$$ and $$\tau_{X_{Slowman}} = (\mu_X + r\sigma_X)$$ we can use Chebyshev's inequality

$$P(|X-\mu| \ge r\sigma) \le \frac{1}{r^2}\qquad(5)$$

to make an approximate determination of $$r$$:

$$\text{Probability of Staying Within and Risk of Leaving } r\sigma\text{-Intervals Derived from Chebyshev's Inequality}$$

$$\begin{array}{c|c|c}
r & P(|X-\mu| \le r\sigma) \ge 1 - \frac{1}{r^2} & P(|X-\mu| \ge r\sigma) \le \frac{1}{r^2} \\
\text{ } & P(\text{Within CMN-Interval}) & P(\text{Outside CMN-Interval}) \\ \hline
\text{...} & \text{.........} & \text{.........} \\
3 & P(|X-\mu| \le r\sigma) \ge 0.8888 & P(|X-\mu| \ge r\sigma) \le \frac{1}{9} = 0.1111 \\
4 & P(|X-\mu| \le r\sigma) \ge 0.9375 & P(|X-\mu| \ge r\sigma) \le \frac{1}{16} = 0.0625 \\
\text{...} & \text{.........} & \text{.........} \\ \hline
\end{array} \qquad(6)$$

So the width of an $$r\sigma$$-interval ($$r = 1, 2, 3, ...$$) is

$$[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}] \approx 2r\sigma_X = 2r\sqrt{a}b =: 2rs.$$

We choose $$r = 3$$ because leaving the $$3\sigma$$-interval, which happens with probability $$p \le 0.1111$$, is a sufficiently rare event. The semantics of the CMN-intervals can now be approximated by the moments of a gamma distribution:

$$\tau_X := \mu_X [\tau_{X_{Fastman}} \sim \tau_{X_{Slowman}}] \approx ab[ab-r\sqrt{a}b \sim ab+r\sqrt{a}b]. \qquad(7)$$

Of course this approximation includes an error, because gamma distributions are usually skewed, so that the distances between $$\mu_{\tau_X}$$ and $$\tau_{X_{Fastman}}$$ or $$\tau_{X_{Slowman}}$$ are not identical. Later, when we have generated the $$\tau_X$$-distributions, we can compare our generated $$\tau_X$$-intervals with those of CMN and compute the risks of exceeding the interval limits. As we shall see, the probability that a sample from our simulated gamma distributions exceeds the CMN-intervals is negligibly small.

Now we have to identify the parameters $$a$$ and $$b$$. Solving for $$a$$ and $$b$$ we get:

$$a := \frac{ab}{b} = \frac{m}{b}$$

$$b := \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]}{2r \sqrt{a}}$$

We substitute $$a = \frac{m}{b}$$ into the formula for $$b$$:

$$b = \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]}{2r \sqrt{\frac{m}{b}}}$$

Squaring both sides:

$$b^2 = \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}{2^2r^2 \left(\frac{m}{b}\right)} = \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}{2^2r^2}\frac{b}{m}$$

Cancelling $$b$$ on both sides:

$$b := \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}{2^2r^2 m} \qquad(8)$$

Now $$b$$ is fully identified by the empirical constraints $$\tau_{X_{Slowman}}$$, $$\mu_X =: m$$, and $$\tau_{X_{Fastman}}$$. This is not yet the case for $$a$$, so we substitute $$b$$ back into the equation for $$a$$.
This results in:

$$a := \frac{m}{b} = m \cdot \frac{2^2r^2m}{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2} = \frac{(2rm)^2}{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2} \qquad(9)$$

We check the results:

$$ab = \frac{(2rm)^2}{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2} \cdot \frac{[\tau_{X_{Slowman}} - \tau_{X_{Fastman}}]^2}{(2r)^2 m} = \frac{2^2r^2m^2}{2^2r^2m} = m$$

The simulation starts with two constants $$nTrials$$ and $$nSigma$$. $$nTrials$$ is relevant for the $$Infer$$ function, which generates $$nTrials (= 50000)$$ samples $$\hat \tau_X \sim \Gamma_X(a_X,b_X)$$ with the $$forward$$ method. The second constant is $$nSigma$$, which determines the interval width $$2 \cdot nSigma \cdot \sigma = 6 \cdot \sigma$$. We have chosen $$nSigma = 3$$ because the probability that a $$\hat \tau_X$$ is sampled outside the interval should be a rare event. According to Chebyshev's inequality (see above), the probability of leaving the $$3\sigma$$-interval is at most $$0.1111$$ for any distribution. We shall demonstrate that our generated $$\tau_X$$-priors stay within the CMN-intervals with $$p > 0.98$$ and leave the CMN-intervals with $$p < 0.02$$.

In the first step of the simulation we generate the $$\tau_X$$-intervals as published by CMN. We added a column $$\tau_{X_{Middleman}}$$. It designates the middle of the interval, as can be guessed from the expression 'Middleman':

$$\text{tauX_Intervals}\begin{array}{c|cccc}
\tau_X & \mu_X & \tau_{X_{Fastman}} & \tau_{X_{Middleman}} & \tau_{X_{Slowman}} \\
\hline
\tau_P & 100 & 50 & 125 & 200 \\
\tau_C & 70 & 25 & 97.5 & 170 \\
\tau_M & 70 & 30 & 65 & 100 \\ \hline
\tau_T = \sum_{X \in \{P, C, M\}} \tau_X & 240 & 105 & 287.5 & 470 \\ \hline
\end{array}$$

The last line of the above table results from summing the $$\tau_X$$-interval values. Now the parameters $$a_X$$ and $$b_X$$ can be obtained by (9) and (8). These values are documented in the second and third columns of the following table. From the two parameters $$a_X, b_X$$ we can compute $$mode_X, \mu_X$$, and $$\sigma_X$$ according to (2), (3), and (4); they can be found in columns 4 - 6 of the following table.

$$\text{tauX_ParmObjects} \begin{array}{c|cc|cc|c}
\tau_X & a_X (9) & b_X (8) & mode_X (2) & \mu_X (3) & \sigma_X (4) \\ \hline
\tau_P & 16.00 & 6.25 & 93.75 & 100 & 25 \\
\tau_C & 8.39 & 8.34 & 61.65 & 70 & 24.1 \\
\tau_M & 36.00 & 1.94 & 68.05 & 70 & 11.66 \\
\tau_T & 15.56 & 15.42 & 224.58 & 240 & 60.6 \\ \hline
\end{array}$$

Having identified the basic gamma parameters $$a_X$$ (9) and $$b_X$$ (8), we are able to construct the $$\tau_X$$-intervals according to (7). These are found in the columns $$\mu_X$$, $$\hat \tau_{X_{Fastman}}$$, and $$\hat \tau_{X_{Slowman}}$$ of the table below. As expected, the $$\mu_X$$ are nearly identical to CMN's, though the lower and upper interval bounds deviate slightly. The best fit is for $$\tau_M$$. As rows (11) - (14) show, the $$\hat\tau_X$$ are sampled from the gamma distributions parametrized with $$a_X$$ (9) and $$b_X$$ (8), which were obtained directly from CMN's interval values. Row (15) is computed in a different way than row (14): whereas (14) is derived directly from CMN's summed interval by (8), (9), and (7), row (15) is the sum of the sampled values (11) - (13). So (15) is in some way a numerical equivalent of a convolution of the distributions (11) - (13). The range of this 'convolved' interval is a bit smaller than the values of (14) obtained directly from CMN's interval. Because this distribution exists only as an array of nTrials samples, it has no parameters $$a_X$$ and $$b_X$$. So we condense this array of numeric particles into a parametrized gamma pdf with estimated $$a$$ and $$b$$. These parameter estimates are presented in columns 2 - 3 of the last line (16) of the next table. With their help we can derive the interval values (columns 4 - 6 of that line). The interval values of (15) and (16) are nearly identical.

$$\hat \tau_X \sim \Gamma_X(a_X,b_X) \;;\; X \in \{P, C, M, T\} \qquad(10)$$

$$\text{Model-generated } \tau_X \text{-Intervals (7) and Sampled Distributions } \hat \tau_X \sim \Gamma_X(a_X,b_X)$$

$$\begin{array}{c|cc|ccc|cr|c}
\hat \tau_X & a_X (9) & b_X (8) & \mu_X & \hat \tau_{X_{Fastman}} & \hat \tau_{X_{Slowman}} & \hat \tau_X \sim \Gamma_X(a_X,b_X) & \text{#} & \text{Fig} \\ \hline
\hat \tau_P & 16.0 & 6.3 & 99.97 & 24.7 & 175.2 & \hat \tau_P \sim \Gamma_P(a_P,b_P) & (11) & \text{Fig.01} \\
\hat \tau_C & 8.4 & 8.3 & 69.84 & -2.5 & 142.2 & \hat \tau_C \sim \Gamma_C(a_C,b_C) & (12) & \text{Fig.02} \\
\hat \tau_M & 36.0 & 1.9 & 70.05 & 35.2 & 104.9 & \hat \tau_M \sim \Gamma_M(a_M,b_M) & (13) & \text{Fig.03} \\
\hat \tau_T & 15.6 & 15.4 & 240.02 & 56.9 & 423.1 & \hat \tau_T \sim \Gamma_T(a_T,b_T) & (14) & \text{Fig.04} \\ \hline
\hat \tau_\sum & - & - & 239.86 & 129.7 & 350.5 & \hat \tau_\sum = \sum_{X \in \{P, C, M\}} \hat \tau_X & (15) & \text{Fig.05} \\
\hat \tau_\sum & 42.4 & 5.7 & 240.12 & 129.8 & 350.5 & \hat \tau_\sum \sim \Gamma_\sum(42.4, 5.7) & (16) & \text{Fig.06} \\ \hline
\end{array}$$

Now we study the risk of generating $$\hat\tau_X$$ random values lying outside the CMN-interval boundaries. These risk probabilities are compiled in the following table:

$$\text{Probability of a Gamma-generated Sample Falling Outside the CMN-Interval Boundaries}$$

$$\begin{array}{c|cc|cc|cc}
\hat \tau_X & a_X & b_X & \tau_{X_{Fastman}} & \tau_{X_{Slowman}} & P(\hat\tau_X < \tau_{X_{Fastman}}) & P(\hat\tau_X > \tau_{X_{Slowman}})\\
\hline
\hat \tau_P & 16.0 & 6.3 & 50 & 200 & 0.00878 & 0.00054 \\
\hat \tau_C & 8.4 & 8.3 & 25 & 170 & 0.00828 & 0.00074 \\
\hat \tau_M & 36.0 & 1.9 & 30 & 100 & 0.00000 & 0.00926 \\ \hline
\hat \tau_T & 15.6 & 15.4 & 105 & 470 & 0.00294 & 0.00138 \\
\hat \tau_\sum & 42.4 & 5.7 & 105 & 470 & 0.00000 & 0.00000 \\ \hline
\end{array}$$

One can see that the risk of a Gamma-generated $$\hat\tau_X$$-sample falling outside a CMN-interval is as small as $$p < 0.01$$. This means that our gamma distributions do not generate values untypical for the CMN-intervals. In this respect they are good priors derived from the CMN-intervals. Maybe our Gamma-generated $$3\sigma$$-intervals are even too narrow; we will study this question when visualizing the gamma distributions.
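The parameter extraction (8), (9) and the moment formulas (2) - (4) can be condensed into a few lines of plain JavaScript (a sketch analogous to, but not identical with, the WebPPL function make_tauXParmObject; the property names are ours):

```javascript
// From a CMN interval (fast, mean m, slow) and r = nSigma,
// recover gamma shape a by (9) and scale b by (8),
// then the moments by (2) - (4).
function makeParmObject(fast, m, slow, r = 3) {
  const width = slow - fast;                // approx. 2 * r * sigma
  const a = (2 * r * m) ** 2 / width ** 2;  // shape, eq. (9)
  const b = width ** 2 / (4 * r ** 2 * m);  // scale, eq. (8)
  return {
    a, b,
    mode:  (a - 1) * b,       // eq. (2)
    mean:  a * b,             // eq. (3), recovers m
    sigma: Math.sqrt(a) * b   // eq. (4)
  };
}

const tauP = makeParmObject(50, 100, 200);  // a: 16, b: 6.25, mode: 93.75
const tauM = makeParmObject(30,  70, 100);  // a: 36, b: 1.94..., mode: 68.05...
```

Applied to the four CMN intervals, this reproduces the tauX_ParmObjects table above.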

### References

• https://en.wikipedia.org/wiki/Chebyshev%27s_inequality (visited 2020/08/23)
• LUCE, R.D., Response Times: Their Role in Inferring Elementary Mental Organization, Oxford University Press, 1986
• PISHRO-NIK, H., Introduction to Probability, Statistics, and Random Processes, available at https://www.probabilitycourse.com, Kappa Research LLC, 2014
• VAN ZANDT, T., How to fit a response-time distribution. Psychonomic Bulletin & Review, 2000, 7(3), 424-465
• WICKENS, Th.D., Models for Behavior: Stochastic Processes in Psychology, San Francisco: W.H. Freeman and Co, 1982, ISBN 0-7167-1353-5

## Pt 3 - Function Signatures: The Decomposition of Single Subject's Simple Response Times (SRTs) into Latent Components

• Infer: Models $$\times$$ Methods $$\times$$ nSamples $$\to$$ tauMultivDistribution
• takeOneSampleOfGammaModel $$\in$$ Models
• 'forward'  $$\in$$ Methods
• nSamples $$\in$$ Integers
• tauMultivDistribution := (tauP-, tauC-, tauM-, tauT-, and tauDistribution)
• make_tauX_intvals: () $$\to$$ arrayOfIntervalObjects
• make_tauXParmObject: tauX_Intval $$\to$$ tauXParmObject
• myTauXDistribution: identifierString $$\times$$ tauXDistribution $$\times$$ modeTauX $$\to$$ meanSigmaTauObject
• printArrayOfObjects: identifierString $$\times$$ arrayOfObjects $$\to$$ printedString
• runTime:  () $$\to$$ Seconds
• takeOneSampleOfGammaModel: () $$\to$$ tauObject

## Pt 5 - WebPPL-Code: The Decomposition of Single Subject's Simple Response Times (SRTs) into Latent Components

### A Generative Bayesian Model with Priors Extracted from Card, Moran, and Newell's (CMN's) Meta-analytical Model Human Processor (MHP)

The listing of the WebPPL run can be obtained here.

#### Reference

GOODMAN, N.D. and STUHLMÜLLER, A. (electronic). The Design and Implementation of Probabilistic Programming Languages. Retrieved from dippl.org
