```{r, echo = F}
knitr::opts_chunk$set(fig.retina = 2.5)
knitr::opts_chunk$set(fig.align = "center")
```
# Model Comparison and Hierarchical Modeling
> There are situations in which different models compete to describe the same set of data...
>
> ...Bayesian inference is reallocation of credibility over possibilities. In model comparison, the focal possibilities are the models, and Bayesian model comparison reallocates credibility across the models, given the data. In this chapter, we explore examples and methods of Bayesian inference about the relative credibilities of models. [@kruschkeDoingBayesianData2015, pp. 265--266]
In the text, the emphasis is on the Bayes Factor paradigm. While we will discuss that, we will also present the alternatives available with information criteria, model averaging, and model stacking.
## General formula and the Bayes factor
So far we have spoken of
* the data, denoted by $D$ or $y$;
* the model parameters, generically denoted $\theta$;
* the likelihood function, denoted $p(D \mid \theta)$; and
* the prior distribution, denoted $p(\theta)$.
We also have some terms that are products of those elements, such as
* the numerator in Bayes' theorem $p(D \mid \theta)\ p(\theta)$;
* the denominator in Bayes' theorem, $p(D)$, which is a shorthand for $\sum\limits_{\theta^*}p(D\mid\theta^*)\ p(\theta^*)$, and which we often call the marginal likelihood or the evidence; and
* the posterior distribution, denoted $p(\theta \mid D)$, which comes from dividing the numerator by the denominator.
Now we add $m$, which is a model index where $m = 1$ stands for the first model, $m = 2$ stands for the second model, and so on. So when we have more than one model in play, we might refer to the likelihood as $p_m(y \mid \theta_m, m)$ and the prior as $p_m(\theta_m \mid m)$. It's also the case, then, that *each model* can be given a prior probability $p(m)$.
We should also clarify what we mean by a "model." In Bayesian statistics, the *model* is the numerator of Bayes' theorem, $p(D \mid \theta)\ p(\theta)$. But in this context, we're not focusing on the realization of $p(D \mid \theta)\ p(\theta)$ as computed using a real data set. Rather, $p(D \mid \theta)\ p(\theta)$ is a model in the sense that it may be used as a data-generating function. IMO, Kruschke didn't sufficiently clarify this point in the text, though he tried to in prose on page 268. After some personal communications with Kruschke, I'm confident this is what he meant.
As you'll see later, we won't be using **brms** to fit a hierarchical model of multiple sub models $(m = 1, \dots, M;\ \text{where}\ M > 1)$. Therefore we won't be going to the trouble of recreating Kruschke's Figure 10.1. However, we still might consider some new insights, such as how our new model indicator $m$ can allow us to compute
$$
p(m \mid D) = \frac{p(D \mid m)\ p(m)}{\sum_m p(D \mid m)\ p(m)},
$$
where $p(D \mid m)$ is the marginal likelihood (i.e., evidence) for model $m$, and $p(m \mid D)$ is the posterior probability of a model, given a particular model set. If we have two models, $m = 1$ and $m = 2$, we can rewrite that equation as
$$
\frac{p(m = 1 \mid D)}{p(m = 2 \mid D)} = \underbrace{\frac{p(D \mid m = 1)}{p(D \mid m = 2)}}_\text{BF} \frac{p(m = 1)}{p(m = 2)} \underbrace{\frac{/ \sum_m p(D \mid m)\ p(m)}{/ \sum_m p(D \mid m)\ p(m)}}_{=1}.
$$
The last term in the equation is the same in the numerator and the denominator, which means it cancels out, as indicated by the underbraced $=1$. The middle term on the right, $\frac{p(m = 1)}{p(m = 2)}$, is something we often don't have because many researchers are gutless and unwilling to place prior probabilities on their models. The first term on the right is a ratio of marginal likelihoods, by model. Or in other words, the first term on the right is the ratio of the evidence of $m = 1$ over the evidence of $m = 2$. As Kruschke put it, "The Bayes factor (BF) is the ratio of the probabilities of the data in models 1 and 2" (p. 268). To bring it into focus,
$$\text{BF} = \frac{p(D \mid m = 1)}{p(D \mid m = 2)}.$$
As ratios, BFs have a range of $(0, \infty)$, with $\text{BF} = 1$ when the evidence (i.e., marginal likelihood) for the two models under comparison is the same. As for interpretations, Kruschke further explained that
> one convention for converting the magnitude of the BF to a discrete decision about the models is that there is "substantial" evidence for model $m = 1$ when the BF exceeds $3.0$ and, equivalently, "substantial" evidence for model $m = 2$ when the BF is less than $1/3$ [@jeffreysTheoryProbability1961; @kassBayesFactors1995; @wetzelsStatisticalEvidenceExperimental2011]. (p. 268)
However, as with $p$-values, effect sizes, and so on, BF values exist along continua and are arguably better evaluated in terms of degree than as ordered kinds.
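To get a sense of those conventions, here's a quick sketch, not in the text, of what benchmark BF values imply for the posterior probability of $m = 1$ when both models have equal prior probabilities, in which case the posterior odds equal the BF and the conversion is $\text{BF} / (1 + \text{BF})$.

```{r}
# posterior probability of m = 1 under equal prior model probabilities,
# for a few benchmark BF values
bf <- c(1/3, 1, 3)
bf / (1 + bf)
```

Thus the "substantial" cutoffs of $1/3$ and $3$ correspond to posterior model probabilities of .25 and .75, given equal priors.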
## Example: Two factories of coins
Kruschke considered the coin bias of two factories, each described by the beta distribution. We can organize how to derive the $\alpha$ and $\beta$ parameters from $\omega$ and $\kappa$ with a tibble.
```{r, warning = F, message = F}
library(tidyverse)
d <-
tibble(factory = 1:2,
omega = c(0.25, 0.75),
kappa = 12) %>%
mutate(alpha = omega * (kappa - 2) + 1,
beta = (1 - omega) * (kappa - 2) + 1)
d %>%
knitr::kable()
```
Thus given $\omega_1 = .25$, $\omega_2 = .75$ and $\kappa = 12$, we can describe the bias of the two coin factories as $\operatorname{Beta}(\theta_{[m = 1]} \mid 3.5, 8.5)$ and $\operatorname{Beta}(\theta_{[m = 2]} \mid 8.5, 3.5)$. With a little wrangling, we can use our `d` tibble to make the densities of Figure 10.2. But before we do, we should discuss plotting.
In the past few chapters, we have explored different plotting conventions using themes from Wilke's [**cowplot** package](https://wilkelab.org/cowplot), such as `theme_cowplot()` and `theme_minimal_grid()`. We also modified some of our plots using principles from Wilke's [-@wilkeFundamentalsDataVisualization2019] text, [*Fundamentals of data visualization*](https://clauswilke.com/dataviz/), and his [-@Wilke2020Themes] [*Themes*](https://wilkelab.org/cowplot/articles/themes.html) vignette. To further build on those principles, each chapter from here onward will have its own color scheme. The scheme in this chapter is based on [Katsushika Hokusai](https://www.katsushikahokusai.org/)'s [-@HokusaiGreatWaveOffKanagawa1820] woodblock print, [*The great wave off Kanagawa*](https://artsandculture.google.com/asset/the-great-wave-off-kanagawa/MgHm0BHMRIT73g). We can get a prearranged color palette based on *The great wave off Kanagawa* from [Tyler Littlefield](https://twitter.com/tyluRp)'s [**lisa** package](https://github.com/tyluRp/lisa) [@R-lisa].
```{r, warning = F, message = F, fig.width = 3, fig.height = 1}
library(lisa)
lisa_palette("KatsushikaHokusai")
plot(lisa_palette("KatsushikaHokusai"))
```
The `"KatsushikaHokusai"` palette comes out of the box with five colors. However, we can use the `lisa_palette()` function to expand the palette by setting `type = "continuous"` and then increasing the `n` argument to a value larger than five. Here's what happens when you set `n = 9` and `n = 1000`.
```{r, fig.width = 3, fig.height = 1}
plot(lisa_palette("KatsushikaHokusai", n = 9, type = "continuous"))
plot(lisa_palette("KatsushikaHokusai", n = 1000, type = "continuous"))
```
Next, we will use the five base colors from `"KatsushikaHokusai"` to adjust the global theme default for all ggplots in this chapter. We can accomplish this with the `ggplot2::theme_set()` function. First, we start with the default `theme_grey()` as our base and then modify several of the settings with arguments within the `theme()` function.
```{r}
theme_set(
theme_grey() +
theme(text = element_text(color = lisa_palette("KatsushikaHokusai")[1]),
axis.text = element_text(color = lisa_palette("KatsushikaHokusai")[1]),
axis.ticks = element_line(color = lisa_palette("KatsushikaHokusai")[1]),
legend.background = element_blank(),
legend.box.background = element_blank(),
legend.key = element_blank(),
panel.background = element_rect(color = lisa_palette("KatsushikaHokusai")[1],
fill = lisa_palette("KatsushikaHokusai")[5]),
panel.grid = element_blank(),
plot.background = element_rect(color = lisa_palette("KatsushikaHokusai")[5],
fill = lisa_palette("KatsushikaHokusai")[5]),
strip.background = element_rect(fill = lisa_palette("KatsushikaHokusai")[4]),
strip.text = element_text(color = lisa_palette("KatsushikaHokusai")[1]))
)
```
You can undo this by executing `theme_set(theme_grey())`. Next we'll save the color names from a 9-color version of `"KatsushikaHokusai"` as a conveniently-named object, `kh`. We'll use `kh` to adjust the `fill` and `color` settings within our plots on the fly.
```{r}
kh <- lisa_palette("KatsushikaHokusai", 9, "continuous")
kh
```
Okay, it's time to get a sense of what we've done by making our version of Figure 10.2.
```{r, fig.width = 6, fig.height = 2}
length <- 101
d %>%
expand_grid(theta = seq(from = 0, to = 1, length.out = length)) %>%
mutate(label = str_c("factory ", factory)) %>%
ggplot(aes(x = theta, y = dbeta(x = theta, shape1 = alpha, shape2 = beta))) +
geom_area(fill = kh[6]) +
scale_y_continuous(NULL, breaks = NULL,
expand = expansion(mult = c(0, 0.05))) +
xlab(expression(theta)) +
facet_wrap(~ label)
```
We might recreate the top panel with `geom_col()`.
```{r, fig.width = 3, fig.height = 2}
tibble(Model = c("1", "2"), y = 1) %>%
ggplot(aes(x = Model, y = y)) +
geom_col(width = .75, fill = kh[5]) +
scale_y_continuous(expand = expansion(mult = c(0, 0.05)))
```
Consider the Bernoulli bar plots in the bottom panels of Figure 10.2. Oftentimes, the heights of the bars in Kruschke's model diagrams are arbitrary and just intended to give a sense of the Bernoulli distribution. This time, however, we might use the values $z = 6, N = 9$ Kruschke provided for the example at the top of page 270.
```{r, fig.width = 5, fig.height = 2}
n <- 9
z <- 6
tibble(flip = factor(c("tails", "heads"), levels = c("tails", "heads")),
prob = c((n - z) / n, z / n)) %>%
expand_grid(factory = str_c("factory ", 1:2)) %>%
ggplot(aes(x = flip, y = prob)) +
geom_col(width = 0.75, fill = kh[4]) +
scale_y_continuous(NULL, breaks = NULL,
expand = expansion(mult = c(0, 0.05))) +
xlab(NULL) +
theme(axis.ticks.x = element_blank(),
panel.grid = element_blank()) +
facet_wrap(~ factory)
```
We read:
> Suppose we flip the coin nine times and get six heads. Given those data, what are the posterior probabilities of the coin coming from the head-biased or tail-biased factories? We will pursue the answer three ways: via formal analysis, grid approximation, and MCMC. (p. 270)
Before we move on to a formal analysis, here's a more faithful version of Kruschke's Figure 10.2 based on the method from my blog post, [*Make model diagrams, Kruschke style*](https://solomonkurz.netlify.app/blog/2020-03-09-make-model-diagrams-kruschke-style/).
```{r, fig.width = 3.9, fig.height = 5, message = F}
library(patchwork)
library(ggforce)
p1 <-
tibble(x = 1:2,
d = 0.75) %>%
ggplot(aes(x = x, y = d)) +
geom_col(fill = alpha(kh[5], 0.9), width = 0.45) +
annotate(geom = "text",
x = 1.5, y = 0.2,
label = "categorical",
size = 5, color = kh[1]) +
annotate(geom = "text",
x = 1.5, y = 0.85,
label = "italic(P[m])",
size = 5, color = kh[1], family = "Times", parse = TRUE) +
coord_cartesian(xlim = c(-0.5, 3.5),
ylim = 0:1) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5, color = kh[1]))
## an annotated arrow
# save our custom arrow settings
my_arrow <- arrow(angle = 20, length = unit(0.35, "cm"), type = "closed")
p2 <-
tibble(x = 0.5,
y = 1,
xend = 0.5,
yend = 0) %>%
ggplot(aes(x = x, xend = xend,
y = y, yend = yend)) +
geom_segment(arrow = my_arrow, color = kh[1]) +
annotate(geom = "text",
x = 0.375, y = 1/3,
label = "'~'",
size = 10, color = kh[1], family = "Times", parse = T) +
xlim(0:1) +
ylim(0:1) +
theme_void()
p3 <-
tibble(x = seq(from = 0.01, to = 0.99, by = 0.01),
d = (dbeta(x, 5, 10) / max(dbeta(x, 5, 10)))) %>%
ggplot(aes(x = x, y = d)) +
geom_area(fill = alpha(kh[4], 0.85)) +
annotate(geom = "text",
x = 0.5, y = 0.2,
label = "beta",
size = 5, color = kh[1]) +
annotate(geom = "text",
x = 0.5, y = 0.6,
label = "list(italic(A)[1], italic(B)[1])",
size = 5, color = kh[1], family = "Times", parse = TRUE) +
scale_x_continuous(expand = c(0, 0)) +
ylim(0:1) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5, color = kh[1]))
p4 <-
tibble(x = seq(from = 0.01, to = 0.99, by = 0.01),
d = (dbeta(x, 10, 5) / max(dbeta(x, 10, 5)))) %>%
ggplot(aes(x = x, y = d)) +
geom_area(fill = kh[6]) +
annotate(geom = "text",
x = 0.5, y = 0.2,
label = "beta",
size = 5, color = kh[1]) +
annotate(geom = "text",
x = 0.5, y = 0.6,
label = "list(italic(A)[2], italic(B)[2])",
size = 5, color = kh[1], family = "Times", parse = TRUE) +
scale_x_continuous(expand = c(0, 0)) +
ylim(0:1) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5, color = kh[1]))
# bar plot of Bernoulli data
p5 <-
tibble(flip = factor(c("tails", "heads"), levels = c("tails", "heads")),
prob = c((n - z) / n, z / n)) %>%
ggplot(aes(x = flip, y = prob)) +
geom_col(width = 0.4, fill = alpha(kh[4], 0.85)) +
annotate(geom = "text",
x = 1.5, y = 0.15,
label = "Bernoulli",
size = 7, color = kh[1]) +
annotate(geom = "text",
x = 1.5, y = 2/3,
label = "theta",
size = 7, color = kh[1], family = "Times", parse = T) +
scale_x_discrete(expand = expansion(mult = 0.85)) +
scale_y_continuous(NULL, breaks = NULL,
expand = expansion(mult = c(0, 0.15))) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5, color = kh[1]))
# another bar plot of Bernoulli data
p6 <-
tibble(flip = factor(c("tails", "heads"), levels = c("tails", "heads")),
prob = c((n - z) / n, z / n)) %>%
ggplot(aes(x = flip, y = prob)) +
geom_col(width = 0.4, fill = kh[6]) +
annotate(geom = "text",
x = 1.5, y = 0.15,
label = "Bernoulli",
size = 7, color = kh[1]) +
annotate(geom = "text",
x = 1.5, y = 2/3,
label = "theta",
size = 7, color = kh[1], family = "Times", parse = T) +
scale_x_discrete(expand = expansion(mult = 0.85)) +
scale_y_continuous(NULL, breaks = NULL,
expand = expansion(mult = c(0, 0.15))) +
theme_void() +
theme(axis.line.x = element_line(linewidth = 0.5, color = kh[1]))
# another annotated arrow
p7 <-
tibble(x = c(0.375, 0.625),
y = 1/3,
label = c("'~'", "italic(i)")) %>%
ggplot(aes(x = x, y = y, label = label)) +
geom_text(size = c(10, 7), parse = T, family = "Times") +
geom_segment(x = 0.5, xend = 0.5,
y = 1, yend = 0,
arrow = my_arrow, color = kh[1]) +
xlim(0:1) +
ylim(0:1) +
theme_void()
# some text
p8 <-
tibble(x = 1,
y = 0.5,
label = "italic(y[i])") %>%
ggplot(aes(x = x, y = y, label = label)) +
geom_text(size = 7, color = kh[1], parse = T, family = "Times") +
xlim(0, 2) +
ylim(0:1) +
theme_void()
# dashed borders
p9 <-
tibble(x = c(0, 0.999, 0.999, 0, 1.001, 2, 2, 1.001),
y = rep(0:1, each = 2) %>% rep(times = 2),
z = rep(letters[1:2], each = 4)) %>%
ggplot(aes(x = x, y = y, group = z)) +
geom_shape(fill = "transparent", color = kh[1], linetype = 2,
radius = unit(1, "cm")) +
scale_x_continuous(NULL, breaks = NULL, expand = c(0, 0)) +
scale_y_continuous(NULL, breaks = NULL, expand = c(0, 0)) +
ylim(0:1) +
theme_void()
# define the layout
layout <- c(
# cat
area(t = 1, b = 5, l = 5, r = 9),
area(t = 6, b = 8, l = 5, r = 9),
# beta
area(t = 9, b = 13, l = 2, r = 6),
area(t = 9, b = 13, l = 8, r = 12),
# arrow
area(t = 14, b = 16, l = 2, r = 6),
area(t = 14, b = 16, l = 8, r = 12),
# bern
area(t = 17, b = 21, l = 2, r = 6),
area(t = 17, b = 21, l = 8, r = 12),
area(t = 23, b = 25, l = 5, r = 9),
area(t = 26, b = 27, l = 5, r = 9),
area(t = 8, b = 23, l = 1, r = 13)
)
# combine and plot!
(p1 + p2 + p3 + p4 + p2 + p2 + p5 + p6 + p7 + p8 + p9) +
plot_layout(design = layout) &
theme(plot.margin = margin(0, 5.5, 0, 5.5))
```
Note how we used the `geom_shape()` function from the [**ggforce** package](https://CRAN.R-project.org/package=ggforce) [@R-ggforce] to make the two dashed borders with the rounded edges. You can learn more from Pedersen's [-@pedersenDrawPolygonsExpansion] vignette, [*Draw polygons with expansion/contraction and/or rounded corners — geom_shape*](https://ggforce.data-imaginist.com/reference/geom_shape.html).
One thing we might rehearse before moving on is that that figure depicts two models, and each of the models is represented within one of the dashed areas. By *model*, we mean $p_m(D \mid \theta_m, m)\ p_m(\theta_m \mid m)$, conceptualized as a data-generating function.
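To make the data-generating interpretation a little more concrete, here's a minimal simulation sketch, not in the text, using the factory-2 prior $\operatorname{Beta}(8.5, 3.5)$ from above. The object names are ours.

```{r}
# simulate one data set from model m = 2:
# first draw a coin bias from the prior, then flip nine coins with that bias
set.seed(10)

theta_draw <- rbeta(n = 1, shape1 = 8.5, shape2 = 3.5)    # a draw from p(theta | m = 2)
y_draw     <- rbinom(n = 9, size = 1, prob = theta_draw)  # draws from p(D | theta, m = 2)

theta_draw
y_draw
```

In this sense, each dashed box in the figure is a recipe for generating coin flips.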
### Solution by formal analysis.
Here we rehearse that if we have a $\operatorname{Beta}(\theta \mid a, b)$ prior for $\theta$ of the Bernoulli likelihood function, then the analytic solution for the posterior is $\operatorname{Beta}(\theta \mid z + a, N - z + b)$. Within this paradigm, if you would like to compute $p(D \mid m)$, don't use the following function. It suffers from [underflow](https://en.wikipedia.org/wiki/Arithmetic_underflow) with large values.
```{r, eval = F}
# naive, don't use
p_d <- function(z, n, a, b) {
beta(z + a, n - z + b) / beta(a, b)
}
```
This version is more robust.
```{r p_d}
# robust, do use
p_d <- function(z, n, a, b) {
exp(lbeta(z + a, n - z + b) - lbeta(a, b))
}
```
You'd use our `p_d()` function like this to compute $p(D \mid m_1)$.
```{r}
p_d(z = 6, n = 9, a = 3.5, b = 8.5)
```
So to compute our BF, $\frac{p(D \mid m_1)}{p(D \mid m_2)}$, you might use the `p_d()` function like this.
```{r}
p_d_1 <- p_d(z = 6, n = 9, a = 3.5, b = 8.5)
p_d_2 <- p_d(z = 6, n = 9, a = 8.5, b = 3.5)
p_d_1 / p_d_2
```
If we computed the BF the other way, it'd look like this.
```{r}
p_d_2 / p_d_1
```
Since the BF itself is only $\text{BF} = \frac{p(D \mid m = 1)}{p(D \mid m = 2)}$, we'd need to bring in the priors for the models themselves to get the posterior probabilities, which follows the form
$$\frac{p(m = 1 \mid D)}{p(m = 2 \mid D)} = \left [\frac{p(D \mid m = 1)}{p(D \mid m = 2)} \right ] \left [ \frac{p(m = 1)}{p(m = 2)} \right].$$
If for both our models $p(m) = .5$, then we can compute $\frac{p(m = 1 \mid D)}{p(m = 2 \mid D)}$ like so.
```{r}
(p_d_1 * 0.5) / (p_d_2 * 0.5)
```
As Kruschke pointed out, because we're working in the probability metric, the sum of $p(m = 1 \mid D)$ and $p(m = 2 \mid D)$ must be 1. By simple algebra then,
$$p(m = 2 \mid D) = 1 - p(m = 1 \mid D).$$
Therefore, it's also the case that
$$\frac{p(m = 1 \mid D)}{1 - p(m = 1 \mid D)} = 0.2135266.$$
Thus, our value for $\frac{p(m = 1 \mid D)}{p(m = 2 \mid D)}$, which is 0.2135266, is in an odds metric. If you want to convert odds to a probability, you follow the formula
$$\text{probability} = \frac{\text{odds}}{1 + \text{odds}}.$$
Thus, the posterior probability for $m = 1$ is
$$p(m = 1 \mid D) = \frac{0.2135266}{1 + 0.2135266} = 0.1759554.$$
We can express that in code like so.
```{r}
odds <- (p_d_1 * 0.5) / (p_d_2 * 0.5)
odds / (1 + odds)
```
Relative to $m = 2$, our posterior probability for $m = 1$ is about .18. Therefore the posterior probability of $m = 2$ is 1 minus that value.
```{r}
1 - (odds / (1 + odds))
```
Given the data, the two models, and the prior assumption that both models were equally credible, we conclude the posterior probability of $m = 2$ is about .82.
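Here's a compact way, not in the text, to express that whole calculation at once, using the `p_d_1` and `p_d_2` values from above and equal prior probabilities for the two models.

```{r}
# posterior probabilities for both models at once
tibble(m     = 1:2,
       p_d   = c(p_d_1, p_d_2),
       prior = 0.5) %>% 
  mutate(posterior = (p_d * prior) / sum(p_d * prior))
```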
### Solution by grid approximation.
Before we jump to making Figure 10.3, we should take note of how Kruschke adjusted the notation in this section from the notation he used in the last section. At the bottom of page 271, we read:
> In our current scenario, the model index, $m$, determines the value of the factory mode, $\omega_m$. Therefore, instead of thinking of a discrete indexical parameter $m$, we can think of the continuous mode parameter $\omega$ being allowed only two discrete values by the prior.
To show what this means in practice, here we make the initial version of the primary data frame `d`. The `omega` column provides an index for a sequence of models $(m)$, each defined by its mode $(\omega)$. The `kappa` column has a constant value `12`, and we use some familiar algebra to define the values in the `a` and `b` columns, based on those `omega` and `kappa` values. In the `prior_omega` column we define the prior probability of each model with an `ifelse()` statement. Then within `expand_grid()` we apply a dense comb of `theta` values to each level of `omega`.
```{r}
d <- tibble(omega = seq(from = 0.005, to = 0.995, by = 0.005),
kappa = 12) %>%
mutate(a = omega * (kappa - 2) + 1,
b = (1 - omega) * (kappa - 2) + 1,
prior_omega = ifelse(omega %in% c(0.25, 0.75), 0.5, 0)) %>%
expand_grid(theta = seq(from = 0, to = 1, length.out = length))
# what?
glimpse(d)
```
Next, we define the prior values for $\theta$, what we have traditionally called $p(\theta)$ in mathematical notation. Within our model-comparison framework where we're using $\omega$ as an index for a sequence of models defined by their prior modes, we might more fully call that prior $p_\omega(\theta_\omega \mid \omega)$. Regardless of our new naming complexities, we still define the prior in code using the `dbeta()` function.
```{r}
d <-
d %>%
mutate(prior_theta_d = dbeta(x = theta, shape1 = a, shape2 = b))
```
We have saved the results as `prior_theta_d`. The first two parts of the name `prior_theta_d` are meant to help differentiate this as the prior for $\theta$, rather than the prior for the model, which we have already saved in the `prior_omega` column. We do have another complication, however. As we will see shortly, some of the panels in Kruschke's Figure 10.3 are based on $p_\omega(\theta_\omega \mid \omega)$ as expressed in the metric of the beta density, such as we have computed with the `dbeta()` function. We started using these sensibilities back in Chapter 6, where we learned about analytic solutions, and introduced the beta distribution for probability parameters, such as $\theta$. Yet other panels in Figure 10.3 are displayed in metrics that indicate Kruschke was using the prior in a probability metric that sums to one, such as we practiced in Chapter 5 with grid approximation. For those panels, we need another column in the `d` data frame.
```{r}
d <-
d %>%
group_by(omega) %>%
# normalize `prior_theta` to a proper probability scale
mutate(prior_theta_p = prior_theta_d / sum(prior_theta_d)) %>%
ungroup()
# what?
glimpse(d)
```
Thus we have two columns for the prior of $\theta$, what we've called $p_\omega(\theta_\omega \mid \omega)$ in mathematical notation. The first column, `prior_theta_d`, is in the beta-density metric, such as we introduced in Chapter 6. The `_d` suffix stands for "density." The second column, `prior_theta_p`, is in a probability metric that sums to one, such as we practiced in Chapter 5. The `_p` suffix stands for "probability."
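As a quick check, not in the text, the `prior_theta_p` values do indeed sum to one within each level of `omega`.

```{r}
d %>% 
  group_by(omega) %>% 
  summarise(sum_prior_theta_p = sum(prior_theta_p)) %>% 
  head()
```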
I'm sorry this has to be so technical, but this is what it takes to complete our task. Onward!
As in earlier chapters, we won't be able to make the wireframe plots on the left of Figure 10.3. We will start, instead, by making the plot in the second column of the first row. We'll save the object as `p12`, where the numbers indicate the row and column, respectively. Here is the code and its results.
```{r, fig.height = 3, fig.width = 4.3}
p12 <-
d %>%
ggplot(aes(x = theta, y = omega, fill = prior_theta_d * prior_omega)) +
geom_raster(interpolate = T) +
scale_fill_gradient(expression(p(theta[omega])), low = kh[1], high = kh[9]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(omega), breaks = 0:5 / 5, expand = c(0, 0), limits = 0:1)
p12
```
In the body of Figure 10.3, as well as in the surrounding prose, Kruschke referred to the third dimension of this figure as the "prior." I find that language a little vague however, given how we now have prior probabilities for parameters, as well as prior probabilities for the models in which we have embedded these parameters. To be more verbose with our notation, what we have plotted is $p(\theta_\omega)$, which we can define as
$$
p(\theta_\omega) = p_\omega(\theta_\omega \mid \omega)\ p(\omega).
$$
In words, we have multiplied each model's prior for $\theta$ by the prior probability of that model itself. In this figure, we are using the prior for $\theta$ as expressed in the metric of the beta density, which is why we used `prior_theta_d` in the computation code within the `aes()` function. At the moment it might not be obvious why we have preferred `prior_theta_d` in this panel, but my hope is it will become more apparent when we make the panels in the second row. For now, notice that when we set `fill = prior_theta_d * prior_omega` within the `aes()` function, that is where we defined the `fill` gradient as $p(\theta_\omega)$, what Kruschke called the "prior." Do further note that the highest values for the `fill` dimension in the plot topped out at about 1.5.
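Here's a quick numerical check, not in the text, on that maximum `fill` value.

```{r}
# the maximum of the fill dimension in p12
d %>% 
  summarise(max_fill = max(prior_theta_d * prior_omega))
```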
Next we move to the third panel in the first row, what we will save as `p13`. Notice that the x-axis in the text ranges up to about 25, and the "Marginal $p(\omega)$" values Kruschke displayed along that axis took on the values of zero and 15 (or thereabouts). Near the top of page 273, Kruschke wrote: "The absolute scale on $p(\omega)$ is irrelevant because it is the probability density for an arbitrary choice of grid approximation." Based on personal communications with Kruschke, it looks like the scale of the x-axis in this panel is a mistake. It's unclear how the mistake made its way into the published text, but I'm now confident Kruschke meant this scale to range from zero to one, and that the values that appear around 15 should actually be 0.5.
In the initial code chunk for the `d` data, we defined the prior probabilities for each level of $\omega$ as `prior_omega = ifelse(omega %in% c(0.25, 0.75), 0.5, 0)`. Those are our "$p(\omega)$" values. By the language of "Marginal $p(\omega)$," Kruschke is indicating he was summing over other dimensions. For each level of the `theta` sequence in our data, Kruschke multiplied $p(\omega)$ by $p_\omega(\theta_\omega \mid \omega)$ for that respective `theta` point, and then summed up the products. In more formal notation, I believe we could describe that as
$$
\sum_{\theta_\omega} p_\omega(\theta_\omega \mid \omega)\ p(\omega),
$$
But since we just learned above that that product term over which we're summing is also called $p(\theta_\omega)$, we can re-write that equation as
$$
\sum_{\theta_\omega} p(\theta_\omega),
$$
where we are summing the $p(\theta_\omega)$ values across the parameter space $\theta$, separately for each level of the model space $\omega$.
From a computational standpoint, the question is: *Do we use the* $p_\omega(\theta_\omega \mid \omega)$ *term in the beta-density metric* (`prior_theta_d`) *or in the probability metric* (`prior_theta_p`)*?* For this panel in Figure 10.3, the answer is `prior_theta_p`. To help show why, here's a plot that's not in the text.
```{r, fig.height = 3, fig.width = 6.72}
d %>%
filter(omega %in% c(0.20, 0.25)) %>%
group_by(omega) %>%
mutate(product = prior_theta_p * prior_omega) %>%
pivot_longer(prior_theta_p:product) %>%
mutate(key = ifelse(name == "prior_theta_p",
"p(theta)",
"p(theta)*p(omega)"),
facet = str_c("{omega==", omega, "}*'; '*p(omega)==", prior_omega)) %>%
ggplot(aes(x = theta, y = value, color = key, shape = key)) +
geom_point(size = 2) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0),
labels = c(0, 1:4 / 5, 1)) +
scale_y_continuous(NULL, expand = expansion(mult = c(0, 0.1))) +
scale_color_manual(NULL, values = kh[c(6, 4)], labels = scales::parse_format()) +
scale_shape_manual(NULL, values = c(20, 18), labels = scales::parse_format()) +
guides(colour = guide_legend(position = "inside"),
shape = guide_legend(position = "inside")) +
facet_wrap(~ facet, labeller = label_parsed) +
theme(legend.position.inside = c(0.90, 0.85),
panel.spacing.x = unit(0.75, "lines"))
```
In this plot, we're just highlighting two of our models, $\omega = 0.2$ and $\omega = 0.25$, which you can see indicated in the facet strip labels on the top of the panels. The x-axis is the $\theta$ sequence. On the y-axis, we are showing the corresponding `prior_theta_p` values in two ways. With the beige circular dots, we have shown those values as they are in the `d` data frame. Those values are in the probability metric, and thus they will sum to one within each level of $\omega$ (i.e., within each facet of the plot). With the dark-blue diamond dots, we have depicted those same values as multiplied by `prior_omega`, $p(\omega)$. As $p(\omega) = 0$ for the panel on the left, the product of $p(\theta)$ and $p(\omega)$ is zero in all cases. But as $p(\omega) = 0.5$ for the panel on the right, which shows one of our focal models for this section of the text, the heights for the $p(\theta)p(\omega)$ dots are always half of their corresponding $p(\theta)$ values. And thus, just as `prior_theta_p` sums to one in the right facet, `prior_theta_p * prior_omega` sums to 0.5. This is what we want for the correct version of Figure 10.3, row 1, column 3. Here's the plot.
```{r, fig.height = 3, fig.width = 3.5}
p13 <-
d %>%
group_by(omega) %>%
summarise(marginal_prior_omega = sum(prior_omega * prior_theta_p)) %>%
ggplot(aes(xmin = 0, xmax = marginal_prior_omega, y = omega)) +
geom_linerange(color = kh[4]) +
scale_x_continuous(expression(Marginal~p(omega)), expand = c(0, 0), limits = 0:1) +
scale_y_continuous(expression(omega), expand = c(0, 0), limits = 0:1)
p13 + labs(subtitle = "Our x-axis differs from the text.")
```
What Kruschke called "Marginal $p(\omega)$" is
$$
\sum_{\theta_\omega} p_\omega(\theta_\omega \mid \omega)\ p(\omega),
$$
which we have realized in code as:
```{r}
d %>%
group_by(omega) %>%
summarise(marginal_prior_omega = sum(prior_theta_p * prior_omega)) %>%
glimpse()
```
And you'll note that the sum of those marginal values is indeed one.
```{r}
d %>%
group_by(omega) %>%
summarise(marginal_prior_omega = sum(prior_theta_p * prior_omega)) %>%
summarise(sum_of_marginal_prior_omega = sum(marginal_prior_omega))
```
Building on those sensibilities, next we turn to the panel in the second column of the second row. On page 273 in the text, Kruschke described this as
> the marginal prior distribution on $\theta$ when averaging across models. Specifically, the middle panel of the second row shows $p(\theta)$, where you can see it is a bimodal distribution. This illustrates that the overall model structure, as a whole, asserts that biases are probably near 0.25 or 0.75.
In that panel we're literally depicting what we might describe as
$$
\sum_{\omega_\theta} p_\omega(\theta_\omega \mid \omega)\ p(\omega),
$$
which is summing over the prior probability of the parameter given the model, $p_\omega(\theta_\omega \mid \omega)$, multiplied by the prior probability for the model itself, $p(\omega)$, separately for each level of $\theta$. This summation operation is similar to the one we just did above, but with the positioning of the $\omega$ and $\theta$ terms inverted below the summation symbol $\sum$. And recall that since we learned the product term over which we are summing is also called $p(\theta_\omega)$, we can re-write that equation as
$$
\sum_{\omega_\theta} p(\theta_\omega).
$$
Perhaps frustratingly, this time we are using the `prior_theta_d` version of $p_\omega(\theta_\omega \mid \omega)$, not `prior_theta_p` as in the last panel. If you look closely at the metric of the y-axis, you'll get a clue as to why.
```{r, fig.width = 3.5, fig.height = 3}
p22 <-
d %>%
group_by(theta) %>%
summarise(marginal_prior = sum(prior_theta_d * prior_omega)) %>%
ggplot(aes(x = theta, y = marginal_prior)) +
geom_area(fill = kh[4]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(Marginal~p(theta)),
expand = expansion(mult = c(0, 0.05)), limits = c(0, 3.33))
p22
```
The maximum value on the y-axis is a little above 1.5 for both modes. Compare that to the highest values in the `fill` gradient back in the plot for our first panel, `p12`. To further demystify what we're depicting, it might help if we made a variation of that plot where we use a stacked area plot, with the `fill` grouped by `omega`.
```{r, fig.width = 4.35, fig.height = 3}
d %>%
filter(omega %in% c(0.25, 0.75)) %>%
group_by(theta) %>%
mutate(prior_theta_omega = prior_theta_d * prior_omega,
omega = factor(omega, levels = c(0.75, 0.25))) %>%
ggplot(aes(x = theta, y = prior_theta_omega,
group = omega, fill = omega)) +
geom_area() +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(Marginal~p(theta)),
expand = expansion(mult = c(0, 0.05)), limits = c(0, 3.33)) +
scale_fill_manual(expression(omega), values = kh[5:6])
```
By summing over the values of $p(\theta_\omega)$, we are metaphorically stacking the density values one atop the other.
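If you'd like a numerical check, not in the text, the marginal prior for $\theta$ is a proper mixture density, so summing it over the $\theta$ grid and multiplying by the grid spacing should return a value close to one.

```{r}
# approximate the area under the marginal prior for theta;
# 0.01 is the spacing of our theta grid (101 points from 0 to 1)
d %>% 
  group_by(theta) %>% 
  summarise(marginal_prior = sum(prior_theta_d * prior_omega)) %>% 
  summarise(area = sum(marginal_prior) * 0.01)
```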
Now we turn our attention to the panel in the third column of the second row. For this, we'll show the $\theta$ priors for the two focal models, $\omega \in \{.25, .75\}$, in a faceted plot. And just as in the last panel, we use the `prior_theta_d` version of $p_\omega(\theta_\omega \mid \omega)$ to return the values in the metric of the beta densities.
```{r, fig.width = 3.5, fig.height = 3}
omega_levels <- str_c("omega == ", c(0.75, 0.25))
p23 <-
d %>%
filter(omega %in% c(0.25, 0.75)) %>%
mutate(omega = factor(str_c("omega == ", omega), levels = omega_levels)) %>%
ggplot(aes(x = theta, y = prior_theta_d)) +
geom_area(fill = kh[4]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(Marginal~p(theta*"|"*omega)),
expand = expansion(mult = c(0, 0.05)), limits = c(0, 4)) +
facet_wrap(~ omega, ncol = 1, labeller = label_parsed) +
theme(strip.text = element_text(margin = margin(b = 1, t = 1)))
p23
```
Both facets show the priors for the two focal models in the metric of their respective beta densities. Both beta densities have maximum values at about 3. Thus when we multiplied those beta-density values by their 0.5 $p(\omega)$ values for the `fill` gradient in `p12`, their maximum values reduced to about 1.5.
Next we turn to the likelihood. Since the likelihood can in principle differ by each model, we might refer to each model's likelihood as $p_\omega(D \mid \theta_\omega, \omega)$. In code, we simply need to feed the data-summary values ($z$ and $N$) and the grid of $\theta$ values into the old `dbern()` function. The row structure of the `d` data frame is already set up to compute these likelihood values separately by each model, $\omega$. Thus, we might just save the results as `likelihood`. Then we can make the 2-dimensional density plot of the likelihood at the heart of Figure 10.3.
```{r, fig.height = 3, fig.width = 4.6}
# define the `dbern()` function
dbern <- function(x, z = NULL, n = NULL) {
return(x^z * (1 - x)^(n - z))
}
# define the data
z <- 6
n <- 9
# compute the likelihood
d <-
d %>%
mutate(likelihood = dbern(x = theta, z = z, n = n))
# plot
p32 <-
d %>%
ggplot(aes(x = theta, y = omega, fill = likelihood)) +
geom_raster(interpolate = T) +
scale_fill_gradient(expression(p[omega]('D|'*theta[omega]*', '*omega)),
low = kh[1], high = kh[9]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(omega), breaks = 0:5 / 5, expand = c(0, 0), limits = 0:1)
p32
```
Though each model differs by the prior, $p_\omega(\theta_\omega \mid \omega)$, they all share the same structure for the likelihood. Thus in this case we can reduce our verbose likelihood notation $p_\omega(D \mid \theta_\omega, \omega)$ to the simpler conventional notation $p(D \mid \theta)$, which Kruschke also discussed in his caption for Figure 10.1 (p. 267).
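As a quick check, not in the text, the `likelihood` values really do not vary across the levels of `omega`; for any given `theta` value, there is only one distinct likelihood value.

```{r}
d %>% 
  group_by(theta) %>% 
  summarise(n_likelihoods = n_distinct(likelihood)) %>% 
  distinct(n_likelihoods)
```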
At this point in the text (the middle of page 273), Kruschke wrote:
> The posterior distribution is shown in the lower two rows of Figure 10.3. The posterior distribution was computed at each point of the $\langle \theta, \omega \rangle$ grid by multiplying prior times likelihood, and then normalizing (exactly as done for previous grid approximations such as in Figure 9.2, p. 227).
I think we need a little more detail to pull this off in code. For the next step, we'll want to compute the marginal likelihood for each model. In the context of our $\omega$ paradigm, this is
$$
p(D \mid \omega) = \sum_{\theta_\omega} p_\omega(D \mid \theta_\omega, \omega)\ p_\omega(\theta_\omega \mid \omega),
$$
which just means we're summing the product of likelihood and prior for $\theta$ separately within each level of the model $\omega$. We'll save the results as `marginal_likelihood_omega`. Here's the computation.
```{r}
d <-
d %>%
group_by(omega) %>%
mutate(marginal_likelihood_omega = sum(likelihood * prior_theta_d)) %>%
ungroup()
```
Again, $p(D \mid \omega)$ means we have a unique marginal likelihood value for each model. Though it wasn't done in the text, it might be worth showcasing those values in a plot.
```{r, fig.height = 3, fig.width = 3.5}
d %>%
distinct(omega, marginal_likelihood_omega) %>%
ggplot(aes(x = omega, y = marginal_likelihood_omega)) +
geom_point(color = kh[4], size = 2/3) +
geom_segment(data = . %>%
filter(omega %in% c(0.75, 0.25)),
aes(y = 0, yend = marginal_likelihood_omega - 0.004),
arrow = arrow(length = unit(0.06, "inches")),
color = kh[5], linewidth = 1/3) +
scale_x_continuous(expand = c(0, 0), limits = 0:1) +
scale_y_continuous(expand = expansion(mult = c(0, 0.05))) +
labs(subtitle = "This plot is not in the text.",
x = expression(omega),
y = expression(p(D*'|'*omega)))
```
I added the vertical arrows to help mark off the marginal likelihood values for our two focal models, $\omega = .25$ and $\omega = .75$. Just to rehearse, we compute Bayes factors by taking ratios of any two values in that plot. For example, here's one way we might compute $\textit{BF}_{\omega_{0.75} / \omega_{0.25}}$.
```{r}
d %>%
distinct(omega, marginal_likelihood_omega) %>%
summarise(bf = marginal_likelihood_omega[omega == 0.75] /
marginal_likelihood_omega[omega == 0.25])
```
Just as you would hope, that value matches nicely with the Bayes factor value Kruschke reported in the middle of page 273.
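As another check, not in the text, we can compare that grid-based Bayes factor with the analytic one based on the `p_d()` function from the formal-analysis section, above.

```{r}
# analytic BF for the omega = 0.75 model over the omega = 0.25 model
p_d(z = 6, n = 9, a = 8.5, b = 3.5) / p_d(z = 6, n = 9, a = 3.5, b = 8.5)
```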
Anyway, now that we have the marginal likelihood for each model, $p(D \mid \omega)$, we can compute the posterior probabilities for the models, $p(\omega \mid D)$, which we can define as
$$
p(\omega \mid D) = \frac{p(D \mid \omega)\ p(\omega)}{\sum_\omega p(D \mid \omega)\ p(\omega)}.
$$
Computationally, it might make sense to save these values as a separate data frame called `d_posterior_omega`.
```{r}
d_posterior_omega <-
d %>%
distinct(omega, prior_omega, marginal_likelihood_omega) %>%
mutate(posterior_omega = (marginal_likelihood_omega * prior_omega) /
sum(marginal_likelihood_omega * prior_omega))
# what?
d_posterior_omega %>%
arrange(desc(posterior_omega), omega) %>%
head()
```
```{r, echo = F, eval = F}
# Yes, they sum to 1
d_posterior_omega %>%
summarise(p = sum(posterior_omega))
```
The values in the `posterior_omega` column are the posterior probabilities for each model, as indexed by $\omega$, *in a true probability metric*. Thus, the `posterior_omega` column will sum to one. Note that in the second half of the code chunk we have sorted the data in descending order by `posterior_omega`. Due largely to the values in the `prior_omega` column, $p(\omega \mid D) = 0$ for all models except our focal models $\omega \in \{.25, .75\}$. Further, the ratio of the two non-zero $p(\omega \mid D)$ values gives the odds of one model over the other. Here we compute that ratio both ways.
```{r}
d_posterior_omega %>%
summarise(posterior_odds_0.75_over_0.25 = posterior_omega[omega == 0.75] / posterior_omega[omega == 0.25],
posterior_odds_0.25_over_0.75 = posterior_omega[omega == 0.25] / posterior_omega[omega == 0.75])
```
Because both models had the same prior probability, 0.5, their posterior odds are the same as $\textit{BF}_{\omega_{0.75} / \omega_{0.25}}$ and $\textit{BF}_{\omega_{0.25} / \omega_{0.75}}$, respectively.
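As one final connection to the formal analysis, not in the text, converting those posterior odds back to a probability recovers the posterior probability of about .18 for the $\omega = 0.25$ model.

```{r}
d_posterior_omega %>% 
  summarise(odds = posterior_omega[omega == 0.25] / posterior_omega[omega == 0.75]) %>% 
  mutate(probability = odds / (1 + odds))
```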
Within our grid approach, we can still just use the old formula $\operatorname{beta}(\theta \mid z + a, N - z + b)$ from back in Section 6.3 to compute the posterior probability for $\theta$, within each model $\omega$. In mathematical notation, we might call these values $p_\omega(\theta_\omega \mid D, \omega)$. In the `d` data frame, we'll save them as `posterior_theta_d`. But just as with the priors for $\theta$, it will also come in handy to have a version of those values in a probability metric, rather than in a beta-density metric. We'll save those results in a column called `posterior_theta_p`. Then we'll add the posterior model probabilities `posterior_omega` to the data frame by way of a `left_join()` call.
```{r}
d <-
d %>%
mutate(posterior_theta_d = dbeta(x = theta, shape1 = z + a, shape2 = n - z + b)) %>%
group_by(omega) %>%
# normalize `posterior_theta` to a proper probability scale
mutate(posterior_theta_p = posterior_theta_d / sum(posterior_theta_d)) %>%
ungroup() %>%
left_join(d_posterior_omega,
by = join_by(omega, prior_omega, marginal_likelihood_omega))
```
We can now plot what Kruschke called the marginal $p(\omega \mid D)$ values in our version of the panel in the fourth row and third column.
```{r, fig.height = 3, fig.width = 3.5}
p43 <-
d %>%
group_by(omega) %>%
summarise(marginal_posterior_omega = sum(posterior_omega * posterior_theta_p)) %>%
ggplot(aes(xmin = 0, xmax = marginal_posterior_omega, y = omega)) +
geom_linerange(color = kh[4]) +
scale_x_continuous(expand = c(0, 0), limits = 0:1) +
scale_y_continuous(expand = c(0, 0), limits = 0:1) +
labs(x = expression(Marginal~p(omega*'|'*D)),
y = expression(omega))
p43 + labs(subtitle = "Our x-axis differs from the text.")
```
In more verbose notation, I believe we could describe those values as
$$
\sum_{\theta_\omega} p_\omega(\theta_\omega \mid D, \omega)\ p(\omega \mid D),
$$
In words, we multiplied each model's posterior for $\theta$, what we're calling $p_\omega(\theta_\omega \mid D, \omega)$, by the posterior probability of that model itself, $p(\omega \mid D)$. By $\sum_{\theta_\omega}$, we are indicating we computed those values for each level of `theta` within our `d` data grid, and then summed them up separately (i.e., marginalized them) within each level of the model index $\omega$.
In a similar move to what we did with the priors, we might consider simplifying the product term to $p(\theta_\omega \mid D)$. That would leave us with the thriftier version of the equation
$$
\sum_{\theta_\omega} p(\theta_\omega \mid D).
$$
But anyways, as Kruschke wrote in the text: "Visual inspection suggests that the ratio of the heights is about 5 to 1, which matches the Bayes factor of 4.68" (p. 273). We proactively confirmed that ratio and its relation to the Bayes factor in a code chunk a little above the one for our `p43` plot.
Now we might turn our attention to the panel in the second column of the fourth row. As an analogue to its sister panel, `p12`, this time we will use the `fill` gradient to depict $p(\theta_\omega \mid D)$, which we just discussed in mathematical notation, above.
```{r, fig.height = 3, fig.width = 4.33}
p42 <-
d %>%
ggplot(aes(x = theta, y = omega, fill = posterior_theta_d * posterior_omega)) +
geom_raster(interpolate = T) +
scale_fill_gradient(expression(p(theta[omega]*'|'*D)), low = kh[1], high = kh[9]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(omega), breaks = 0:5 / 5, expand = c(0, 0), limits = 0:1)
p42
```
Notice how the maximum value in the `fill` gradient is about two times as high in this panel as it was for its analogue of the priors, `p12`. You might also compare the results of these two panels to the related wireframe plots in Kruschke's original version of Figure 10.3 in the text.
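Here's a quick numerical comparison, not in the text, of those two maximum `fill` values.

```{r}
# maximum fill values for the prior panel (p12) and the posterior panel (p42)
d %>% 
  summarise(max_prior_fill     = max(prior_theta_d * prior_omega),
            max_posterior_fill = max(posterior_theta_d * posterior_omega))
```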
Building on those sensibilities, next we turn to the panel in the second column of the fifth row. In that panel we're literally depicting
$$
\sum_{\omega_\theta} p_\omega(\theta_\omega \mid D, \omega)\ p(\omega \mid D),
$$
which is summing over the posterior probability of the parameter given the model, $p_\omega(\theta_\omega \mid D, \omega)$, multiplied by the posterior probability for the model itself $p(\omega \mid D)$, separately for each level of $\theta$. As has become our custom by this point, we can re-write that equation with the thriftier notation
$$
\sum_{\omega_\theta} p(\theta_\omega \mid D).
$$
In his title for the y-axis, though, Kruschke further simplified the notation to $p(\theta \mid D)$ by dropping the $\omega$ subscript.
```{r, fig.width = 3.5, fig.height = 3}
p52 <-
d %>%
group_by(theta) %>%
summarise(marginal_posterior_theta = sum(posterior_theta_d * posterior_omega)) %>%
ggplot(aes(x = theta, y = marginal_posterior_theta)) +
geom_area(fill = kh[4]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(Marginal~p(theta*"|"*D)),
expand = expansion(mult = c(0, 0.05)), limits = c(0, 3.33))
p52
```
As we did with the prior analogue for this plot, here we'll show this in a different way where we take a stacked area plot approach, with the fill color grouped by `omega`.
```{r, fig.width = 4.35, fig.height = 3}
d %>%
filter(omega %in% c(0.25, 0.75)) %>%
mutate(posterior_theta_omega = posterior_theta_d * posterior_omega,
omega = factor(omega, levels = c(0.75, 0.25))) %>%
ggplot(aes(x = theta, y = posterior_theta_omega,
group = omega, fill = omega)) +
geom_area() +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(Marginal~p(theta*"|"*D)),
expand = expansion(mult = c(0, 0.05)), limits = c(0, 3.33)) +
scale_fill_manual(expression(omega), values = kh[5:6])
```
By summing over the values of $p(\theta_\omega \mid D)$, we are metaphorically stacking the density values one atop the other.
Compare the height of the density of the official version of this plot, `p52`, to its analogue for the prior, `p22`. In the text, Kruschke described this panel as follows:
> Given the data, the head-biased factory (with $\omega = 0.75$) is about five times more credible than the tail-biased factory (with $\omega = 0.25$), and the bias of the coin is near $\theta = 0.7$ with uncertainty shown by the oddly-skewed distribution. (p. 273)
Now we make our version of the final panel, row five, column three. Here we depict the posterior distributions for $\theta$ for the focal models, $\omega \in \{.25, .75\}$, in a faceted plot.
```{r, eval = F, echo = F}
d %>%
filter(omega %in% c(0.25, 0.75)) %>%
summarise(max = max(posterior_theta_d))
```
```{r, fig.width = 3.5, fig.height = 3}
p53 <-
d %>%
filter(omega %in% c(0.25, 0.75)) %>%
mutate(omega = factor(str_c("omega == ", omega), levels = omega_levels)) %>%
ggplot(aes(x = theta, y = posterior_theta_d)) +
geom_area(fill = kh[4]) +
scale_x_continuous(expression(theta), breaks = 0:5 / 5, expand = c(0, 0)) +
scale_y_continuous(expression(Marginal~p(theta*"|"*omega*","*D)),
expand = expansion(mult = c(0, 0.05)), limits = c(0, 4)) +
facet_wrap(~ omega, ncol = 1, labeller = label_parsed) +
theme(strip.text = element_text(margin = margin(b = 1, t = 1)))
p53
```
Finally, save a few more ggplots and combine them with the previous bunch to make the full version of Figure 10.3.
```{r, fig.width = 6.5, fig.height = 9, warning = F, message = F}
p21 <-
tibble(x = 1,
y = 8:7,
label = c("Prior", "K==12"),
size = c(2, 1)) %>%
ggplot(aes(x = x, y = y, label = label, size = size)) +
geom_text(color = kh[1], hjust = 0, parse = T, show.legend = F) +
scale_size_continuous(range = c(3.5, 5.5)) +
coord_cartesian(xlim = c(1, 2),
ylim = c(4, 11)) +
theme(axis.text = element_text(color = kh[9]),
axis.ticks = element_blank(),
panel.background = element_rect(color = kh[9]),
text = element_text(color = kh[9]))
p33 <-
tibble(x = 1,
y = 8:7,
label = c("Likelihood", "D = 6 heads, 3 tails"),
size = c(2, 1)) %>%
ggplot(aes(x = x, y = y, label = label, size = size)) +
geom_text(color = kh[1], hjust = 0, show.legend = F) +
scale_size_continuous(range = c(3.5, 5.5)) +
coord_cartesian(xlim = c(1, 2),
ylim = c(4, 11)) +
theme(axis.text = element_text(color = kh[9]),
axis.ticks = element_blank(),
panel.background = element_rect(color = kh[9]),
text = element_text(color = kh[9]))
p51 <-
ggplot() +
annotate(geom = "text",
x = 1, y = 8, label = "Posterior",
color = kh[1], size = 6, hjust = 0) +
coord_cartesian(xlim = c(1, 2),
ylim = c(3, 11)) +
theme(axis.text = element_text(color = kh[9]),
axis.ticks = element_blank(),
panel.background = element_rect(color = kh[9]),
text = element_text(color = kh[9]))
p11 <- plot_spacer()
p12 <- p12 + theme(legend.position = "none")
p32 <- p32 + theme(legend.position = "none")
p42 <- p42 + theme(legend.position = "none")
# combine and plot!
(p11 | p12 | p13) / (p21 | p22 | p23) / (p11 | p32 | p33) / (p11 | p42 | p43) / (p51 | p52 | p53)
```
Oh mamma!
### Solution by MCMC.
Kruschke started with: "For large, complex models, we cannot derive $p(D \mid m)$ analytically or with grid approximation, and therefore we will approximate the posterior probabilities using MCMC methods" (p. 274). He's not kidding. Welcome to modern Bayes.
#### Nonhierarchical MCMC computation of each model's marginal likelihood.
Before you get excited, Kruschke warned: "For complex models, this method might not be tractable. [But] for the simple application here, however, the method works well, as demonstrated in the next section" (p. 277).