In Fig. 1 I show a graph for China as produced by algorithm 0, which has been calibrated by shifting the model six weeks forward in time and by adopting a reproduction number of 1.78. The horizon of Fig. 1 is 13 weeks: it shows the numbers from day 35 to day 93. What is remarkable in hindsight is that the cumulative number of infections had almost completely flattened after five or six weeks of a seriously visible epidemic, say from day 70 after the coronavirus first showed itself in Wuhan.

Fig. 1

According to these numbers, the *engine* of the Wuhan *inferno* came to a halt at that moment. Yet, in the presentation of Fig. 1 the model only fits the data acceptably from day 49 to day 63, which allows the observation that during that period the reproduction number was 1.78 (or close to it).

I could have shown another picture:

Fig. 2

This is the same model with the same data and parameter settings; only the model output is shifted five weeks instead of six. What we see now is that the model fits the data acceptably from the beginning up to day 49. And what we can also observe is that during that period, too, the reproduction number was 1.78 (or close to it).

What follows is that we can use the model to find periods in which it calibrates nicely against the observed data, and that this can help us establish, *post hoc*, the reproduction number.
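This period-hunting procedure can be sketched in a few lines. The sketch below is my own illustration, not the author's algorithm 0: it runs on synthetic cumulative counts, assumes a serial interval of one week (so that the weekly growth factor of new infections approximates the reproduction number), and uses an arbitrary tolerance of 0.15 around the target value.

```python
def weekly_growth_factors(cumulative):
    """Growth factors of weekly new infections, from daily cumulative counts."""
    weekly_new = [cumulative[i] - cumulative[i - 7]
                  for i in range(7, len(cumulative), 7)]
    return [later / earlier
            for earlier, later in zip(weekly_new, weekly_new[1:])
            if earlier > 0]

def well_calibrated_weeks(cumulative, r_target, tol=0.15):
    """Weeks whose implied reproduction number lies within tol of r_target.

    Assumes a serial interval of one week, so the weekly growth factor
    of new infections stands in for the reproduction number.
    """
    return [week for week, growth in enumerate(weekly_growth_factors(cumulative))
            if abs(growth - r_target) <= tol]

# Synthetic data: exponential growth with R = 1.78 up to day 49, flat after.
g = 1.78 ** (1 / 7)                       # daily growth factor
daily_new = [10 * g ** min(d, 49) for d in range(98)]
cumulative = []
total = 0.0
for x in daily_new:
    total += x
    cumulative.append(total)

good_weeks = well_calibrated_weeks(cumulative, 1.78)
print(good_weeks)  # → [0, 1, 2, 3, 4, 5]: only the weeks before the flattening
```

On the synthetic curve the procedure recovers exactly the weeks in which the data grew at the target rate, which is the post hoc reading of the reproduction number described above.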

What also follows is that this *modus operandi* shows that the reproduction number can diverge seriously from period to period. This raises a strategic issue.

Will we try to adapt the model so that it provides better overall predictions? That will most likely lead us into a quagmire of higher math. Or will we partition the dynamics of the observations into different compartments (periods), for each of which we make a different model, the models subsequently relaying the interpretability baton?
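The second option, handing state from one period's model to the next, can be sketched as follows. This is my own illustration, not the author's algorithm 0: I assume a simple discrete-time SIR model, a recovery rate of 1/7 per day, a hypothetical population size and seed, and an illustrative second-period reproduction number of 0.7; only the value 1.78 and the day-49 breakpoint come from the text.

```python
def run_sir_segment(s, i, r_number, gamma, days, n):
    """Advance a discrete-time SIR model `days` steps at a fixed reproduction number."""
    beta = r_number * gamma              # transmission rate implied by R
    cumulative = []
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        cumulative.append(n - s)         # everyone no longer susceptible
    return s, i, cumulative

def run_segments(segments, n=11_000_000, i0=100, gamma=1 / 7):
    """Chain SIR segments, each with its own R, relaying (s, i) between them."""
    s, i = n - i0, float(i0)
    cumulative = []
    for r_number, days in segments:
        s, i, part = run_sir_segment(s, i, r_number, gamma, days, n)
        cumulative.extend(part)
    return cumulative

# R = 1.78 up to day 49 (from the text), then a hypothetical R = 0.7.
curve = run_segments([(1.78, 49), (0.7, 44)])
```

Each segment is a model of its own; the final susceptible and infectious counts of one segment seed the next, which is the baton being relayed.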

My choice is, of course, for the latter option.