From Algorithm 0 to Algorithm 1

Today is April 16, 2020. On April 9 they took a breather to discuss how to proceed.

Mr. Buddy suggested displaying the observed numbers and the model's numbers side by side from the beginning, rather than relying on ad hoc comparisons (as on April 9). Stickler agrees, and believes that new versions of the model should be designed so that the results can be tested methodically and by the rules. Mister Winner is concerned about what it will all cost and wants versions arranged so that they can be reused with different initial settings. In addition, he requires that programs be documented well enough that a new programmer who must pick up the work can continue it instead of starting all over again. And Mr. Node, who has to program it all, points out that each new version will require a joint evaluation meeting, where independently measured and collected evaluation material is reviewed and proposals for the follow-up are discussed.

They submit their considerations to Mr. Sum who is qualified to decide. He gives the green light to realize all the stated wishes in a new version of algorithm 0 and to call the result algorithm 1. Here is the printout of a first test run:

Algorithm 1 – first test run on 16 April 2020

Buddy’s wish was met by setting the model’s start date to December 29, 2019 and displaying the observed values alongside the values of the model. (A list of data from Worldometer has been added to the program.) Winner’s wish was given shape by allowing the user to set 10 initial values (parameters): the number of periods in the displayed window; the first period in that window; the lengths of the incubation period, the mean duration of the disease, and the period during which a patient is infectious (all set at 5 days, equal to the duration of one period or generation); the survival and immunity ratios; the reproduction number; the date of the pandemic’s origin; a scaling factor for the numbers to be printed; and a calibration slider, because the actual date of origin is unknown. Mister Node was also instructed to document his program effectively. It’s here. Stickler’s wish has been honored by making the initial values visible along with what the algorithm yields, placed next to what is registered; methodically correct interpretation can take place on this basis. Node’s wish can be realized in, and simultaneously with, the (normative) debate on interpretation issues.
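The core of such a generation model can be sketched in a few lines. This is only an illustration of the mechanism described above: the function name, parameters, and the seed of one case are assumptions, not Mr. Node's actual program (which also carries the incubation, survival, immunity, and scaling parameters).

```python
# Minimal sketch of a generation model: cumulative cases grow by the
# reproduction number each 5-day period, with an offset (the
# "calibration slider") for the unknown date of origin.

def run_model(reproduction_number, periods, origin_offset=0, seed=1):
    """Return cumulative cases per period, origin delayed by origin_offset."""
    series = [0] * origin_offset   # periods before the shifted origin
    total, new = 0, seed
    while len(series) < periods:
        total += new               # cumulative cases so far
        series.append(total)
        new *= reproduction_number # next generation of new cases
    return series

print(run_model(2.0, 5))  # → [1, 3.0, 7.0, 15.0, 31.0]
```

Such a series, printed next to the registered Worldometer numbers, is what the test runs below compare.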

Interpretation (the normative debate)

What the graph of the first test of algorithm 1 shows are the observed values in red and the calculated values in black over 17 periods, starting at period 5. A first impression is that until the lines intersect (which must be at the beginning of period 19) they exhibit fairly similar behavior; after that both go up, but the model rises much more steeply than what has been observed in the world. Mr. Stickler is concerned that it is difficult to see what is happening before period 19, because the graph has been scaled to widely divergent extremes.

Mr. Buddy believes it is important to first see whether comparability improves when the reproduction number is adjusted. This is investigated in the second and third test runs: in the second, the reproduction number is set at 1.9, and in the third at 2.1. This produces considerably different pictures. It is clear that with a reproduction number of 1.9 the model remains comparable with the observations somewhat longer; the moment when that model starts to deviate seriously from what was observed seems to shift from period 15 to period 18. Stickler again makes a reservation, because he thinks that in the second run the model structurally underestimates the observations, which needs an explanation.
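How sensitive such a model is to the reproduction number can be seen with a few lines of code. The function and numbers here are illustrative assumptions, not the actual test-run figures:

```python
# Sketch of the sensitivity probed in tests two and three: a small change
# in the reproduction number compounds into a large difference after a
# dozen 5-day generations.

def generation_cases(r, generations, seed=1):
    """New cases in each generation under a constant reproduction number r."""
    return [seed * r ** g for g in range(generations)]

for r in (1.9, 2.0, 2.1):
    print(r, round(generation_cases(r, 15)[-1]))
# After 14 generations the r = 2.1 run yields roughly four times as many
# new cases per generation as the r = 1.9 run.
```

This compounding is why the 1.9 and 2.1 pictures diverge so strongly even though the numbers differ by only 0.2.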

In the fourth test, the picture is calibrated using the parameter settings. Assuming a difference of three periods between the two dates of origin used, and with the reproduction number set at 1.8, the picture of test 4 emerges. All four believe that this picture is the most suitable for more serious interpretation. In doing so, an explanation should be found for why nothing was recorded in the first five periods, whereas in the subsequent 7 periods the registrations were much higher than the model's expectations. It is conceivable that, amid all the uncertainties of a new virus and the fear of a pandemic, registration in the first two months was not very accurate; moreover, the threat was not taken seriously enough, so the reproduction number may have been higher in the first period. If that is assumed, there follows a group of three periods in which reality follows the model; an explanation may be that the reproduction number dropped to 1.8 during that period, because the threat of the virus became clear and people became more cautious.
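The calibration step of test four can be sketched as a small search: shift the model's origin and lower the reproduction number, then see which combination tracks the registrations best. The model function, the scoring by squared error, and the observed series below are all invented for illustration:

```python
# Sketch of calibration: compare a shifted, lower-r model run against a
# (hypothetical) observed series using a sum-of-squared-errors score.

def cumulative_model(r, periods, offset=0, seed=1):
    """Cumulative cases per 5-day period, origin delayed by `offset`."""
    series, total, new = [0] * offset, 0, seed
    while len(series) < periods:
        total += new
        series.append(total)
        new *= r
    return series

observed = [0, 0, 0, 2, 5, 12, 27, 55]   # invented registrations

def fit(r, offset):
    model = cumulative_model(r, len(observed), offset)
    return sum((m - o) ** 2 for m, o in zip(model, observed))

# A three-period shift with r = 1.8 scores better here than no shift
# with r = 2.0:
print(fit(1.8, 3) < fit(2.0, 0))  # → True
```

In the story above the gentlemen do this fitting by eye, with the calibration slider; the score only makes the same comparison explicit.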

In the fifth test, all parameter settings of test four are kept; only the number of periods in the window has been increased from 17 to 20. It is remarkable how much the change in scale shifts the focus to different aspects. Test five mainly shows how much, after period 17/22, the model goes its own way and becomes detached from the observations. An obvious explanation seems to be that measures had been implemented across Europe around March 27, 2020.

Well, everyone agrees that the model is no good if it is intended as a longer-term description of reality. But that intention can only reasonably hold for a limited period, for example at the beginning of an epidemic, when no measures have yet been taken and everyone is still susceptible. For the longer term, the model must be adapted. But perhaps that also applies to the shorter term, as the picture of test four suggests.

The debate on how to proceed with the model of algorithm 1 ends in the conclusion that it can be used tentatively at the beginning of an epidemic, to estimate the basic reproduction number through a period of observation and calibration. Otherwise, the model seems useless.
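That tentative use, estimating the basic reproduction number from an early window of observations, could be sketched as follows; the observed counts here are invented for illustration:

```python
# Sketch: estimate the basic reproduction number as the average ratio of
# new cases in successive 5-day generations. The counts are invented.

observed_new_cases = [10, 19, 37, 70, 134]  # hypothetical early counts

ratios = [later / earlier
          for earlier, later in zip(observed_new_cases, observed_new_cases[1:])]
r0_estimate = sum(ratios) / len(ratios)
print(round(r0_estimate, 2))  # → 1.91
```

Only while no measures are in effect and nearly everyone is still susceptible does such a ratio approximate the basic reproduction number, which is exactly the limitation the gentlemen conclude with.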

It is best now, the gentlemen believe, to first look at how an algorithm based on the dogmatic SIR model (algorithm 2, so to speak) behaves in the light of the observed numbers.