
Simple mathematical models are easy to create, and under certain circumstances they are very accurate. For example, consider the well-known model used to predict the velocity of an object falling to earth at sea level in a vacuum. The object gains 9.8 meters per second of velocity for every second it falls:

velocity = 9.8 m/s/s × time elapsed (v = gt, with g = 9.8 m/s/s)
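
For readers who prefer code, here is the same model as a minimal Python sketch (the function name and the example times are ours, chosen only for illustration):

```python
G = 9.8  # gravitational acceleration at sea level, m/s^2

def freefall_velocity(t_seconds: float) -> float:
    """Velocity (m/s) after t seconds of free fall in a vacuum: v = g * t."""
    return G * t_seconds

for t in (1, 2, 3):
    print(f"after {t} s: {freefall_velocity(t):.1f} m/s")
# after 1 s: 9.8 m/s, after 2 s: 19.6 m/s, after 3 s: 29.4 m/s
```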

As any physicist will tell you, that particular model is exceptionally accurate. Unfortunately, not all models are so good: inaccurate source data, or application of a model to a dissimilar system, leads to failure.

The problems that arise from errors or inaccuracy in source data are fairly obvious. Problems that come from mis-application of a model to a dissimilar environment are less obvious but often more severe.

Keeping with the example of the object falling to earth, let us explore mis-application of a model to a dissimilar environment:

Consider a baseball falling in air rather than a vacuum. Because the baseball is subject to friction from the air, it doesn’t quite accelerate at 9.8 m/s/s. If measurements were taken, they might show that the baseball actually accelerates at about 9.5 m/s/s on average. Our velocity model’s application environment was changed slightly, and the model became slightly, though tolerably, inaccurate.

But what if we’re interested in the rate at which a basketball falls to earth? A basketball has more surface area relative to its mass, so it falls more slowly in the presence of air, and our model overestimates its speed even further. And what if we use our model to predict how fast a parachute will fall to earth? The parachute will not accelerate at anything near 9.8 m/s/s. If there is an updraft, it may even move in the opposite direction!
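
The effect of air resistance is easy to demonstrate with a toy simulation. The sketch below integrates the standard drag equation, m·dv/dt = m·g − ½·ρ·Cd·A·v², for each object; the masses, cross-sections, and drag coefficients are rough ballpark figures chosen for illustration, not measurements:

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3
G = 9.8      # gravitational acceleration, m/s^2

def velocity_with_drag(mass, area, cd, t_end, dt=0.001):
    """Euler-integrate m*dv/dt = m*g - 0.5*rho*Cd*A*v^2, starting from rest."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (G - 0.5 * RHO * cd * area * v * v / mass) * dt
        t += dt
    return v

def terminal_velocity(mass, area, cd):
    """Speed at which drag balances gravity: v_t = sqrt(2*m*g / (rho*Cd*A))."""
    return math.sqrt(2 * mass * G / (RHO * cd * area))

# Ballpark parameters: (mass kg, cross-section m^2, drag coefficient)
objects = {
    "baseball":   (0.145, 0.0043, 0.47),
    "basketball": (0.62,  0.045,  0.47),
    "parachute":  (80.0,  25.0,   1.5),
}

t = 3.0
print(f"vacuum model after {t} s: {G * t:.1f} m/s")
for name, (m, a, cd) in objects.items():
    print(f"{name}: {velocity_with_drag(m, a, cd, t):.1f} m/s "
          f"(terminal velocity {terminal_velocity(m, a, cd):.1f} m/s)")
```

Even this crude model shows the vacuum prediction overshooting the baseball modestly and the parachute wildly.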

The point is that models break down when they are used to predict events dissimilar from the data sets that were used to create them. The greater the dissimilarity, the worse mathematical models perform; in some instances they become virtually useless. This phenomenon is only amplified in the context of antimicrobial efficacy, where the systems involved are inherently complex.

Example: Modeling of UV Dose vs. Microbicidal Effect for Hospital Disinfection

Some UV device makers rely entirely on mathematical models to support claims that their devices kill microorganisms. This is weak support: the source data are subject to error, and virtually none of them are hospital-derived. Data used for UV mathematical models may come from food studies, water studies, or small-scale surface disinfection studies. A partial list of measurement-error sources in that type of UV source data is presented below, followed by a sketch of how such errors propagate:

  • Variability in the genus and species of microorganism used
  • Variability in the initial number of microorganisms present in the study
  • Variability in protective proteins expressed at the time of the study
  • Errors in the measurement of the UV dose (mJ/cm²)
  • Errors in the enumeration of surviving microorganisms
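
To see how such errors propagate, consider the simple first-order dose-response model common in the UV literature, in which predicted log10 reduction is proportional to delivered dose. In the Monte Carlo sketch below, every numeric value is hypothetical, chosen only to illustrate the principle:

```python
import random

def predicted_log_reduction(k, dose):
    """First-order (Chick-Watson-style) model: log10 reduction = k * dose."""
    return k * dose

k_nominal = 0.1      # susceptibility constant, log10 reduction per mJ/cm^2
dose_nominal = 40.0  # delivered UV dose, mJ/cm^2

random.seed(1)
predictions = []
for _ in range(10_000):
    k = random.gauss(k_nominal, 0.02)       # organism and source-data variability
    dose = random.gauss(dose_nominal, 6.0)  # dose measurement error
    predictions.append(predicted_log_reduction(k, dose))

predictions.sort()
low, high = predictions[250], predictions[9749]  # central 95% of the spread
print(f"nominal prediction: {k_nominal * dose_nominal:.1f} log10 reduction")
print(f"95% of simulated predictions: {low:.1f} to {high:.1f} log10 reduction")
```

With these assumed error levels, a nominal “4-log” prediction spreads across roughly 2 to 6 logs, a ten-thousand-fold range in surviving organisms.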

Hospital rooms have all sorts of complexities that are not accounted for in much of the existing UV disinfection literature, so the models are often mis-applied. For instance, hospital rooms contain objects at all angles, casting shadows on other objects, and hospital microorganisms may have little in common with the laboratory-grown microorganisms used in much academic testing.
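
Geometry alone illustrates the problem. Treating a UV lamp as a point source (a simplification, and the lamp power and cycle time below are hypothetical), the dose delivered to a surface falls off with the square of the distance to the lamp, and a shadowed surface receives no direct dose at all:

```python
import math

def direct_dose(power_w, distance_m, exposure_s, shadowed=False):
    """Point-source approximation: irradiance falls off as 1/(4*pi*d^2).
    Shadowed surfaces get no direct dose (reflected UV is ignored here)."""
    if shadowed:
        return 0.0
    irradiance_w_m2 = power_w / (4 * math.pi * distance_m ** 2)
    return irradiance_w_m2 * exposure_s  # J/m^2

# Hypothetical 100 W UV-C output over a 10-minute cycle:
for d, shadowed in [(1.0, False), (3.0, False), (2.0, True)]:
    label = "shadowed" if shadowed else f"{d} m, clear line of sight"
    print(f"{label}: {direct_dose(100, d, 600, shadowed):.0f} J/m^2")
```

A surface three meters from the lamp receives one ninth the dose of a surface one meter away, and a shadowed surface may receive close to none. Bench studies that place test coupons a fixed, short distance from the lamp capture none of this variation.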

In summary, publicly available UV disinfection source data is often generated in contexts other than hospital disinfection, such as drinking water studies, so models built on those data sets are not robust. Further, the models may be mis-applied by manufacturers to the complex hospital environment, which vastly reduces their predictive power. Buyers of UV whole-room disinfection devices should therefore not rely on mathematical models as “proof” that a device disinfects a room. Laboratory tests of the actual device in an ordinary use setting are much better evidence of efficacy, as are blinded, peer-reviewed clinical outcome studies.
