
Monday, February 13, 2006

Dark days

Personally I'm having a tough time of it at the moment. I've had cause to frequent the local hospital, and during a rather traumatic time today I was considering how much electronics is used in the intensive care of patients. Obviously we've all heard the horror stories about the good old Therac-25 cancer machine and the smoking patients who received massively high doses of "treatment". An inquiry pondered how it happened and concluded that some unknown scapegoat was to blame: their model of the problem had been wrong, the assumptions had been Wrong, the safety precautions had been WRONG.

My question is: could it happen again? We have these elaborate systems for testing software and exercising code paths, but in electronics I still feel there could be "ghosts in the machine". I remember a hardware lecturer once, discussing hardware fault checking and discovery, saying you can't design complete test coverage because of time, money and scale. If you have a system of a few latches, or even an encoding device written in software for a small FPGA, then yes, you can test it, just. But given all these large systems that monitor so much, can analysing the small units that make them up really confirm the safety of the overall, massive, system?

I also remember a friend once being horrified that safety and fault fixing in critical systems was judged on a monetary basis. I took the view that this was acceptable, given that the alternative was not to have any product at all, and so to lose all the benefits that do exist in spite of the possible problems. Recently, though, I've been looking at how patients are now so very nearly "plugged into" the machines, how sensors, measures, valves and dials inform the care team's decisions so directly. Is a monetary consideration acceptable as a prime indicator of risk?
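For what it's worth, the Therac-25 overdoses were traced in part to exactly the kind of bug that unit-level testing struggles to catch: a race between the operator editing the treatment mode and the machine's setup routine, which had already latched the old settings. Here is a minimal, hypothetical sketch of that class of bug; the names and structure are mine for illustration, not the real Therac-25 software.

```python
# Hypothetical sketch of a Therac-25-style race condition: setup
# snapshots the treatment mode, a fast operator edit then moves the
# hardware, and the stale snapshot is used when the beam fires.

class TreatmentMachine:
    def __init__(self):
        self.mode = "xray"          # high-current mode; needs the target in the beam path
        self.target_in_place = True
        self.setup_done = False

    def begin_setup(self):
        # Setup latches a snapshot of the mode. Bug: a later edit
        # never clears setup_done, so the snapshot goes stale.
        self.configured_mode = self.mode
        self.setup_done = True

    def edit_mode(self, new_mode):
        # The operator's edit repositions the hardware target to match
        # the *new* mode, but the latched beam settings are untouched.
        self.mode = new_mode
        self.target_in_place = (new_mode == "xray")

    def fire(self):
        if not self.setup_done:
            raise RuntimeError("not set up")
        # Beam current comes from the stale snapshot; target position
        # comes from the edit. The two can disagree.
        if self.configured_mode == "xray" and not self.target_in_place:
            return "OVERDOSE"       # raw high-current beam, no target
        return "ok"

m = TreatmentMachine()
m.begin_setup()             # setup latches mode = "xray"
m.edit_mode("electron")     # quick edit: target withdrawn, snapshot stale
print(m.fire())             # prints "OVERDOSE"
```

Each method passes its own unit tests; the failure only appears in one particular interleaving of calls, which is the point about testing small units versus the whole system.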
There is of course a sliding scale in these things. If one colleague of mine messes up, maybe my new phone will crash: annoying, but no biggie. If a different colleague of mine messes up, a patient might get an incorrect dose, which the staff can, hopefully, notice quickly and correct. If data I am curator of breaks, someone might lose 30 quid for a week; again, no biggie, it can be fixed. But if someone far back down a chain of design messes up and no one notices, and the system then tries to fix the problem it has caused, no one notices until it can be too late and a dangerous loop develops. Think of HAL in 2001: Frank dies because HAL is trying to fix the mistake it had been told it couldn't make. Was Arthur C. Clarke right then? Apparently; Therac-25 shows this, since the machine was believed to be infallible. Is he still right now? Are the risks any worse than the mistakes human nurses can make under pressure? A bit short, and I've missed a few points I know, but like I said, it's all a little bit tough right now.


