
Some Lessons Learned in MedDevOps

During my years of developing a remote patient monitoring system at Nokia, I learned a few things about good development practices in a regulated environment. Most of these are just good common sense, but it is surprising how often something becomes common sense only once you have tried to do it the wrong way. The issues below are not a comprehensive list, just the ones I thought of first; I will write new blog posts as others come to mind.

Automate Testing End-to-End

If you are doing DevOps, test automation is a given (Robot Framework is popular, but use whatever works for you). The catch is that a remote monitoring system often relies on third-party devices to measure and monitor patients, which then wirelessly transmit the results to the cloud via a mobile phone or a dedicated gateway device. You are likely not able to command the third-party devices to produce specific test data, so there is a temptation to bypass them entirely during testing, either with a laptop that mimics the wireless behavior of the third-party device, or by using mock data in the mobile phone or gateway.
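As an illustration, a bypass of this kind might look like the following Python sketch, which fabricates the payload a gateway could upload instead of letting a real device produce it. All field names are hypothetical; the point is that nothing in such a mock can exercise the real device's firmware or radio.

```python
import json
import random
import time

def mock_heart_rate_payload(device_id: str) -> str:
    """Fabricate the JSON a gateway might upload for a heart-rate
    measurement, bypassing the real device entirely.

    The field names are invented for illustration, not taken from
    any real product.
    """
    return json.dumps({
        "deviceId": device_id,
        "timestamp": int(time.time()),
        "type": "heart_rate",
        "valueBpm": random.randint(55, 95),  # plausible resting range
    })

payload = json.loads(mock_heart_rate_payload("test-device-01"))
print(payload["type"])                   # "heart_rate"
print(55 <= payload["valueBpm"] <= 95)   # True
```

Convenient as this is for driving specific scenarios, a suite built only on such mocks can pass forever while the real device misbehaves.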

The problem with this approach is that it does not replicate bugs in the third-party devices, and it does not allow you to test non-functional requirements like stability and battery lifetime. In our case, one third-party device had a firmware bug that caused it to drain its battery far faster than anticipated; we were probably using the device’s Bluetooth functionality in a way the manufacturer had not tested. You are obligated to provide objective evidence that your system works according to requirements, and testing only in simulation does not cut it. You could of course do a final manual testing round with real devices just before release, but then you are not doing DevOps.

So, to guarantee that passing the automated tests also means that the system works, you need to instrument your third-party devices. This may mean some good old-fashioned electro-mechanical engineering for your testing team, such as using solenoid actuators to push buttons on the device. There are also simulators that can be connected to the measurement device’s sensors to produce the desired input; for example, we used a Fluke ECG simulator to produce data for an ECG monitor. You could even use a camera to read the screen and recognize whether it shows the desired output, but that gets complicated. Fortunately it is usually not needed, since successful wireless transfer of the data is a sufficient indication of correct operation.
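Such a rig can be wrapped in a thin abstraction so the test logic stays readable: actuate the physical button, then poll the cloud until the measurement appears. The sketch below uses hypothetical class and method names, and stubs out both the hardware and the backend so that it runs standalone; in a real rig the actuator would drive a relay or GPIO pin, and the client would poll your backend API.

```python
import time

class SolenoidActuator:
    """Stub for a solenoid that presses the device's start button.
    A real implementation would energize a relay or GPIO pin."""
    def press(self, duration_s: float = 0.2) -> None:
        self.last_press = duration_s

class CloudClient:
    """Stub for the backend; a real client would poll a REST endpoint."""
    def __init__(self):
        self._measurements = []
    def inject(self, m):
        # Test hook standing in for the device-to-cloud wireless path.
        self._measurements.append(m)
    def latest_measurement(self, device_id):
        return next((m for m in reversed(self._measurements)
                     if m["deviceId"] == device_id), None)

def measurement_reaches_cloud(actuator, cloud, device_id, timeout_s=60):
    """Press the physical button, then wait until the measurement shows
    up in the cloud: end to end through the real device and gateway."""
    actuator.press()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        m = cloud.latest_measurement(device_id)
        if m is not None:
            return m
        time.sleep(0.01)
    raise AssertionError(f"no measurement from {device_id} in {timeout_s}s")

# Wiring with stubs so the sketch runs without hardware; in reality the
# measurement would arrive from the device, not from inject().
cloud = CloudClient()
cloud.inject({"deviceId": "ecg-01", "valueBpm": 72})
result = measurement_reaches_cloud(SolenoidActuator(), cloud, "ecg-01")
print(result["valueBpm"])  # 72
```

The same pattern works whether the test runner is plain Python or a Robot Framework keyword library built on top of it.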

Obviously, using real devices is not feasible (or necessary) for all test cases, such as server load testing, but for the most part it works. You will probably need to set up several devices working in parallel to keep the run time of your test suite acceptable.
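When sizing the device pool, a rough capacity estimate helps. Assuming you know roughly how long each hardware-bound test takes, a greedy longest-first assignment gives a quick estimate of the suite's wall-clock time for a given pool size; all the numbers below are made up for illustration.

```python
import heapq

def estimated_wall_clock(test_durations_min, n_devices):
    """Assign tests longest-first to the least-loaded device and
    return the estimated wall-clock time of the suite in minutes."""
    loads = [0] * n_devices          # min-heap of per-device total load
    heapq.heapify(loads)
    for d in sorted(test_durations_min, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + d)
    return max(loads)

durations = [30, 25, 20, 15, 10, 10, 5]  # hypothetical per-test minutes
print(estimated_wall_clock(durations, 1))  # 115 -- serial on one device
print(estimated_wall_clock(durations, 3))  # 40  -- three devices in parallel
```

The estimate is optimistic (it ignores setup time and flaky retries), but it is enough to see how many devices bring the nightly run under your time budget.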

Use Program Increment Planning Events

When you have multiple teams in multiple locations participating in system development, and various dependencies between the work done by the teams, Program Increment Planning Events as described by the Scaled Agile Framework (SAFe) work rather nicely. A Program Increment is a period consisting of multiple sprints, typically 4-6, and before each such period you have a face-to-face planning meeting where the teams meet each other and the Business Owners to plan the work ahead. The planning meeting allows the teams to discuss design decisions and discover dependencies between their work.

The nice thing about these planning events is that they are the perfect place to introduce security thinking and medical device risk management into the development workflow when the awareness level in the development teams still needs to grow. It is enough to have one security expert and one risk management expert who participate in the planning work and help the teams to consider the security and safety risks in their design and mitigate those risks accordingly.

Note, however, that if you only have a single team, or very few dependencies between teams, Program Increments are probably not very valuable for you.

Co-Creation with Customer

Unless you are a start-up founded by doctors (which is not uncommon), you will need someone with clinical experience to tell you how your product should work. You could hire someone, since you will need clinical expertise for risk management and clinical evaluation anyway, but it is more effective to find a healthcare provider sympathetic to your cause – one that sees the value in your product – and talk them into co-creating the solution with you. Naturally you cannot do that in a way that would endanger patients, but you can experiment with technical pilots, where only the technical feasibility and usability of the system are tested. The downside is that your system can only be used alongside the existing procedures, not replace them, so you cannot prove the value yet – but you can prove the feasibility and give your clinical partner the chance to try out the system in practice. Whether a healthcare provider is interested in participating in such pilots varies a great deal, and attracting interest is much easier if you are a well-known large company than if you are a small, unknown startup.

Once you have obtained initial interest, DevOps will work in your favor by allowing you to iterate the product with your clinical partner using short turnaround times. It is a very powerful experience for medical professionals when they make a suggestion and see it already working two weeks later. People in the medical domain are not used to fast turnaround times because the regulatory verification and validation requirements normally prevent fast iteration. But within the context of a technical pilot it is possible. Also, when your system is in production, and fast update cycles are no longer possible due to regulatory constraints, it is worthwhile to maintain a demo environment accessible to your clinical partners where you can gather feedback about new features. Such a demo environment also has the advantage that multiple customers can participate. Unfortunately, healthcare providers are a notoriously heterogeneous bunch, and a system optimized for just one of them is not going to make all of them happy.

Do Risk Analysis Early and Often, but not Too Much

There is a definite temptation to leave risk analysis to a later phase in the project, “when we have all the facts”. After all, we will understand the risks better once we know what the detailed implementation looks like, right?

The problem with that kind of thinking is that risk analysis guides the implementation, not the other way around. If you do risk analysis at a late stage in a project, there is a high probability that you will discover new risk control requirements, which will delay the project at best and require significant re-design at worst. By doing risk analysis early, you will discover the most important risk control requirements early enough to include them in your system design.

However, the original thought – we will understand the risks better once we know what the detailed implementation looks like – is also right. You have to revisit risk analysis whenever you learn something new that affects it. Risk management is a continuous, iterative process, and you need to integrate it into your development process. I already mentioned doing risk management at the Program Increment planning stage. Developers should also consider risks related to their implementation whenever they complete an implementation task. One possibility is to require a short risk analysis section in each User Story as part of the Definition of Done, with any newly identified risks added to the risk management file. This may be as short as a few sentences, and it can be waived for User Stories related to low-risk components, but it should be reviewed as part of the code review.

It is also critical to fine-tune the risk management process so that you optimize the effort spent on it. It is easy to waste a lot of effort on risk management if you concentrate on the wrong things. Risk management is not done for its own sake, but to improve safety. This may sound trivial, but when you are implementing standards-compliant procedures, you easily start optimizing for compliance rather than safety. After all, the standard is your requirement specification, whereas safety is just a vague concept mentioned somewhere in the standard’s Introduction chapter that you probably skipped.

In other words, be wary of defining your procedures to achieve the highest level of compliance rather than the highest level of safety and performance. Your goal should be the highest level of safety and performance, while remaining sufficiently compliant with the standard. Also, standards are not the final word; the legislation is. Compliance with a harmonized standard gives a presumption of conformity with the relevant medical device legislation, but that just means it is a known good solution. Other solutions are also acceptable, and if your way is better, you can probably convince your Notified Body to accept it.
