Manikin Death: Can We Do It Right?

I have been interested in the topic of preparing students for a patient death ever since we used to run an unsuccessful mock code with nursing students on the last day of their pediatric simulation rotation. We did this at the request of a pediatric faculty member, back in the days when we didn’t understand much about simulation education and training. She wanted the students to work through the code and then experience the sense of loss. Having come up through the ranks as a pediatric oncology nurse, she wanted their first patient death to occur in a controlled (as much as we could control things back then) environment. I did persuade her to let me tell one student in each group ahead of time that the code was going to be unsuccessful. I felt this was necessary so the group wouldn’t conclude they had “failed” and that we were simply being dishonest about their incompetence. Having one of their own back us up when we told them the truth was critical to regaining their trust on more than one occasion.

We have “evolved” since then; we no longer do any death scenarios with nursing students, but the memories and emotions of those exercises have stayed with me. Off and on over the years I’ve looked at the literature on simulation and training for death. While I am no expert on the matter, I have come to believe we are doing students a disservice by either (a) ignoring the subject of death in their training, (b) treating death as a certainty only in the elderly, or, in the case of resuscitation drills, (c) making patient death equal to provider failure.

At the IMSH in 2016, I sat in on a panel discussion on the “death of the manikin.” The panelists were experts in the field of medical simulation, and while they agreed that manikin death was reasonable in end-of-life scenarios where the learners knew what was going to happen, they split into two camps when it came to learner performance in mock code drills. For the pro-death (YES) panelists, the manikin should be allowed to die if the actions of the learners warranted it. Those holding the opposite position (NO) felt the scenario should be stopped prior to flat-line, because learning essentially stops or cannot proceed once emotion has taken over intellect. The YES group was adamant that if you were going to learn to run codes correctly, you had to suffer the consequences of not doing them correctly. Likewise, the NO group felt that when a code scenario was going south, it should be stopped so people could discuss it rationally without the emotional burden of having watched the patient “die”.

Neither side in that debate had considered the probability of an unsuccessful outcome in a mock code. I know this because I asked about it during the Q&A and was met with rather puzzled looks from both camps.

The probability of success in an in-hospital resuscitation is between 35 and 45%, depending upon whose numbers you use. If these are accurate estimates, over half of in-hospital code situations are unsuccessful. Does that mean over half the code teams in the world are failing to perform resuscitations adequately? I don’t think so. I think sometimes you can do everything right and the patient is still not going to survive.

I believe there is a better way to handle the issue of death in training healthcare providers. What if we set up our mock code drills so that the probability of success was factored in? Picture a simulation where you have ten envelopes containing ten index cards. Four cards are printed with “Successful” and six with “Unsuccessful,” approximating the real probability of a successful resuscitation. During the briefing, after the case has been laid out for the learners, ask one of the students to select one of the ten envelopes and hand it to the manikin operator. Then tell them the truth: they have a four in ten probability of success. Six out of ten times, death is going to happen no matter what they do. They are to do their best and practice what they have learned. Whatever happens will happen; either way, it will all be discussed in the debrief.

Based upon what’s on that card, the operator will follow a specific flow. Regardless of the learners’ actions during the scenario, the operator follows that flow to its conclusion unless directed by the facilitator to alter it. If the designated path is for success and a serious error is made, the facilitator can direct the operator to switch to the unsuccessful path. All things being equal, if the plan is for the patient to survive, the patient survives. Similarly, if the participants are doing a great job on all the learning objectives, the facilitator can direct the operator to move to the successful path.

Only at the end of the debrief do the learners see what was on the card. By that point, the debrief should have covered any serious mistakes made during the exercise, so the participants would understand what they did incorrectly and not be surprised to see that the patient should have survived. Similarly, after going over the events and working through any possible shortcomings in performance, if the card showed “Unsuccessful” the participants would be able to see that the failed resuscitation was preordained, not caused by anything they necessarily did or did not do.
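For anyone who wants to play with the numbers before building the drill, the envelope draw and the facilitator override can be sketched in a few lines of Python. This is purely my own illustration of the protocol described above; the function names and the override mechanism are invented for the sketch, not part of any simulation software.

```python
import random

SUCCESS_CARDS = 4   # cards printed "Successful"
TOTAL_CARDS = 10    # one card per envelope

def draw_card(rng=random):
    """Simulate a learner drawing one of the ten envelopes."""
    cards = (["Successful"] * SUCCESS_CARDS
             + ["Unsuccessful"] * (TOTAL_CARDS - SUCCESS_CARDS))
    return rng.choice(cards)

def final_outcome(card, facilitator_override=None):
    """The operator follows the card's path to its conclusion unless the
    facilitator directs a switch (serious error, or exceptional performance)."""
    return facilitator_override if facilitator_override else card
```

Running `final_outcome(draw_card())` many times converges on the roughly 40% survival rate the ten envelopes are meant to encode, which is the whole point: the outcome distribution is fixed before the learners touch the manikin.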

My idealized scenario makes several assumptions that may not hold. First, it assumes the facilitator can catch a serious error, or remarkably good performance, during the mock code and dispassionately direct the switch in the flow. Second, it assumes a level of debriefing expertise in the facilitator that may not exist. Third, it assumes the participants are sufficiently prepared, and have sufficient expertise and maturity, to objectively evaluate their actions. These factors are important to the success of any simulation; in a potentially unsuccessful resuscitation exercise, they are absolute necessities. Therefore, I do not recommend anyone embark lightly on creating a probability-based mock code drill. Rather, consider it an idea for future growth in your program. If you were to use this method, would you have the requisite structure in place to make it successful?

Four Years of Reviewing for Clinical Simulation in Nursing

I started reviewing manuscripts for Clinical Simulation in Nursing in May of 2014. I just checked my records and the manuscript review I completed today is my 50th. That doesn't count the R1 & R2 reviews on the same manuscript, just the initial requests that I've accepted from the editors. That averages out to about one manuscript a month. Most of those manuscripts have gone on to be published, too. Yes, I do recommend rejecting a paper now and then, and a couple of the ones I thought were good were rejected by other reviewers and turned back to the authors for submission to other journals. In my small way, I have helped advance the science of simulation education.

Vaginal Examination Simulation Using Citrus Fruit to Simulate Cervical Dilation and Effacement

Back in 2015, Kathleen Shea and I published an article in Cureus, an online, peer-reviewed medical journal that is indexed in PubMed. In a nutshell, we created a way to use oranges and grapefruit to simulate cervical dilation and effacement. If you are looking for an inexpensive way to teach nursing and medical students what a dilated and effaced cervix feels like, give the article a look. To access the entire article you will need to sign up with Cureus, but registration is free, and the material that’s showing up on Cureus is really very good.

Secretion-based Nasal Obstruction Trainer (SNOT)

This is an idea that’s been around since 2013, but I only recently started thinking it might be worth doing in our simulation center. We are currently creating standardized scenarios for our Pediatrics rotations and examining the cueing that is currently required. We want to determine what explicit cues we are giving to learners, with the hope of finding ways to reduce those cues. Paige and Morin (2013) did a very good review of the use of cues in simulation and found that the number of cues is directly tied to the fidelity of the simulation. (Yes, I know there is a debate going on right now about the use of the word “fidelity” in healthcare simulation, but that’s a topic for another post.) If we see that learners are consistently being cued about snotty noses, that is going to drive my management to want to improve the fidelity around infants with URIs (upper respiratory infections). This trick may well be one way to make our Respiratory Distress scenario better.

Ed Rovera

Paige, J. B., & Morin, K. H. (2013). Simulation Fidelity and Cueing: A Systematic Review of the Literature. Clinical Simulation in Nursing, 9(11), e481–e489.


LLEAP Multi-Column Event Window Display

While testing our new coded scenarios, we noticed that one Instructor Station (IS) computer displayed the events differently from another of our LLEAP computers. The IS in our Pediatrics control room showed only one category expanded, with the others minimized, as in Figure 1. Our Medical/Surgical IS, on the other hand, presented the categories across the screen, as shown in Figure 2.

Figure 1. Single Column Event Display

Figure 2. Multi-Column Event Display

The single-column presentation forces the operator to expand a specific menu to access the needed event, and expanding one menu minimizes the others, so we were constantly clicking on category headers to get to the event we wanted to invoke. We really wanted the expanded event display we had in Med/Surg.

We looked through the various menus and help files and could find nothing that indicated a way to change the Event category display. At that point, we reached out to Laerdal Tech Support. After a few emails back and forth, the technician assigned to the request, Edward Carter, took a long look at the issue and discovered that there WAS NO setting for this; the scenario code itself was determining how the events would be displayed. Digging further, he found a flag inside the code that controlled how the display would look. While he couldn’t be certain, Edward thought it might be an artifact of how the scenario came into being in the first place. He asked whether either of the two sample scenarios we sent him had originally been converted from the old Legacy format. The Med/Surg bay was the first one upgraded, and that was probably where we converted all the scenarios to see how well the converter worked. And since we knew we had coded the Pediatrics scenarios from scratch to apply Hub programming, it made sense that we would see the single-column presentation on that IS. Edward’s idea had some validity, so we proceeded to test his theory.

Edward sent me a workaround that involved editing the XML file inside the scenario program and changing that flag value. Here are his instructions, almost word for word:

  1. Back up your scenario file(s)
  2. Rename the scenario file’s extension from .scx to .zip
  3. Unpack the zip file (simply opening it will usually not allow changes to the file)
  4. Find the Scenario.xml file and open it with Notepad. (We use NotePad++ here at SF State)
  5. Find the string containing “UsingStrictLegacyConversion” and change the value from false to true
  6. Save the file and close it
  7. Re-zip the scenario files/folders (make sure not to include the parent folder in the zip)
  8. Rename the new zip file to an .scx extension
  9. Test the new version

Step 7 means zipping the files IN the parent folder, not zipping the folder itself. To do this, open the folder, select all the files in it, and zip them to a new location. If you zip the entire folder, when you open the scenario in LLEAP it will look like a newly defined, empty scenario with everything set at the default.
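If you have many scenarios to fix, the rename/unzip/edit/re-zip dance can be automated. Here is a small Python sketch of Edward's steps 1–8; since a .scx file is just a zip archive, we can rewrite it in place without ever renaming it. The exact XML syntax of the flag is my assumption (I cover both an attribute and an element form); check your own Scenario.xml and adjust the replaced strings to match what you find there.

```python
import io
import shutil
import zipfile

def set_legacy_flag(scx_path):
    """Flip UsingStrictLegacyConversion from false to true inside a
    LLEAP .scx scenario file (which is an ordinary zip archive)."""
    shutil.copy2(scx_path, scx_path + ".bak")   # Step 1: back up the original
    # Steps 2-4: read every entry out of the archive
    with zipfile.ZipFile(scx_path, "r") as zin:
        entries = [(info, zin.read(info.filename)) for info in zin.infolist()]
    # Steps 5-8: rewrite the archive with the flag flipped in Scenario.xml;
    # writestr() preserves the flat layout, so no parent folder sneaks in
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zout:
        for info, data in entries:
            if info.filename.endswith("Scenario.xml"):
                text = data.decode("utf-8")
                # NOTE: both replacement patterns are assumptions about how
                # the flag is written; match them to your actual file
                for old, new in (
                    ('UsingStrictLegacyConversion="false"',
                     'UsingStrictLegacyConversion="true"'),
                    ('<UsingStrictLegacyConversion>false</UsingStrictLegacyConversion>',
                     '<UsingStrictLegacyConversion>true</UsingStrictLegacyConversion>'),
                ):
                    text = text.replace(old, new)
                data = text.encode("utf-8")
            zout.writestr(info.filename, data)
    with open(scx_path, "wb") as f:
        f.write(buf.getvalue())
```

Step 9 still applies: test the rewritten scenario in LLEAP before deleting the .bak copy.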

Changing that flag inside the XML did the trick. Thanks to Edward Carter’s research on the problem, now all our production scenario programs present the Events window with all event categories expanded. We no longer have to click to open event categories while running the scenario.