INSPIRE 2022 Funding Request for Proposals

INSPIRE is excited to announce broad funding opportunities for investigators in pediatric simulation-based research.


The International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE) was established in 2011 from a group of simulation-based pediatric researchers from a variety of disciplines looking to improve collaboration, mentorship, and productivity. Our mission is to improve the delivery of medical care to acutely ill children and ultimately improve survival from acute illness in the pediatric population.

INSPIRE is seeking studies that are innovative and have strong potential for a positive impact on healthcare delivery.

We are announcing the following awards:

  • INSPIRE Research Award: One proposal for broad research will be funded for a two-year duration with a maximum budget of $20,000 USD total.

  • INSPIRE Novice and LMIC Investigator Awards: Up to two proposals focused on smaller research projects by novice investigators and investigators from low- and middle-income countries will be funded for a two-year duration with a maximum budget of $7,000 USD each.

These projects must be simulation-based and have strong potential for a positive impact on healthcare delivery processes and outcomes. Award proposals are due by 11:59 pm EDT on September 30, 2022.
 
Thank you,

On Behalf of the INSPIRE Awards/Funding Committee

 

 

 


Showing a Return on Investment for a Simulation Program with Outdated, Unsupported Equipment and Zero Budget

A friend and fellow simulationist posed the following four questions to me. After spending the better part of the morning preparing my response to him, I thought there might be others who could benefit from this.

  1. What is the answer for ROI when everything is at zero?
    There’s an old saying: “There’s always a cost; there’s not always a value.” In other words, the cost is never zero. Your time doing anything has to be compensated, so -- if all other aspects are provided for nothing -- you are the cost.

  2. Should the student hours in the lab be tracked?
    Yes, and whatever tangible assets you have should be tracked as well. And staff are tangible assets!

  3. If the student pays x dollars for y hours per semester and the student spends z hours in the lab, do we then get the hours in the lab represented in dollars?
    Nice idea, but it isn’t always easy to determine what the “x dollars” is. Perhaps you are fortunate to work where simulation activities are added into specific course costs, but in my experience, this is rarely the case. So, how do you actually calculate what the student pays per hour of instruction? If you have your simulations as part of a clinical course, you can say the tuition for that course is the price they pay, but what about lab fees, insurance costs, materials the student pays for to use in clinical/simulation? What other costs might they incur for a course?

  4. If there is no money spent, is the return on investment all profit?
    There is money spent. See #1 above. The scenario had to be written and the setting created. That has a cost. Whatever is in the simulation had to be obtained somehow and at some expense. Don’t know what something cost back when it was purchased or donated? Look at the cost of replacing the “free” equipment should it fail. Apply the cost of a replacement to the one you have and treat it as if you had bought it for that price. What about the spaces where you run your simulations? Are you paying for the space? If you weren’t doing a simulation in that space, what would it be used for? Would the organization be able to generate revenue from that space during the hours you are in there simulating? If so, the cost of the simulation space is what the company didn’t make off that space because you were in there. And finally, there is always what you cost as an employee. There is no such thing as a free lunch or a free simulation.

So, in answering the above questions I have raised even more questions. But you can determine your simulation costs. You just do it using what my friend, Eddie Luevano1, calls the Mythical 2080. It’s mythical because it really doesn’t exist, but it is still a useful construct. Assuming you could run simulations 8 hours a day, every day of the work week, for each week of the year, you could simulate a total of 2080 hours per year (8*5*52=2080). This is not practical or even possible, yet it is the maximum number of hours of simulation per year using a 40-hour work week. Using this theoretical upper bound, if you are paid annually, your compensation can be divided by 2080 to get your hourly wage. For example, if your annual salary is $80,000/year, your hourly wage is $38.46/hour. Now, every hour you are involved in simulation can be costed at $38.46. This includes all setup time, execution time -- including prebriefs and debriefs, if you are involved in those aspects -- and teardown time.

Your equipment can be costed out the same way. A $5,000 IV pump has a useful life of, say, five years. Five years at 2080 hours per year is 10,400 hours. Over five years, $5,000 / 10,400 = $0.48 per hour, if you used that pump for every one of those hours over all five years. But you won’t do that, will you? No, you’ll use it maybe three times a week for four two-hour sessions during a semester. If we assume that you will use it for 12 of the 15 weeks of that semester, and you have two semesters per year, that’s 576 hours of use per year, not the Mythical 2080 hours. Over five years, that’s 2,880 hours of use (3*4*2*12*2=576; 576*5=2,880). Now, your $5,000 pump is costing you about $1.74 per hour of use.
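
If it helps to see the Mythical 2080 arithmetic in one place, here is a minimal sketch; the function names are mine, and the salary, pump price, and usage figures are just the examples from this post, so substitute your own.

    # Mythical 2080: the theoretical maximum simulation hours in a year of
    # 40-hour work weeks (8 hours * 5 days * 52 weeks)
    MYTHICAL_2080 = 8 * 5 * 52

    def hourly_wage(annual_salary: float) -> float:
        """Staff cost per hour, spreading an annual salary over the Mythical 2080."""
        return annual_salary / MYTHICAL_2080

    def equipment_hourly_cost(price: float, useful_life_years: float,
                              hours_used_per_year: float) -> float:
        """Cost per hour of actual use over a piece of equipment's useful life."""
        return price / (hours_used_per_year * useful_life_years)

    print(round(hourly_wage(80_000), 2))                   # 38.46 dollars/hour
    # 3 days/week * four 2-hour sessions * 12 weeks * 2 semesters = 576 hours/year
    print(round(equipment_hourly_cost(5_000, 5, 576), 2))  # 1.74 dollars/hour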

Okay, now you can see how to determine the cost of a simulation: add up the hourly cost of the staff, equipment, and space (if you have to pay for that space) involved in a session and multiply that sum by the number of hours you use them, including setup and teardown. That’s what your simulation program is costing.
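
As a rough sketch of that tally, suppose a two-hour scenario takes one extra hour of setup and teardown; the operator salary, manikin, and room rates below are invented for illustration (only the facilitator and IV pump rates come from the calculations above).

    # Hypothetical 2-hour session plus 1 hour of setup/teardown
    session_hours = 3

    hourly_rates = {
        "facilitator": 38.46,  # from the Mythical 2080 salary example above
        "operator":    28.85,  # assumed $60,000/year staff member
        "iv_pump":      1.74,  # from the equipment example above
        "manikin":     12.50,  # assumed, costed with the same useful-life method
        "room":        25.00,  # assumed revenue the space could otherwise earn
    }

    session_cost = session_hours * sum(hourly_rates.values())
    print(round(session_cost, 2))   # 319.65 dollars for this one session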

You now have to add in the learners to your cost calculations. Often, the best you can do for your organization is just define the cost factors and determine how best to delineate the activities. For our shop, we settled on defining a simulation session as the same set of learners doing the same set of activities. Thus, a session could be two students in a room with a manikin while a facilitator and an operator observed, or it could be eight students in two rooms – a bay and an observation room – with only a facilitator observing. It could be manikin-based or an SP experience. It could also be skills training and have only an instructor and, say, four students participating. The definition is based on the students involved and the activity or activities being the same. If in a single day we do the same activities twice with different students, that’s two simulation sessions logged for that day.

Once you can say what a simulation is and how many learners are in that session, you can figure out what costs to apply to that session and then divide that by the number of learners to get a per-learner cost. Per-learner costing is simply counting “butts in the seats”. For example, if you ran 10 sessions in a week with eight learners per session, then you served 80 learners. These are called learner encounters (LE). It doesn’t matter if you had 80 different learners that week or you had 40 learners and had them each in twice that week. Still a total of 80 LE.

Another way to look at costing is by learner contact hours (LCH). Here, you don’t just count the learners in the rooms, you count how long they are in those rooms. Taking our earlier example, let’s assume that in six of the ten sessions you had four learners in each session and eight students in each of the four remaining sessions. Let’s also assume that the sessions with four students were each six hours and the sessions with eight students were two hours long. Doing the math:  4 students * 6 hours * 6 sessions = 144 LCH; 8 students * 2 hours * 4 sessions = 64 LCH; 144 + 64 = 208 LCH. Using LCHs makes more sense in academia because counting hours of instruction times number of students is how schools measure instructor workload.
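
Here is the LE and LCH arithmetic from that ten-session example as a short sketch; the all-in hourly rate at the end is the made-up $106.55 from the session-cost sketch above, included only to show how a dollar cost per LCH falls out.

    # (learners per session, hours per session, number of sessions)
    schedule = [
        (4, 6, 6),  # six sessions, four learners each, six hours long
        (8, 2, 4),  # four sessions, eight learners each, two hours long
    ]

    learner_encounters = sum(learners * sessions
                             for learners, _, sessions in schedule)
    learner_contact_hours = sum(learners * hours * sessions
                                for learners, hours, sessions in schedule)
    print(learner_encounters)      # 56 LE
    print(learner_contact_hours)   # 208 LCH

    # Assumed all-in cost of staff, equipment, and space per simulation hour
    hourly_rate = 106.55
    total_hours = sum(hours * sessions for _, hours, sessions in schedule)  # 44
    print(round(hourly_rate * total_hours / learner_contact_hours, 2))      # 22.54 per LCH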

But what does all that mean in terms of value to learners? In simple terms, not much. The value is what the learners get out of simulation, which is often very hard to measure with any reasonable reliability and accuracy. Using the Mythical 2080 you can calculate the hourly staff costs and an approximate cost per hour of any given piece of equipment or of the space being used during any given simulation session. Dividing that total session cost by the number of LCHs provided in that session will give you a dollar cost per LCH. That will make your management happy, but it doesn’t say squat about the value provided by those dollars. For that, you would need to look to something like the Kirkpatrick Model and/or one of the many simulation evaluation instruments out there.

If you are lucky enough to have a “before simulation” expense model, you can then compare it to an “after simulation” model. Let’s say your hospital has a given issue, like in-hospital falls, that results in two additional days of hospitalization on average. After training the staff in simulation to avoid patient falls, the number of falls decreases by 50% over a six-month period. This would indicate that the simulation training on fall risk mitigation has resulted in fewer extended patient stays, with a reduction in the costs associated with those extended stays. If you know what it costs to keep a person in the hospital for a day, then you can say the value of your simulation training is equal to the money saved by not having as many patients hospitalized after having fallen. The reduction in hospital costs minus the cost of the training equals the value of that training.
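
Under that example’s assumptions, the comparison reduces to a few lines; the fall counts, cost per day, and training cost below are invented for illustration (only the two extra days and the 50% reduction come from the example).

    # Hypothetical before/after figures for the falls example
    falls_before = 40              # falls in the six months before training
    falls_after = 20               # the 50% reduction after training
    extra_days_per_fall = 2        # average additional hospital days per fall
    cost_per_hospital_day = 2_500  # assumed cost of one inpatient day
    training_cost = 30_000         # assumed cost of the fall-prevention training

    savings = (falls_before - falls_after) * extra_days_per_fall * cost_per_hospital_day
    value_of_training = savings - training_cost
    print(savings)             # 100000 saved in avoided hospital days
    print(value_of_training)   # 70000 net value of the training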

Okay, that’s my take on how to quantify costs when discussing Return on Investment (ROI) and a little on determining the value of the investment. Another factor, one we don’t always consider, is the Return on Expectations (ROE). What do stakeholders expect of your simulation program and are those expectations being met? That’s a topic for another post.

Bottom Line on ROI:

  • Determine who your learners are and how you serve them.

  • Determine what tangible assets -- people, equipment, spaces, etc. -- you use to serve your learners.

  • Determine the cost of each tangible asset in the same units you use to serve your learners. (Hint: This is usually going to be measured in hours.)

  • Report your results in costs per LE and LCH, preferably separated into the types of simulation activities you provide.

1Eddie Luevano is Associate Director of Administration at the Training and Educational Center for Healthcare Simulation (TECHS), which is part of the Texas Tech University Health Sciences Center, El Paso. (See https://elpaso.ttuhsc.edu/TECHS/Staff.aspx.)

 

Ed Rovera

 

 

Manikin Death: Can We Do It Right?

I have been interested in the topic of preparing students for a patient death ever since we used to run an unsuccessful mock code with nursing students on the last day of their pediatric simulation rotation. This was done at the request of one of the pediatric faculty, back in the days when we didn’t understand much about simulation education and training. She had come up through the ranks as a pediatric oncology nurse and wanted the students’ first patient death to happen in a controlled (as much as we could control things back then) environment, so she asked that they do the activity and then experience the sense of loss. I did get her to accept my telling one student in the group ahead of time that the code was going to be unsuccessful. I felt this was necessary so the group wouldn’t feel they had “failed” and that we were simply not being honest with them about their incompetence. Having one of their own support us when we told them the truth was critical to regaining their trust on more than one occasion.

We have “evolved” since then; we no longer do any death scenarios with nursing students, but the memories and emotions of those exercises have stayed with me. Off and on over the years I’ve looked at the literature on the topic of simulation and training for death. While I am no expert on the matter, I have come to believe we are doing students a disservice by either (a) ignoring the subject of death in their training, (b) treating death as a certainty only in the elderly, or -- in the case of resuscitation drills -- (c) making patient death equal to provider failure.

At IMSH 2016, I sat in on a panel discussion on the “death of the manikin.” The panelists were experts in the field of medical simulation and, while they all agreed that manikin death was reasonable in end-of-life scenarios where the learners knew what was going to happen, they split into two camps when it came to learner performance in mock code drills. For the pro-death (YES) panelists, the manikin should be allowed to die if the actions of the learners warranted it. The panelists with the opposite position (NO) felt the scenario should be stopped prior to flat-line, because learning essentially stops or cannot proceed once emotions have taken over one’s intellect. The members of the YES group were adamant that if you were going to learn to run codes correctly, you had to suffer the consequences of not doing so. Likewise, the NO group felt that when a code scenario was going south it should be stopped so people could discuss it rationally, without the emotional burden of having watched the patient “die”.

Neither side in that debate considered the probability of a lack of success in a mock code. I know this because I asked this during the Q&A and was met with rather puzzled looks from both camps.

The probability of success in an in-hospital resuscitation is between 35% and 45%, depending upon whose numbers you use. If these estimates are accurate, more than half of in-hospital code situations are unsuccessful. Does that mean more than half of the code teams in the world are failing to perform resuscitations adequately? I don’t think so. I think sometimes you can do everything right and the patient is still not going to survive.

I believe there is a better way to handle the issue of death in training healthcare providers. What if we set up our mock code drills so that the probability of success was factored in? Think of a simulation where you have ten envelopes, each containing an index card. Four cards are printed with “Successful” and six with “Unsuccessful.” This approximates the real probability of a successful resuscitation. During the briefing, after the case has been laid out for the learners, ask one of the students to select one of the ten envelopes and hand it to the manikin operator. Then tell them the truth: they have a four in ten probability of success. Six out of ten times, death is going to happen no matter what they do. They are to do their best and practice what they have learned. Whatever happens will happen; either way, it will all be discussed in the debrief.

Based upon what’s on that card, the operator will follow a specific flow. Regardless of the actions of the learners during the scenario, the operator follows that flow to its conclusion unless directed by the facilitator to alter it. If the designated path is for success and a serious error is made, the facilitator could direct the operator to switch to the unsuccessful path. All things being equal, though, if the plan is for the patient to survive, the patient would survive. Similarly, if the designated path is unsuccessful but the participants are doing a great job on all the learning objectives, the facilitator can direct the operator to move to the successful path.

Only at the end of the debrief do the learners see what was on the card. By that point, the debrief should have covered any serious mistakes made during the exercise. The participants would understand what they did incorrectly and not be surprised to see that the patient should have survived. Similarly, after going over the events and working through any possible shortcomings in performance, if the card showed “Unsuccessful” the participants would be able to see that the lack of a successful resuscitation was preordained, not necessarily caused by anything they did or did not do.
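
If it helps to see the envelope logic laid out, here is a minimal sketch; the function and flag names are mine, not part of any published curriculum, and the card mix matches the roughly 40% success rate discussed above.

    import random

    # Ten envelopes: four "Successful" cards and six "Unsuccessful" cards,
    # approximating the real-world in-hospital resuscitation success rate
    cards = ["Successful"] * 4 + ["Unsuccessful"] * 6

    def run_mock_code(serious_error: bool, exceptional_performance: bool) -> str:
        outcome = random.choice(cards)   # the student's blind envelope draw
        # Facilitator overrides, as described above
        if outcome == "Successful" and serious_error:
            outcome = "Unsuccessful"
        elif outcome == "Unsuccessful" and exceptional_performance:
            outcome = "Successful"
        return outcome   # revealed to the learners only at the end of the debrief

    print(run_mock_code(serious_error=False, exceptional_performance=False))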

My idealized scenario makes several assumptions that may not be true. First, it assumes the facilitator would be able to catch a serious error or remarkable actions during the mock code and dispassionately direct the switch in the flow. Second, it assumes a level of debriefing expertise in the facilitator that may not exist. Third, it assumes the participants are sufficiently prepared and have enough expertise and maturity to objectively evaluate their actions. These are important factors in the success of any simulation; in a potentially unsuccessful resuscitation exercise, they are absolute necessities. Therefore, I do not recommend anyone embark lightly on creating a probability-based mock code drill. Rather, consider it as an idea for future growth in your program. If you were to use this method, would you have the requisite structure in place to make it successful?

Four Years of Reviewing for Clinical Simulation in Nursing

I started reviewing manuscripts for Clinical Simulation in Nursing in May of 2014. I just checked my records and the manuscript review I completed today is my 50th. That doesn't count the R1 & R2 reviews on the same manuscript, just the initial requests that I've accepted from the editors. That averages out to about one manuscript a month. Most of those manuscripts have gone on to be published, too. Yes, I do recommend rejecting a paper now and then, and a couple of the ones I thought were good were rejected by other reviewers and turned back to the authors for submission to other journals. In my small way, I have helped advance the science of simulation education.


Vaginal Examination Simulation Using Citrus Fruit to Simulate Cervical Dilation and Effacement

Back in 2015, Kathleen Shea and I published an article in Cureus, an online, peer-reviewed medical journal that is indexed in PubMed. The article can be found at http://bit.ly/1EvOjZh. In a nutshell, we created a way to use oranges and grapefruit to simulate cervical dilation and effacement. If you are looking for an inexpensive way to teach nursing and medical students what a dilated and effaced cervix feels like, give the article a look. To access the entire article you will need to sign up with Cureus, but it is free and the information showing up on Cureus is really very good.

Secretion-based Nasal Obstruction Trainer (SNOT)

This is an idea that’s been around since 2013, but I just recently started thinking it might be worth doing in our simulation center. We are creating standardized scenarios for our Pediatrics rotations and looking at the cueing currently being required. We want to determine what explicit cues we are giving to learners, with the hope of finding ways to reduce those cues. Paige and Morin (2013) did a very good review of the use of cues in simulation and found that the number of cues is directly tied to the fidelity of the simulation. (Yes, I know there is a debate going on right now about the use of the word “fidelity” in healthcare simulation, but that’s a topic for another post.) If we see that learners are consistently being cued about snotty noses, that’s going to drive my management to want to improve the fidelity around infants with URIs (upper respiratory infections). This trick may well be one way to make our Respiratory Distress scenario better.

https://www.jumpsimulation.org/research-innovation/our-blog/2013/march/building-a-nasal-secretions-simulator

Ed Rovera
References

Paige, J. B., & Morin, K. H. (2013). Simulation Fidelity and Cueing: A Systematic Review of the Literature. Clinical Simulation in Nursing, 9(11), e481–e489. https://doi.org/10.1016/j.ecns.2013.01.001

 

LLEAP Multi-Column Event Window Display

While testing our new coded scenarios, we noticed that one Instructor Station (IS) computer displayed the events differently from another of our LLEAP computers. The IS in our Pediatrics control room showed only one category expanded, with the others minimized, as in Figure 1. Our Medical/Surgical IS, on the other hand, presented the categories across the screen, as shown in Figure 2.

Figure 1. Single Column Event Display

Figure 2. Multi-Column Event Display

The single-column presentation forced the operator to expand a specific menu to access the needed event, and expanding one menu minimized the others, so we were constantly clicking on category headers to get to the event we wanted to invoke. We really wanted the expanded event display we had in Med/Surg.

We looked through the various menus and help files and could find nothing that indicated a way to change the Event category display. At that point, we reached out to Laerdal Tech Support. After a few emails back and forth, the technician assigned to the request, Edward Carter, took a long look at the issue and discovered that there WAS NO setting for this; the scenario code itself was determining how the events would be displayed. He dug into it further and found a flag inside the code that controlled how the display would look. While he couldn’t be certain, Edward thought it might be an artifact of how the scenario came into being in the first place. He asked whether either of the two sample scenarios we sent him had originally been converted from the old Legacy format. The Med/Surg bay was the first one upgraded, and that was probably where we converted all the scenarios to see how well the converter worked. And since we knew we had coded the Pediatrics scenarios from scratch to apply Hub programming, it made sense that we would see the single-column presentation on that IS. Thus, Edward’s idea had some validity, and we proceeded to test his theory.

Edward sent me a workaround that involved editing the XML file inside the scenario program and changing that flag value. Here are his instructions, almost word for word:

  1. Back up your scenario file(s)
  2. Rename the scenario file’s extension from .scx to .zip
  3. Unpack the zip file (simply opening it will usually not allow changes to the file)
  4. Find the Scenario.xml file and open it with Notepad. (We use NotePad++ here at SF State)
  5. Find the string containing “UsingStrictLegacyConversion” and change the value from false to true
  6. Save the file and close it
  7. Re-zip the scenario files/folders (make sure not to include the parent folder in the zip)
  8. Rename the new zip file to an .scx extension
  9. Test the new version

Step 7 means zipping the files IN the parent folder, not zipping the entire folder. To do this, open the folder, select all the files in it, and zip them to a new location. If you zip the entire folder instead, when you open the scenario in LLEAP the result looks like a newly defined, empty scenario with everything set to the defaults.
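
For anyone who has to apply this workaround to a stack of scenario files, here is a minimal sketch that automates the steps above. It is not a Laerdal-supplied tool; it assumes the flag appears in Scenario.xml as a false value immediately after the string “UsingStrictLegacyConversion”, so check one of your own files before trusting the pattern, and keep the .bak copy it creates.

    import re
    import shutil
    import zipfile
    from pathlib import Path

    def enable_strict_legacy_conversion(scx_path: str) -> None:
        src = Path(scx_path)
        shutil.copy2(src, Path(str(src) + ".bak"))      # step 1: back up the scenario

        tmp = src.parent / (src.stem + "_unpacked")     # steps 2-3: an .scx is a zip
        with zipfile.ZipFile(src) as zf:                # archive, so unpack it directly
            zf.extractall(tmp)

        xml_file = tmp / "Scenario.xml"                 # steps 4-6: flip the flag
        text = xml_file.read_text(encoding="utf-8")
        text = re.sub(r'(UsingStrictLegacyConversion\W{0,4})false', r'\1true',
                      text, count=1, flags=re.IGNORECASE)
        xml_file.write_text(text, encoding="utf-8")

        # steps 7-9: re-zip the files *inside* the unpacked folder (not the folder
        # itself) back under the original .scx name, then test it in LLEAP
        with zipfile.ZipFile(src, "w", zipfile.ZIP_DEFLATED) as zf:
            for f in sorted(tmp.rglob("*")):
                if f.is_file():
                    zf.write(f, f.relative_to(tmp))
        shutil.rmtree(tmp)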

Changing that flag inside the XML did the trick. Thanks to Edward Carter’s research on the problem, all our production scenario programs now present the Events window with all event categories expanded. We no longer have to click open event categories while running a scenario.