If you’re an HR and Benefits professional, you spend a lot of time thinking about the programs you have in place for employees and their families. You did thorough, detailed evaluations of each vendor before choosing the right one for your population. You carefully guided the implementation process. You created and shared educational materials with the right members at the right time. You added these new benefits to your Open Enrollment process and documentation. You did everything you could to ensure each initiative would succeed.
Do you know if your efforts worked? Are your programs improving the health and well-being of your members? Are you reducing healthcare costs? Are your programs delivering what they promised?
These seem like simple questions, but they are actually quite challenging to answer. Measuring program performance can be complex, difficult, and time-consuming. Each program is different, and each member population is unique. While finding the value in each program is undoubtedly a chore, there are a number of ways benefits leaders can make it easier on themselves.
In our latest ebook, we share four best practices for measuring the success of your programs. Today, we’ll look at two tips you can use to hold vendors accountable.
When evaluating program vendors, employers are offered a number of enticing promises. You might see brochures, websites, and case studies that advertise a 20% drop in hyperglycemia, a 70% decrease in diabetes medication, or 20% weight loss for participants. These results are impressive, but it’s crucial to remember that every employee population is different. As they say in those late-night infomercials, “results may vary.”
Employers and their advisors must approach program evaluation with the goal of finding out how a program might work for their members, their business needs, and their unique situation. This starts with a data analysis that examines the demographics, eligibility, and potential participation in the program. Here’s an example from the Artemis Platform.
Portico Benefit Services, an Artemis customer, was evaluating two new mental well-being programs to support their members. They dove into diagnoses to find out how many members would potentially benefit from these vendors, including the current costs for these conditions. This put them on the right track for understanding the potential impact of these programs.
Once they settled on these vendors, Portico worked with them to personalize the programming for their members. They partnered to include online lessons and video modules that are more spiritual and faith-based in nature, which appeal to Portico members, most of whom are rostered members of the clergy. They even included a course on vocational well-being featuring ministers from the same faith community sharing ways to avoid burnout and grow as spiritual leaders.
By personalizing the offering and setting unique goals with the vendor, Portico is seeing great initial results. They are benchmarking engagement against other populations using these same vendors and finding their own employees are engaging for an average of 15 minutes longer with the video and online courses. They are also tracking behavioral health claims, costs, and utilization over time to see how the vendor programs might lead to better well-being outcomes.
This is a great example of an employer setting clear goals alongside their vendors and clearly understanding the ins and outs of the program’s offerings.
If you’re trying to measure the value of a benefits program, you should look at the data feed provided by the vendor, right? Yes, of course, but that’s just the beginning. To truly find the data you need to determine program success, you must look beyond just the vendor’s data and “cross-walk” your data to see how the program is affecting members holistically.
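To make the "cross-walk" idea concrete, here is a minimal sketch of joining a vendor's engagement feed to medical claims by member ID, so program activity can be viewed alongside costs for the same people. All records, field names, and dollar amounts are hypothetical, invented purely for illustration:

```python
# Hedged sketch: "cross-walking" a vendor's data feed with claims data
# by member ID. All data, field names, and amounts are hypothetical.

vendor_feed = [
    {"member_id": "A1", "sessions_completed": 6},
    {"member_id": "B2", "sessions_completed": 1},
]

claims = [
    {"member_id": "A1", "paid_amount": 1200.0},
    {"member_id": "B2", "paid_amount": 4800.0},
    {"member_id": "C3", "paid_amount": 950.0},  # never enrolled in the program
]

# Index the vendor feed by member so each claim row can be matched.
engagement = {row["member_id"]: row["sessions_completed"] for row in vendor_feed}

# Join: each member's claims cost alongside their program engagement,
# defaulting to 0 sessions for members absent from the vendor feed.
crosswalked = [
    {
        "member_id": c["member_id"],
        "paid_amount": c["paid_amount"],
        "sessions_completed": engagement.get(c["member_id"], 0),
    }
    for c in claims
]

for row in crosswalked:
    print(row)
```

The key point of the join is the last column: members who appear in claims but not in the vendor feed still show up, which is exactly the holistic view a vendor-only report cannot give you.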
Let’s look at an example from the Artemis Platform. One of our clients was concerned about member behavioral health. They noted from employee survey data that their members were reporting high levels of stress. Their EAP program data also showed higher utilization in recent months than previously. They wanted to dive deeper on this issue, so we went to work to create an analysis that drew on multiple types of data.
First, we gathered a range of metrics from across these data sources, from medical and prescription claims to survey responses and EAP utilization.
Once we had gathered these metrics, we built “cohorts,” or groups of members, with behavioral health diagnoses of depression and anxiety so we could compare them to members who did not have these conditions. By using multiple, distinct data sources and cohorts to compare members, you build a more accurate picture of what’s really happening. Here’s what we found in this analysis.
You can see that the “In Cohort” group, those with depression or anxiety, missed more work, incurred higher medical and prescription claims costs, were more likely to take disability leave, were more likely to visit the emergency room, and were more likely to suffer from a comorbid musculoskeletal condition. If the analysis had focused just on the EAP program data, we wouldn’t have a complete view of what other factors are contributing to these members’ health and wellness.
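The cohort comparison described above can be sketched in a few lines: split members into an "in cohort" group (a depression or anxiety diagnosis) and a "not in cohort" group, then compare average metrics between the two. All members, diagnoses, and figures below are hypothetical, invented for illustration only:

```python
# Hedged sketch of a cohort comparison: members with a behavioral
# health diagnosis vs. those without. All data are hypothetical.

members = [
    {"member_id": "A1", "diagnoses": {"anxiety"}, "total_cost": 5200.0},
    {"member_id": "B2", "diagnoses": set(), "total_cost": 1100.0},
    {"member_id": "C3", "diagnoses": {"depression", "low_back_pain"}, "total_cost": 7400.0},
    {"member_id": "D4", "diagnoses": set(), "total_cost": 900.0},
]

BH_CONDITIONS = {"depression", "anxiety"}

# Cohort membership: any overlap with the behavioral health condition set.
in_cohort = [m for m in members if m["diagnoses"] & BH_CONDITIONS]
out_cohort = [m for m in members if not (m["diagnoses"] & BH_CONDITIONS)]

def avg(group, key):
    """Average a numeric field across a cohort."""
    return sum(m[key] for m in group) / len(group)

print("avg cost, in cohort:    ", avg(in_cohort, "total_cost"))   # 6300.0
print("avg cost, not in cohort:", avg(out_cohort, "total_cost"))  # 1000.0
```

The same pattern extends to any metric you can attach to a member record, such as absence days, disability claims, or ER visits, which is how a multi-source comparison like the one above is assembled.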
The Artemis customer took this information to the C-suite and justified the need to shore up their EAP offering, evaluate new behavioral health programs, and better support these members. They continue to track this cohort as they roll out new programs and measure their effectiveness.