Composite variable
A composite variable combines several outcome variables into one in order to increase the number of events and reduce the required sample size.
How many times have we heard the saying that two (or more) heads are better than one, or others like it? The advantages of working as a team seem beyond dispute; almost nobody denies them.
But teamwork has its dark side: within any group there is usually a diversity of characters that can disturb the harmony that should reign in the team. If these differences are big enough, they can ruin the collaboration and make individual effort more advantageous than collective effort.
The same reasoning applies to outcome variables in scientific studies. Sometimes, usually to increase the number of events observed during the study and thus reduce the sample size that would otherwise be needed, researchers combine several outcome variables into a single endpoint, called a composite outcome variable.
An example of a composite variable
A common example is the composite variable MACE (major adverse cardiovascular events), widely used in cardiology, which typically includes death, myocardial infarction, stroke, coronary revascularization, restenosis, recurrent ischemia, and re-hospitalization. By counting an event whenever any of these situations occurs, we record more events than we would if we waited for each of them separately, so the sample size required to detect significant differences is smaller.
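To get a feel for the numbers, here is a minimal sketch (not from the original post) using the standard normal-approximation formula for comparing two proportions; the event rates and the 25% relative risk reduction are purely hypothetical.

```python
# Minimal sketch: sample size per arm for comparing two proportions.
# The event rates below are hypothetical, chosen only for illustration.
from scipy.stats import norm

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_treatment) ** 2

# Both comparisons assume a 25% relative risk reduction.
print(round(n_per_group(0.04, 0.03)))  # a hard endpoint alone: ~5300 per arm
print(round(n_per_group(0.20, 0.15)))  # a MACE-style composite: ~900 per arm
```

With the same relative effect, the rarer endpoint needs several times more patients per arm, which is precisely the pressure that pushes researchers towards composites.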
The problem with composite variables is that, in many cases, they are used more often than necessary, or without fulfilling the requirements for their proper use. Moreover, the components of the variable often vary from trial to trial, which greatly contributes to heterogeneity between studies and can hinder reaching overall conclusions.
Pros and cons
Ideally, all components should be interchangeable and meet three conditions. First, they should have more or less the same importance from a clinical point of view. Second, they should occur with similar frequency. And third, they should have a similar sensitivity to the treatment.
However, these conditions are practically never met. It is common to combine hard, infrequent variables (the hardest of all being mortality) with less severe but more frequent ones. The consequence is that the more frequent (and therefore less severe) outcomes usually drive the frequency of the composite variable and, with it, the sensitivity to detect the effect of the intervention, as the toy example below illustrates.
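Here is a toy numerical sketch (with invented event rates, not data from any trial) that makes this concrete: the most frequent, least severe component ends up carrying the composite.

```python
# Invented event rates for a hypothetical trial (not real data).
control = {"death": 0.02, "infarction": 0.04, "revascularization": 0.14}
treated = {"death": 0.02, "infarction": 0.04, "revascularization": 0.09}

# Crude approximation: composite rate ~ sum of component rates
# (ignoring patients who suffer more than one event).
print(round(sum(control.values()), 2))  # 0.20
print(round(sum(treated.values()), 2))  # 0.15
# The composite falls from 20% to 15% even though death and infarction are
# unchanged: the apparent benefit is carried entirely by revascularization.
```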
Something else to avoid is combining variables that respond to different pathophysiological mechanisms, or building a composite whose components span a wide gradient of clinical significance or of susceptibility to treatment.
This becomes very apparent when safety outcomes are combined with efficacy outcomes to assess a net clinical benefit. If the intervention is very effective, the net assessment may mask the existence of adverse effects, especially if the various components of the composite variable are not weighted.
Combining safety and efficacy in the same outcome variable poses an additional problem in non-inferiority studies: an ineffective but very safe procedure may appear as good as, or better than, the intervention in the comparison group.
Finally, heterogeneity in how the various components respond to the intervention can also make the results hard to interpret. For example, a positive effect on the components of minor clinical relevance may coexist with a negative effect on the most important ones (such as mortality). In that case, the net effect may appear beneficial because the milder components are more frequent, even though the negative effects on the more severe, but much less frequent, components are more relevant.
One solution may be to assign a weight to each component of the variable. The problem is that weighting is fairly subjective and can also decrease the power of the study, so the theoretical benefit of using a composite variable is lost.
So what can we do? Give up composite variables altogether? I don't think that is necessary. When we want to use them, we can follow some recommendations to do so with maximum guarantees: 1) justify the validity of each individual component; 2) avoid components of little clinical importance; 3) do not include components unlikely to be influenced by the intervention; 4) report the composite variable and each of its components separately; 5) assess the heterogeneity of the treatment effect across the components; 6) weight the components according to their clinical importance; and 7) perform a sensitivity analysis to see how the weighting of the components affects the results (points 6 and 7 are sketched below).
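As an illustration of points 6 and 7, here is a minimal sketch with invented per-component risk differences (treatment minus control, so negative values favour the treatment) showing how a weighting scheme and a sensitivity analysis over the weights might look.

```python
# Invented absolute risk differences per component (negative = benefit).
risk_difference = {"death": 0.004, "infarction": 0.000, "revascularization": -0.030}

def weighted_net_effect(effects, weights):
    """Weighted sum of per-component risk differences."""
    return sum(weights[k] * effects[k] for k in effects)

# Unweighted composite (every component counts the same): looks beneficial.
equal_weights = {k: 1 for k in risk_difference}
print(round(weighted_net_effect(risk_difference, equal_weights), 3))  # -0.026

# Sensitivity analysis over hypothetical clinical weights that penalize death more.
for death_weight in (1, 5, 10):
    weights = {"death": death_weight, "infarction": 2, "revascularization": 1}
    print(death_weight, round(weighted_net_effect(risk_difference, weights), 3))
# Output: 1 -0.026 / 5 -0.01 / 10 0.01
# With death weighted ten times more than revascularization, the net effect
# turns harmful, so the conclusion depends heavily on the chosen weights.
```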
We’re leaving…
And with that we are done for today. We have had the opportunity to think about just one of the many tools that are often used incorrectly in papers published in scientific journals, even those with high impact. And it is not the only one: there are many more that are misused to obtain the desired results. The case of multiple subgroup comparisons comes to mind, for example. But that's another story…