Spider Impact can now show you how initiatives are affecting your scorecard items. When you connect initiatives and scorecard items as related items, Spider Impact analyzes both current performance and performance trends to determine if the initiative is having a positive, negative, mixed, or no impact.
Overview
When viewing the Overview tab of a scorecard or initiative item, you'll see impact icons next to related items of the opposite type. Hover over these icons to see a quick summary, or click them to view detailed analysis.
When you click an impact icon, Spider Impact shows a detailed analysis breaking down how the initiative is affecting the scorecard item.
Details
The impact analysis looks at two key factors:
Factor 1: Actual Performance Impact
Spider Impact compares the scorecard item's current performance against projections based on pre-initiative data. If there's a statistically significant difference:
- Positive Impact: Performance is better than projected
- Negative Impact: Performance is worse than projected
- No Impact: Any difference is not statistically significant
The software determines actual performance impact by:
- Creating a trend line using data points from before the initiative started
- Extending this trend line to project expected performance
- Comparing actual performance against the projection
- Using a 95% confidence interval to determine if the actual performance difference is statistically significant
If actual performance falls outside the confidence interval, the difference is considered significant, and its direction determines whether the impact is positive or negative.
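The four steps above can be sketched in Python. This is an illustrative approximation, not Spider Impact's actual code: it fits a least-squares trend line to the pre-initiative data, projects it forward with a standard prediction interval, and assumes higher values are better. The `performance_impact` function and its signature are invented for this sketch.

```python
import numpy as np
from scipy import stats

def performance_impact(pre_x, pre_y, post_x, post_y, confidence=0.95):
    """Classify each post-initiative point against a trend line fit to
    pre-initiative data, using a prediction interval at the given confidence.
    Assumes higher values are better (illustrative sketch only)."""
    pre_x = np.asarray(pre_x, dtype=float)
    pre_y = np.asarray(pre_y, dtype=float)
    n = len(pre_x)

    # Steps 1-2: fit the pre-initiative trend line and extend it forward
    fit = stats.linregress(pre_x, pre_y)

    # Residual standard error of the pre-initiative fit
    resid = pre_y - (fit.slope * pre_x + fit.intercept)
    s_err = np.sqrt(np.sum(resid ** 2) / (n - 2))
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 2)

    results = []
    for x, y in zip(post_x, post_y):
        projected = fit.slope * x + fit.intercept
        # Half-width of the prediction interval at x
        half = t_crit * s_err * np.sqrt(
            1 + 1 / n + (x - pre_x.mean()) ** 2
            / np.sum((pre_x - pre_x.mean()) ** 2)
        )
        # Steps 3-4: compare actual performance against the projection
        if y > projected + half:
            results.append("positive")
        elif y < projected - half:
            results.append("negative")
        else:
            results.append("none")
    return results
```

Points inside the interval come back as "none" because the deviation could plausibly be noise in the pre-initiative trend.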
Factor 2: Performance Trend Impact
Spider Impact analyzes how the rate of change has shifted by comparing two trend lines:
- The trend line from before the initiative started
- The trend line since the initiative began
If there's a statistically significant change in the trend line angles:
- Positive Impact: The trend has improved
- Negative Impact: The trend has worsened
- No Impact: Any change is not statistically significant
The software analyzes performance trends by creating two separate trend lines using linear least squares regression: one for data before the initiative started and the other for data after the initiative started. It then compares the slopes of these trend lines using a p-value threshold of 0.1 to determine if differences are statistically significant.
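The documentation specifies the two regressions and the 0.1 p-value threshold but not the exact test, so the sketch below uses a common choice: a two-sided t-test on the difference between two independent regression slopes. The function name and the `higher_is_better` parameter are assumptions for illustration.

```python
from scipy import stats

def trend_impact(pre_x, pre_y, post_x, post_y, alpha=0.1, higher_is_better=True):
    """Compare the slopes of the pre- and post-initiative trend lines
    (illustrative sketch, not Spider Impact's actual implementation)."""
    pre = stats.linregress(pre_x, pre_y)
    post = stats.linregress(post_x, post_y)

    # t statistic for the difference between two independent slopes
    t_stat = (post.slope - pre.slope) / ((pre.stderr ** 2 + post.stderr ** 2) ** 0.5)
    df = len(pre_x) + len(post_x) - 4  # two parameters estimated per line
    p_value = 2 * stats.t.sf(abs(t_stat), df)

    if p_value >= alpha:  # not significant at the 0.1 threshold
        return "none"
    improved = (post.slope > pre.slope) == higher_is_better
    return "positive" if improved else "negative"
```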
Combined Impact Results
Spider Impact combines the actual performance and trend impacts into an overall impact using these rules:
- Two positives = Positive Impact
- Two negatives = Negative Impact
- Positive + Negative = Mixed Impact
- Two "no impacts" = No Impact
- No Impact + Positive = Positive Impact
- No Impact + Negative = Negative Impact
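The combination rules above reduce to a few lines of Python (a sketch; the function name and string values are illustrative):

```python
def combined_impact(performance, trend):
    """Combine the two factor results ('positive', 'negative', or 'none')
    into an overall impact, per the rules above."""
    factors = {performance, trend}
    if "positive" in factors and "negative" in factors:
        return "Mixed Impact"
    if "positive" in factors:
        return "Positive Impact"
    if "negative" in factors:
        return "Negative Impact"
    return "No Impact"
```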
Data Requirements
For accurate analysis, Spider Impact requires at least 3 data points before the initiative start date and at least 3 data points after the initiative start date. When the scorecard item is a KPI, the software uses its assigned calendar, and when it's not a KPI, the software uses the smallest standard calendar defined in administration.
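The minimum-data rule can be expressed as a simple guard (a sketch; the function name is illustrative):

```python
def has_enough_data(pre_points, post_points, minimum=3):
    """Check the documented minimum: at least 3 data points on each side
    of the initiative start date."""
    return len(pre_points) >= minimum and len(post_points) >= minimum
```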
Real-World Example
Sometimes initiatives can show seemingly contradictory effects. For example, an IT infrastructure initiative might cause:
- A one-time spike in costs (negative performance impact)
- A long-term reduction in the rate of cost increases (positive trend impact)
This would result in a "Mixed Impact" overall, helping you understand both the short-term costs and long-term benefits of your initiative.
Tips for Using Impact Analysis
- Regular Updates: Keep your scorecard data current for the most accurate analysis
- Timeline Consideration: Allow enough time after initiative start for meaningful trend analysis
- Context Matters: Use impact analysis alongside other metrics for complete understanding
- Statistical Significance: Remember that small changes may show as "No Impact" if not statistically significant