Risk Register

The following describes the risks to this project, their likelihood and impact, and the procedures for mitigating them.

For each risk, the likelihood of it occurring is rated on a scale of 1 (very unlikely) to 10 (certain to happen), and its impact is rated on a scale of 1 (minor impact), 2 (significant impact) or 3 (total wipe-out of the project). Risk is calculated as the product of the two numbers.

A risk mitigation strategy is provided when the risk is greater than or equal to 10 or where the impact is 3.
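
As a worked illustration of the scoring rule above, the following minimal sketch (Python; the function name and structure are ours for illustration, not part of the MAGEEC tooling) scores a risk and checks whether a mitigation strategy is required.

 # Score a risk as likelihood x impact and flag it for a mitigation strategy
 # when the score is 10 or more, or when the impact is 3 (per the rules above).
 def needs_mitigation(likelihood, impact):
     risk = likelihood * impact
     return risk >= 10 or impact == 3
 
 # Example: the WP4 data-gathering risk below (likelihood 5, impact 2) scores
 # exactly 10, so a mitigation strategy is required.
 print(needs_mitigation(5, 2))   # True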

Work Package | Description | Likelihood | Impact | Risk | Mitigation
WP4 | Unable to find sufficient benchmarks, test programs and case studies | 4 | 2 | 8 | BEEBS v2 provides 80 tests and some data variants. Per-function data gathering should mean this is effectively a much larger data set. Three credible real-world case studies identified. Likelihood still left at 4, to reflect doubts over per-function gathering and the desire for even more tests.
WP4 | Data Gather phase taking too long | 5 | 2 | 10 | Software and board reliability issues now resolved. However, the number of tests remains very large. Mitigation through Plackett-Burman experimental design (illustrated in the first sketch after this table) and use of large numbers of target platforms in parallel.
All | Milestones not met | 10 | 1 | 10 | Milestones 6/3, 6/4, 7/1 and 7/2 very unlikely to be met (see milestones for details). Impact reduced to 1, since this relates to a non-core aspect of the project. Mitigations are listed against specific issues in this risk register and the project plan.
WP7 | The compiler infrastructure does not work for energy | 3 | 3 | 9 | The system would be reconfigured for speed minimisation, based on experience that a) this is possible (MILEPOST) and b) speed is a rough proxy for energy. However, this would not represent as significant an advance on current knowledge, although it would make that knowledge more accessible. Work carried out throughout this project by the team and James Pallister increasingly shows that compiling for energy demonstrates benefits.
WP10 | Failure to engage with open source communities | 3 | 2 | 6 | Likelihood further reduced due to the success of the GNU Tools Cauldron presentation, which was reported more widely than just the GNU community.
WP9 | Compiler energy efficiency optimisation passes yield no measurable benefit | 2 | 2 | 4 | Proof-of-concept implementation demonstrates that this does work. Likelihood placed at 2, to reflect the fact that this will not be implemented in an actual compiler (see WP7).
All | Loss of key personnel | 3 | 2 | 6 | This has happened in respect of WP7, due to illness, but this is a non-core part of the project. Embecosm has recruited more staff, and the UoB work packages are almost complete, so further loss is unlikely to cause any problems.
All | Unable to recruit suitable staff | 0 | 3 | 0 | Risk deleted, as the project is nearing completion.
WP5 | Failure to identify viable features (technical issue) | 1 | 3 | 3 | Fallback is to use the MILEPOST features. We have started engagement with the MILEPOST team to gain as much as possible from their experience. Likelihood reduced in Q2, since we are now using the MILEPOST features. There is still some risk that we are unable to use any new features.
WP1, WP2 | Compiler infrastructure cannot be made generic | 1 | 2 | 2 | Ensuring good interface design before building infrastructure. Note that a non-fully-generic solution still has value. Likelihood reduced in Q2, since the implementation in progress appears to be completely viable for both GCC and LLVM via clean interfaces.
All | Failure of consortium to work together | 0 | 2 | 0 | Consortium has already worked together.
WP3 | MAGEEC v2 board doesn't work | 0 | 1 | 0 | Continue to use the existing board (the MAGEEC v2 board is a nice-to-have). Lowered in Q4, as we are successfully using v2 boards.
WP6 | Cannot measure per-function energy data | 6 | 2 | 12 | Identified relatively late on in Q4. We are looking at approaches including: a) using cycle-accurate modelling to apportion energy usage; b) using performance profiling as a proxy to apportion energy usage (see the second sketch after this table); c) using static energy analysis to apportion energy usage; d) using per-application energy usage as the per-function energy usage.
WP3 | Need to calibrate measurements | 3 | 3 | 9 | When using many measurement boards, we find that results vary depending on the actual chip, room temperature and other environmental factors. Mitigate by calibrating each test run with a standard benchmark (see the third sketch after this table).
WP3 | Lack of knowledge on programming the energy measurement board | 5 | 2 | 10 | James Pallister is the only person who knows how to code the firmware of the energy measurement board. Mitigation: train up additional engineers and document the software.
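
The Plackett-Burman mitigation in the WP4 data-gathering row can be sketched as follows. This is a minimal, illustrative example only: it assumes the third-party pyDOE2 package for generating the design matrix, and the flag list is an arbitrary selection of GCC options rather than the set actually screened by the project.

 # Build an 8-run Plackett-Burman screening design over seven compiler flags,
 # so main effects can be estimated without running all 2^7 flag combinations.
 from pyDOE2 import pbdesign
 
 flags = ["-funroll-loops", "-ftree-vectorize", "-finline-functions",
          "-fomit-frame-pointer", "-ftree-loop-im", "-fgcse", "-fivopts"]
 
 design = pbdesign(len(flags))   # one row per run, +1/-1 entry per flag
 for run in design:
     enabled = [flag for flag, level in zip(flags, run) if level > 0]
     print(" ".join(enabled) or "(no optional flags)")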
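
For the WP6 per-function measurement risk, option b) (profiling as a proxy) could be approximated along the lines below. This sketch simply apportions a whole-program energy reading in proportion to profiled cycle counts; the helper name, function names and figures are invented for illustration and are not measured data.

 # Apportion a whole-program energy reading across functions in proportion
 # to their profiled cycle counts (option b in the WP6 row above).
 def apportion_energy(total_energy_j, cycles_per_function):
     total_cycles = sum(cycles_per_function.values())
     return {name: total_energy_j * cycles / total_cycles
             for name, cycles in cycles_per_function.items()}
 
 profile = {"fft": 1_200_000, "bitcount": 300_000, "main": 500_000}
 print(apportion_energy(0.042, profile))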
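
The WP3 calibration mitigation could work along the following lines. This is a hedged sketch only: it assumes each test run first measures a standard reference benchmark whose expected energy is known, then scales subsequent readings by the resulting correction factor. The helper names and all numbers are illustrative.

 # Derive a correction factor from a standard reference benchmark, then
 # scale the run's subsequent readings to compensate for board-to-board
 # and environmental variation.
 def correction_factor(expected_reference_j, measured_reference_j):
     return expected_reference_j / measured_reference_j
 
 def calibrate(reading_j, factor):
     return reading_j * factor
 
 # This board reads the reference benchmark 5% high, so scale readings down.
 factor = correction_factor(0.100, 0.105)
 print(calibrate(0.042, factor))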