Meeting 19 August 2014, Embecosm
Review of action points from last time
Blog posts
- [ABOpen] Oliver's post is ready to go.
- [JP] Blog post on how not to measure
SC Update
- [SC] Contact David Malcolm at Red Hat about abstracting the GCC plugin interface to make it consistent across releases.
- Profiling of time spent per function is now available. Large variation: some functions are large, some too small to measure. Suggest allocating power to blocks too short to measure pro-rata against a longer measured span.
- ? How long does training take at this granularity, given that simulation is slower than native execution?
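The pro-rata allocation suggested above can be sketched as follows. This is an illustrative stdlib-only example, not MAGEEC tooling: the function name, units, and the use of cycle counts as weights are all assumptions.

```python
# Hypothetical sketch: split one measured energy figure across blocks
# that are individually too short to measure, pro-rata by cycle count.
def allocate_pro_rata(total_energy_uj, block_cycles):
    """Return each block's share of a measured span's energy (uJ)."""
    total_cycles = sum(block_cycles.values())
    return {name: total_energy_uj * cycles / total_cycles
            for name, cycles in block_cycles.items()}

# Example: a 120 uJ span covering three short blocks.
shares = allocate_pro_rata(120.0, {"f1": 300, "f2": 100, "f3": 600})
```

The shares sum back to the measured total, so no energy is invented or lost by the allocation.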
Calibration
- Physical location variation, after taking placement into account, is ~3%.
- Need a calibration step, with extra start and stop triggers around a known calibration routine, compiled in with the main benchmark to save flashing time.
- ? Can it be placed at the end of memory so that it does not influence the benchmarked code? We agreed this is probably not a big issue, but we should keep an eye on the calibration not influencing measurement.
- [JP] To commit an appropriate calibration benchmark to BEEBS.
[JB] To chase Joern to see what progress has been made on the flash alignment optimisation.
- [JB] Joern is working on it; progress may be behind schedule. JB to confirm whether the deadline of the 29th will be met.
[SH+KIE] By next meeting, produce the structure of the monster MAGEEC journal paper with all authors, describing the process of developing a system with machine learning, plus the evaluation. This adds in the software engineering and evaluation; it is not a push on the novelty of the machine learning.
[JB] Update events page
[SH] Add CASES + Craig's ILP (using BEEBS) to page
Boards:
[ABOpen] GBP50 target for the board, base and cables. ABOpen creating publicity this week for end-of-September orders.
George:
- Comparison and sampling algorithms – not started.
- Minimal analysis started. Script error → not much data yet.
- Substantiates the <3% test-to-test board/run variation bound.
- Identifies the three most significant flags, both helping and harming.
- Moving forward: complete the analysis runs; remove outliers below a 100 ms threshold; manually review runs under 1 s.
- Then comparison algorithms
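The filtering step described above (discard below 100 ms, flag under 1 s for manual review) could look roughly like this. The run-record layout (`time_s` key) is an assumption for illustration; the thresholds are the ones from the minutes.

```python
# Sketch of the proposed outlier triage, assuming run times in seconds.
def triage_runs(runs):
    """Partition runs into discarded, flagged-for-review, and accepted."""
    discarded = [r for r in runs if r["time_s"] < 0.1]    # < 100 ms: drop
    review = [r for r in runs if 0.1 <= r["time_s"] < 1.0]  # < 1 s: manual review
    accepted = [r for r in runs if r["time_s"] >= 1.0]
    return discarded, review, accepted

runs = [{"name": "a", "time_s": 0.05},
        {"name": "b", "time_s": 0.5},
        {"name": "c", "time_s": 2.0}]
```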
[SC&GF] Some pass combinations give weird results. To examine whether this is a bug.
Stoil
- GIMPLE tree analysis of the instruction distribution of benchmarks (BEEBS V1)
- Counting the different statements.
- TODO: Graph the results
- What grouping
- Existing ones + sub-division of ALU into horizontal and vertical (JP to advise)
- We can connect the basic block GIMPLE analysis to the feature vector.
- An extension is to consider a histogram of the number of GIMPLE statements (a block size approximation) per basic block
- [RQ] We could compare the machine code both before and after MAGEEC to see, e.g., whether MAGEEC learns to substitute memory accesses with ALU ops.
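The per-block histogram idea above can be sketched in a few lines. The input format (a flat list of statement counts, one per basic block) and the bucket width are assumptions for illustration, not the actual GIMPLE dump format.

```python
# Illustrative sketch: bucket basic blocks by their GIMPLE statement
# count, approximating a block-size distribution for a benchmark.
from collections import Counter

def block_size_histogram(stmts_per_block, bucket=5):
    """Count blocks per size bucket, keyed by the bucket's lower bound."""
    return Counter((n // bucket) * bucket for n in stmts_per_block)

# Example: six basic blocks with these statement counts.
hist = block_size_histogram([1, 3, 7, 12, 14, 2])
```

The resulting histogram could then be appended to the feature vector alongside the existing statement counts.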
- Weather station case study up and running, with remote data send and receive.
- [SG] Need to deploy MAGEEC to see the improvement on the weather station.
- Iterative compilation on weather station hardware.
- Compare MAGEEC's output with MAGEEC trained on other platforms.
- [JP] To review progress on the energytool interactive improvement.
- Satellite case study needs MSP430 support compiled in.
- Battery + RTOS on STMDiscovery
Greg
- PCA results, including energy measurements from -O0
- Seems that there are a small number of features that influence energy a lot.
- Energy itself is the best indicator of a benchmark.
- Features need normalising to prevent over-easy correlation.
- The results set needs more detailed exploration.
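The normalisation point above (per-feature scaling before looking for correlations) can be sketched with the standard library only. The matrix layout (rows = benchmarks, columns = features) and z-score scaling are assumptions; other scalings would serve the same purpose.

```python
# Minimal sketch: normalise each feature column to zero mean and unit
# variance so that large-magnitude features do not dominate correlation
# or PCA results.
from statistics import mean, pstdev

def normalise_columns(rows):
    """Z-score each column of a rows-by-features matrix."""
    cols = list(zip(*rows))
    scaled = []
    for col in cols:
        m, s = mean(col), pstdev(col)
        # A constant feature has zero spread; map it to all zeros.
        scaled.append([(x - m) / s if s else 0.0 for x in col])
    return [list(r) for r in zip(*scaled)]

# Example: two features on very different scales become comparable.
norm = normalise_columns([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
```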
- Automatic program generation work
- Has ideas, but will not be able to complete this in the remaining time
- Going forward
- How to predict energy in terms of the features we have already captured.
- Could use WEKA to explore how to use supervised learning to predict energy from the features already captured
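As a toy illustration of the supervised-learning idea (WEKA would be the actual tool), energy can be predicted from a feature vector by looking up the nearest already-measured benchmark. Everything here is a stdlib-only sketch; the data and 1-nearest-neighbour choice are illustrative, not a proposed method.

```python
# Toy sketch: predict a benchmark's energy as the energy of its
# nearest neighbour (Euclidean distance) in feature space.
import math

def predict_energy(features, training):
    """training: list of (feature_vector, measured_energy) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, energy = min(training, key=lambda t: dist(t[0], features))
    return energy

# Example training set: two measured benchmarks.
training = [((0.0, 0.0), 1.5), ((10.0, 10.0), 6.2)]
```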
BEEBS
- Some cross-compilation problems with -Wall for ~11 benchmarks
- [SC] Add noinline_benchmarks to BEEBS
- [AB] New super-duper header mechanism to prevent duplicated code bases from diverging and still allow for build mechanisms to work.
- Ignored benchmarks infrastructure to be on a per-test basis
- BEEBS V2.0 ready for release
- Approximately 10 benchmarks have queries over release under the GPL. Need to clarify licensing or find replacements (resourcing?)