Meeting 01-08-2014

MAGEEC Meeting 01/08/2014 - Bristol


  • GCC:
    • New patch available to extract more features (41 features)
  • LLVM:
    • No new pass manager available. Problem with plugins.
    • Shall we write a plugin interface? No; no time. The plugin interface changes with each release.
    • Feature Extractor has had no progress. Needs some.
  • [SC] Contact David Malcolm at Red Hat re abstracting the GCC plugin interface to make it consistent across releases.


  • All done.
  • Review of the initial implementation was covered in Greg's blog post.


  • How to get per-function energy basis?
    • Sampling with triggers has too much latency/error.
    • Template-based approach (a la crypto)?
    • Use cycle-accurate simulation to predict when functions occur and sample only at these points.
    • [SC to give priority to solving the measurement problem]
  • James' measurement errors:
    • AVRs with non-grounded pins have big energy variation → need to ground them.
    • AVR position in the ZIF socket has a 5% variation left → right.
    • [JP] Blog post on how not to measure.
    • Board-to-board variation in energy consumption of the ATMEGA328 is ~10%. Batch is “1404”.
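The simulation-triggered sampling idea above can be sketched as a post-processing step: a cycle-accurate simulation predicts which cycle ranges each function occupies, and energy samples taken at those cycles are attributed back to functions. The interval and sample formats below are hypothetical, for illustration only; they do not reflect any actual MAGEEC trace format.

```python
# Hypothetical sketch: attribute energy samples to functions using
# function entry/exit cycles predicted by cycle-accurate simulation.
from bisect import bisect_right

def attribute_energy(intervals, samples):
    """intervals: list of (entry_cycle, exit_cycle, function_name),
    sorted by entry_cycle and non-overlapping.
    samples: list of (cycle, energy_joules).
    Returns {function_name: total_energy}."""
    starts = [iv[0] for iv in intervals]
    totals = {}
    for cycle, energy in samples:
        # Find the interval whose start is at or before this cycle.
        i = bisect_right(starts, cycle) - 1
        if i >= 0 and cycle <= intervals[i][1]:
            name = intervals[i][2]
            totals[name] = totals.get(name, 0.0) + energy
    return totals

intervals = [(0, 99, "init"), (100, 499, "fft"), (500, 599, "output")]
samples = [(50, 1.0e-6), (200, 2.0e-6), (300, 2.5e-6), (550, 0.5e-6)]
print(attribute_energy(intervals, samples))
```

Samples that fall outside any predicted interval are simply dropped, which is one way the latency/error concern above would surface in practice.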


[JB] to chase Joern to see what progress has been made.


  • Currently using -O0 with some prediction, selecting from the -O2 passes. Some decisions have been made, but it is unclear how different the result really is from base -O2.
  • George: 7 hours to do 2.5 runs of BEEBS V2 on one board; JP says it should take ~4000 s. The problem is time-outs.
  • AB has rolled out compilation patches for BEEBS.
    • DejaGNU is working.
    • V2 BEEBS still on target for end-of-August release
  • Data variance:
    • The order of data elements, as well as their number, can influence results.
    • Best/worst/average case analysis.
      • Large number of runs would be needed
      • Ideally, we'd auto-generate best/worst cases or hand-program cases
  • [Future RQ] A separate evaluation: for those categorisable programs and data sets, we can hand craft tests to see how MAGEEC responds on best/worst/avg cases
  • Case studies
    • [SG] to run MAGEEC over the weather station.
    • [SG] get the satellite MSP430 code working
    • RTOS (battery manager on RTOS) – perhaps too ambitious for this project.
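The data-variance point above is that the same algorithm on a same-sized input can do very different amounts of work depending on element order, which is why best/worst/average-case inputs matter. A stand-in illustration (insertion sort with a comparison counter; not a BEEBS benchmark) shows how hand-crafted best and worst cases bracket the random case:

```python
# Illustrative only: sorted input is insertion sort's best case,
# reverse-sorted its worst; a random permutation sits in between.
import random

def insertion_sort_comparisons(data):
    """Return the number of comparisons insertion sort performs."""
    a = list(data)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 100
best = insertion_sort_comparisons(range(n))            # already sorted
worst = insertion_sort_comparisons(range(n, 0, -1))    # reverse sorted
avg = insertion_sort_comparisons(random.sample(range(n), n))
print(best, worst, avg)
```

For n = 100 the best case does 99 comparisons and the worst 4950, a 50× spread from data order alone, which is the kind of gap the proposed best/worst/average analysis would need to capture.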

Next project review is 8th September in Lymington @ 10:30am

For this, two major targets:

  • Working case studies to demonstrate
  • BEEBS V2.0 release



[SH+KIE] By next meeting: structure of the monster MAGEEC journal paper with all authors, describing the process of developing a system with machine learning, plus the evaluation. It adds in the software engineering and evaluation; it is not a push on the novelty of the machine learning.

  • What's the story
  • What results do we need?
  • BEEBS paper: explains the V2.0 benchmark suite. Evaluation after MAGEEC.
  • Linux Plumbers event in Dortmund? (ENTRA+MAGEEC)
  • Innovate UK
  • [JB] Update events page
  • [SH] Add CASES + Craig's ILP (using BEEBS) to page
  • FOSDEM 2015
    • Proposal for 1 day workshop on compilers (morning: compilers; afternoon: compilers and energy efficiency)
  • [SC+ABOpen] needs setting up.
  • Blog post schedule (roughly chronological)
    • [ABOpen] “If you want a wand, put orders in now”. End of September target for fabrication.
    • [GC] Update PCA blog draft and publish
    • [OR] Blog post on what we are not looking at in project and ILP paper.
    • [SG] Case study of weather station with MAGEEC running
    • [GF] Initial analysis of pass effectiveness.
    • [JB] GNU cauldron
    • [Aburgess] BEEBS 2.0 (when released)
  • “WAND” approved. Backronym to be considered ;)
  • [ABOpen] Update wiki accordingly.

Summer work


  • Need to improve the running of the tests

Using Plackett-Burman analysis to select tests, we need to see which flags carried what weighting.

Experimental design (stats)

TODO: Ask if there are any EngMaths people who can help with the stats of the work.

Mann-Whitney comparison algorithms.

Bootstrap and jackknife sampling algorithms.

[2 Week goal]: Minimal set of analysis done.
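The statistical tools named above can be sketched on made-up energy numbers: a Mann-Whitney U statistic (a rank-based comparison that makes no normality assumption, suitable for noisy per-board energy data) and a percentile bootstrap confidence interval for a mean. The data values below are invented for illustration.

```python
# Hedged sketch of Mann-Whitney U and a bootstrap CI, stdlib only.
import random

def mann_whitney_u(xs, ys):
    """U statistic for sample xs vs ys (ties counted as 0.5)."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def bootstrap_ci_mean(data, n_resamples=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

board_a = [10.1, 10.3, 10.2, 10.4, 10.2]   # mJ, hypothetical
board_b = [11.0, 11.2, 10.9, 11.1, 11.3]
print(mann_whitney_u(board_a, board_b))     # 0.0: every a below every b
print(bootstrap_ci_mean(board_a))
```

A U of 0 (or the maximum, len(xs)*len(ys)) indicates complete separation of the two samples; values near the midpoint suggest no real difference between boards.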


Some BEEBS work, keeping AB happy.

  • PCA analysis on BEEBS V2 per-function feature vectors for x86 (680 total).
    • Looking at variation of features
    • TODO: Extend to include energy.
  • [2 week goal: GC]: Apply the C5 decision tree to the PCA results to see whether a (supervised-learning) prediction for energy based on PCAs can be produced. Builds on Moon's work from last year.
    • OR has a lot of knowledge about how to go about this.
  • [JB] To talk to Atmel about what output from cycle-accurate simulation can be made available.
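The PCA step above can be sketched in miniature. This is a pure-Python power-iteration version finding only the first principal component, on made-up per-function feature vectors, not the real 680 MAGEEC vectors or the actual analysis pipeline:

```python
# Illustrative PCA: centre the data, build the covariance matrix,
# then power-iterate to the dominant eigenvector (first PC).
def first_principal_component(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centred = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(centred[i][a] * centred[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two strongly correlated "features": the first PC should point along
# the diagonal (roughly [0.707, 0.707], up to sign).
data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8], [5.0, 5.1]]
print([round(x, 2) for x in first_principal_component(data)])
```

Highly correlated features collapsing onto one component is exactly the "variation of features" question above: if a few components explain most of the variance, the 41-feature vectors are largely redundant.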


  • GIMPLE tree analysis of instruction distribution of benchmarks (BEEBS V1)
  • Also AVR for static and dynamic
  • Reason is to check that BEEBS has a broad spread.
  • How to group the instructions?
    • In a way that includes e.g. compare and skip (count it twice?)
      • Or between register and non-register architectures
    • Data movement should be split between register-register and memory operations.
    • Do we separate reads and writes?
  • Suggested set:
    1. ALU operations (inc. compare)
    2. Memory ops
      1. Each memory access logged separately
      2. Reads/writes
    3. Move register/accumulator
    4. Control flow
    5. Floating point
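The five-way grouping above can be sketched as a lookup table over a deliberately incomplete set of AVR mnemonics. A real classifier would need the full ISA table; note that CPSE (compare and skip) is put in the ALU group here even though it also affects control flow, which is the "count it twice?" question above, and the floating-point group is empty because AVR has no hardware FP (it would be populated for a Cortex-M4).

```python
# Illustrative grouping only; the mnemonic sets are incomplete.
GROUPS = {
    "alu":     {"ADD", "ADC", "SUB", "AND", "OR", "EOR", "INC", "DEC",
                "CP", "CPC", "CPSE"},        # compares counted as ALU
    "memory":  {"LD", "ST", "LDS", "STS", "PUSH", "POP"},
    "move":    {"MOV", "MOVW", "LDI"},
    "control": {"RJMP", "JMP", "CALL", "RET", "BRNE", "BREQ"},
    "float":   set(),                        # AVR has no hardware FP
}

def classify(mnemonic):
    m = mnemonic.upper()
    for group, members in GROUPS.items():
        if m in members:
            return group
    return "unknown"

def distribution(trace):
    """Count instruction-group frequencies in a list of mnemonics."""
    counts = {}
    for m in trace:
        g = classify(m)
        counts[g] = counts.get(g, 0) + 1
    return counts

print(distribution(["LDI", "ADD", "CP", "BRNE", "ST", "RET"]))
```

Running this over static code and over a dynamic trace gives the two distributions mentioned above, and comparing them across BEEBS benchmarks is the "broad spread" check.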


  1. [2 week goal] Get the data for BEEBS V2 in AVR and Cortex M3/4. Post on github. Blog post
  2. Then weather station running MAGEEC'd code.