Meeting 21 July 2014

MAGEEC Meeting 21/07/2014 - Lymington


Review of action points from last time


Greg


  • Script to generate PCA configurations
  • 700 runs on 10 programs from BEEBS: training ~15s
  • GC Q: Is training time exponential in the number of programs?
  • Communal database needs SC input
  • Auto-generation of benchmarks.

George

  • DejaGnu framework
  • ARM tools
  • Cuttlefish boards made; programmers needed.
  • Plackett-Burman (see the design sketch after this list)
    • Aliasing: not a high priority
    • Assumes factors are independent
    • Aim is to find the factors that actually matter
    • Fractional factorial design (FFD) on factors in a cascaded manner to keep the number of tests tractable. Estimate ~2000 tests taking ~5h in parallel on 6 boards.
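For reference, a minimal sketch of the Plackett-Burman approach: the standard 12-run design for up to 11 two-level factors (e.g. compiler flags on/off), built from the classical generating row by cyclic rotation. The run and factor counts are illustrative only, not the cascaded ~2000-test plan above.

 /* Minimal sketch: the standard 12-run Plackett-Burman design for up
  * to 11 two-level factors.  Rows 1..11 are cyclic rotations of the
  * classical generating row; row 12 is all -1. */
 #include <stdio.h>

 #define RUNS    12
 #define FACTORS 11

 int main(void)
 {
     /* Classical generating row for the 12-run design. */
     const int gen[FACTORS] = { +1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1 };

     for (int run = 0; run < RUNS; run++) {
         for (int f = 0; f < FACTORS; f++) {
             /* Last run is all -1; the rest rotate gen[]. */
             int level = (run == RUNS - 1) ? -1 : gen[(f + run) % FACTORS];
             printf("%+d ", level);
         }
         printf("\n");
     }
     return 0;
 }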

Stoil

  • SG: Categorisation of BEEBS V2.0 not started

Generic issues

  • [AB+JP+GF] Agreed method to compile in pseudo-random data sets in a generic manner (see the sketch after this list)
  • GIMPLE SSA headers missing from Ubuntu => MAGEEC build
  • [JP+SG] Energy measurement code needs to be compilable.
  • [SH] IDD header on measurement boards needs to be taller.
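One possible shape for the pseudo-random data-set method (a sketch under assumptions, not the method actually agreed): a fixed-seed LCG fills the input arrays at startup, so every toolchain and board sees identical data with no file I/O or host rand(). BENCH_SEED, DATA_SIZE and bench_init_data() are hypothetical names.

 /* Sketch: compile a reproducible pseudo-random data set into a benchmark.
  * A fixed seed (override with -DBENCH_SEED=... if desired) drives a small
  * LCG, so every platform computes identical inputs at startup. */
 #include <stdint.h>

 #ifndef BENCH_SEED
 #define BENCH_SEED 0x12345678u   /* hypothetical default seed */
 #endif

 #define DATA_SIZE 256            /* hypothetical input size */

 static uint32_t bench_data[DATA_SIZE];

 /* Deterministic 32-bit LCG (Numerical Recipes constants). */
 static uint32_t lcg_next(uint32_t *state)
 {
     *state = *state * 1664525u + 1013904223u;
     return *state;
 }

 void bench_init_data(void)
 {
     uint32_t state = BENCH_SEED;
     for (int i = 0; i < DATA_SIZE; i++)
         bench_data[i] = lcg_next(&state);
 }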

BEEBS

  • Branching

Calibration of hardware

  • From tech node analysis, perhaps 2% total variation in system energy (0.4% static power; 1-2% dynamic power) with temperature variation.
  • Board-to-board variation needs more analysis.
    • [GC+JP] Measure and analyse the variation
  • [JP] Actually measure board-to-board variation.
  • We may need multiple runs of energy measurements across multiple boards to smooth the probability distribution.
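A sketch of the smoothing idea, assuming energy readings pooled across boards and runs; the board/run counts and the placeholder data are hypothetical. Pooling gives a mean and spread that can be set against the ~2% variation estimate above.

 /* Sketch: pool repeated energy measurements across boards and runs and
  * report mean and relative spread.  In practice each entry would come
  * from one measurement run on one board. */
 #include <math.h>
 #include <stdio.h>

 #define BOARDS 6
 #define RUNS   10

 int main(void)
 {
     double energy[BOARDS][RUNS];  /* joules, filled in by the harness */
     double sum = 0.0, sumsq = 0.0;
     const int n = BOARDS * RUNS;

     /* Placeholder data so the sketch runs standalone. */
     for (int b = 0; b < BOARDS; b++)
         for (int r = 0; r < RUNS; r++)
             energy[b][r] = 1.0 + 0.01 * b + 0.002 * r;

     for (int b = 0; b < BOARDS; b++)
         for (int r = 0; r < RUNS; r++) {
             sum   += energy[b][r];
             sumsq += energy[b][r] * energy[b][r];
         }

     double mean = sum / n;
     double sd = sqrt((sumsq - n * mean * mean) / (n - 1));
     printf("mean %.4f J, sd %.4f J (%.2f%%)\n", mean, sd, 100.0 * sd / mean);
     return 0;
 }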

BEEBS

  • CMSIS-DSP tests
  • BEEBS name across repository → “Bristol/Embecosm Energy Benchmark Suite”
  • ARM CMSIS Maths + OS functions to add
  • Push on self-validation with conditional compilation (see the sketch below)
    • Can be turned off for code-size analysis
    • Validation runs after the measurement STOP trigger
  • AB is 'gatekeeper' of the GitHub repository. Push requests to him.
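A minimal sketch of that self-validation scheme, assuming BEEBS-style start_trigger()/stop_trigger() measurement hooks; EXPECTED_CHECKSUM and the NO_SELF_VALIDATE macro are hypothetical names. The check sits after the STOP trigger so it is excluded from the measurement, and compiling it out gives the code-size configuration.

 /* Sketch: self-validation under conditional compilation.  Build with
  * -DNO_SELF_VALIDATE to drop the check (e.g. for code-size analysis). */
 extern void start_trigger(void);
 extern void stop_trigger(void);
 extern int benchmark(void);        /* returns a checksum of its results */

 #ifndef EXPECTED_CHECKSUM
 #define EXPECTED_CHECKSUM 42       /* hypothetical known-good value */
 #endif

 int main(void)
 {
     int result;

     start_trigger();
     result = benchmark();
     stop_trigger();                /* energy measurement ends here... */

 #ifndef NO_SELF_VALIDATE
     /* ...so the validation below is not measured. */
     if (result != EXPECTED_CHECKSUM)
         return 1;                  /* flag an incorrect run */
 #endif
     return 0;
 }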

Case studies

  • ARM – built out of CMSIS DSP functions
  • CMSIS-RTOS – Keil RTX

Optimisations on a per-function basis

  • Some kind of profiling to show the per-function distribution of execution costs is needed
    • ? Is -O2 a good basis for counting instructions?
  • We may need to assume that time and energy are proportional for this.
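If -O2 instruction counting is the basis, a gprof profile (compile and link with -pg, run once, then run gprof on the binary) gives the per-function distribution of execution counts. Per-function optimisation itself is available in GCC via the optimize function attribute, sketched below; hot_kernel() and cold_setup() are hypothetical names.

 /* Sketch: per-function optimisation levels with GCC's optimize
  * attribute.  A profiler first identifies the hot functions; those can
  * then be optimised differently from the rest of the file. */

 __attribute__((optimize("O3")))    /* hot: optimise for speed/energy */
 static long hot_kernel(long n)
 {
     long acc = 0;
     for (long i = 0; i < n; i++)
         acc += i * i;
     return acc;
 }

 __attribute__((optimize("Os")))    /* cold: optimise for size */
 static long cold_setup(void)
 {
     return 1000000;
 }

 int main(void)
 {
     return (int)(hot_kernel(cold_setup()) & 0xff);
 }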

Work going forward


Stoil

  • Simulation of BEEBS for categorisation
    • AVR simulavr simulator → trace
    • QEMU for ARM to trace each instruction. Also CGEN & Keil.
    • [2 week challenge] Analyse different evaluations of BEEBS at three different levels, following this question:
      • ??? Do we categorise based on the input program (generic) or the instruction output (architecture-specific)? What does the feature vector (intermediate level) expose from these for the generic level? (See the sketch after this list.)
    • SC will help from 28/07/2014
    • gprof for dynamic execution count
    • Raspberry Pi may be a good compilation platform.
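On the generic-versus-architecture-specific question, one concrete form of the architecture-specific option is a dynamic feature vector of instruction-mnemonic counts taken from a simulator trace. The sketch below assumes a pre-processed trace with one mnemonic per line; real simulavr or QEMU trace output would need extra parsing first.

 /* Sketch: count instruction mnemonics from a trace on stdin to form an
  * architecture-specific dynamic feature vector. */
 #include <stdio.h>
 #include <string.h>

 #define MAX_OPS 256

 int main(void)
 {
     char names[MAX_OPS][16];
     long counts[MAX_OPS] = { 0 };
     int nops = 0;
     char line[64];

     while (fgets(line, sizeof line, stdin)) {
         line[strcspn(line, "\r\n")] = '\0';   /* strip newline */
         if (line[0] == '\0')
             continue;
         int i;
         for (i = 0; i < nops; i++)            /* linear lookup */
             if (strcmp(names[i], line) == 0)
                 break;
         if (i == nops && nops < MAX_OPS) {    /* new mnemonic */
             strncpy(names[nops], line, sizeof names[0] - 1);
             names[nops][sizeof names[0] - 1] = '\0';
             nops++;
         }
         if (i < nops)
             counts[i]++;
     }

     /* The (mnemonic, count) pairs are the feature vector. */
     for (int i = 0; i < nops; i++)
         printf("%s %ld\n", names[i], counts[i]);
     return 0;
 }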

George

  • [2 weeks] AVR tests to do with new hardware

Greg

  • Start moving on PCA of BEEBS.
    • Coordinate with Craig.

Oliver

  • Short blog post

Simon

  • Board quotations
    • 100, 250, (500), 1000 volumes
    • Lead times

James

  • Calibration tests on Embecosm boards