MAGEEC – User contributions [en] (MediaWiki export, retrieved 2024-03-29)

Meeting 02-09-2014 — Simon, 2014-09-02
<hr />
<div><center>'''MAGEEC Meeting 02/09/2014'''</center><br />
<br />
<center>''Bristol''</center><br />
<br />
<center>''Present: SH, JB, JP, KIE, GC, SG, GF, CB''</center><br />
<br />
<br />
* Upcoming events updated on wiki<br/> <br />
<br />
* We need to focus on documenting now we are moving towards the end of the project.<br/> <br />
<br />
* '''[JP]''' By November Paper needed on BEEBS V2.0 + instruction distributions + effect on ML.<br />
* '''[JP]''' Write up calibration problem and methodology.<br/> <br />
<br />
* '''[GC]''' This week: write up the results and methodology from your work and ensure all the files are collected and made available to the project repository.<br/> <br />
<br />
* '''RQ:''' Having more benchmarks with different data values – how much variance does this add? Is it significant? Can we claim that these are 'additional tests' that push BEEBS up to 100 tests?<br/> <br />
<br />
* '''[AB/SG]''' Should the scripts and/or the graphs of instruction distributions be included alongside BEEBS in the repository?<br/> <br />
<br />
* Joern is unlikely to be able to complete the optimisation integration into GCC, due to the linker focus of these optimisations.<br/> <br />
<br />
* WP8.1 – Training on Cortex M3 still needs to be done. Embecosm will deploy some resource on this. [https://github.com/xobs/senoko-chibios-3/ https://github.com/xobs/senoko-chibios-3/] contains laptop battery controller.<br/> <br />
<br />
* '''[SG]''' WP8.1 – ATMega case study can validate work at Embecosm. Stoil to document weather station (inc. new blog post) and make hardware available.<br />
* '''[JP+SG]''' Add more measurement points to the weather station to separate out the energy used by the sensors and the processor.<br/> <br />
<br />
* Moon's MSc to be made available online<br/> <br />
<br />
* Draft MAGEEC paper headings completed<br />
* '''[SH]''' To organise paper flow and distribute tasks of sections to write<br/> <br />
<br />
* George has some new data plotting ranges of energy for benchmarks, for runs determined by Plackett-Burman<br/> <br />
<br />
* '''[GF]''' Run MAGEEC on case study on the small data set that has already been gathered.<br/> <br />
<br />
* Energy Measurement Boards available – order via Embecosm's website<br />
<br />
[[Category:Meetings]]</div>
<hr />
<div><center>'''MAGEEC Meeting 19/08/2014 -Lymington'''</center><br />
<br />
<center>Present: OR, JP, GC, SG, GF, CB, SC, AB, SH</center><br />
<br />
<br />
'''Review of action points from last time'''<br />
<br />
<br />
Blog posts<br />
<br />
* '''[ABOpen] '''Oliver's ready to go.<br />
* [JP] Blog post on how not to measure<br />
<br />
SC Update<br />
<br />
* '''[SC]''' contact David Malcolm at Red Hat re abstracting the GCC plugin interface to make it consistent across releases.<br />
* Profiling of time spent per function now available. Large variation: some functions large, some too small to measure. Suggestion: allocate energy to the shorter blocks pro-rata.<br />
** ? How long does it take to train with this granularity, since simulation takes longer than running.<br />
<br />
Calibration<br />
<br />
* Physical location variation after taking into account placement ~3%<br />
* Need a calibration step with extra start and stop trigger with a known calibration routine. Compiled in with the main benchmark to save flashing time<br />
** ? Can it be at the end of memory so that it does not influence the benchmarked code. We agreed that this is probably not a big issue, but we should keep an eye on the calibration not influencing the measurement.<br />
** '''[JP]''' To commit an appropriate calibration benchmark to BEEBS.<br />
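The correction implied by this calibration step can be sketched as follows. This is a minimal illustration, not the project's actual tooling: the function name and the 3% tolerance are assumptions, the latter taken from the placement-variation figure above.

```python
def calibrate_runs(runs, reference_cal_energy, tolerance=0.03):
    """runs: list of (cal_energy, benchmark_energy) pairs, one per flash/run.
    Each benchmark reading is scaled by reference/measured for the known
    calibration routine; runs whose calibration reading drifts by more than
    `tolerance` (~3%, the placement variation noted above) are flagged."""
    corrected, flagged = [], []
    for cal_energy, bench_energy in runs:
        factor = reference_cal_energy / cal_energy
        corrected.append(bench_energy * factor)
        if abs(factor - 1.0) > tolerance:
            flagged.append((cal_energy, bench_energy))
    return corrected, flagged
```

Because the calibration routine is compiled in with the benchmark, both readings come from the same flash cycle, so the ratio cancels board-to-board and placement effects to first order.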
<br />
'''[JB] '''to chase Joern to see what progress is on flash alignment optimisation.<br />
<br />
* [JB] Joern working on it. Progress may be behind schedule. JB to confirm whether deadline of 29<sup>th</sup> will be hit.<br />
<br />
'''''[SH+KIE] by next meeting''''' structure of monster MAGEEC journal paper with all authors, describing process of developing a system with machine learning, plus the evaluation. Adds in the software engineering + evaluation. Not a push on the novelty of the machine learning.<br />
<br />
<br />
'''[JB]''' Update events page<br />
<br />
'''[SH]''' Add CASES + Craig's ILP (using BEEBS) to page<br />
<br />
<br />
Boards:<br />
<br />
'''[ABOpen]''' GBP50 target for board, base and cables. ABOpen creating publicity this week for end of September orders.<br />
<br />
George:<br />
<br />
<br />
* Comparison and sampling algorithms – not started.<br />
* Minimal analysis started. Script error → not much data yet.<br />
** Substantiates <3% board/run variation test-to-test bound<br />
** Identifies the three {most significant helping and harming} flags.<br />
** Moving forward: complete analysis runs. Remove outliers by <100ms threshold. Manual review on <1s run time. <br />
*** Then comparison algorithms<br />
<br />
[SC&GF] Some pass combinations give weird results. To examine and see if it's a bug.<br />
<br />
<br />
Stoil<br />
<br />
<br />
* GIMPLE tree analysis of instruction distribution of benchmarks (BEEBS V1)<br />
** Counting the different statements.<br />
** TODO: Graph the results<br />
*** What grouping?<br />
**** Existing ones + sub-division of ALU into horizontal and vertical (JP to advise)<br />
** We can connect the basic-block GIMPLE analysis to the feature vector.<br />
*** An extension is to consider a histogram of the number of GIMPLE statements (a block size approximation) per basic block<br />
** '''[RQ]''' We could compare the machine code allocations both before and after MAGEEC to see e.g. if MAGEEC learns to substitute memory accesses with ALU ops.<br />
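The per-basic-block histogram suggested above could be computed along these lines. This is an illustrative sketch only; the real analysis would walk GCC's GIMPLE statements via the plugin, which is not shown, and the bucket width is an arbitrary choice.

```python
from collections import Counter

def block_size_histogram(statement_counts, bucket=4):
    """statement_counts: number of GIMPLE statements in each basic block.
    Returns a histogram bucketed into ranges of `bucket` statements -- the
    'block size approximation' feature suggested above."""
    return Counter((count // bucket) * bucket for count in statement_counts)
```

The resulting counts could be appended to the existing feature vector as extra dimensions.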
<br />
* Weather station case study up and running, with remote data send and receive.<br />
** '''[SG]''' Need to deploy MAGEEC to see the improvement on the weather station.<br />
*** Iterative compilation on weather station hardware.<br />
*** Compare MAGEEC's output with MAGEEC trained on other platforms.<br />
** [JP] Energytool interactive improvement by review progress.<br />
* Satellite case study needs compilation in of MSP430 support.<br />
* Battery + RTOS on STMDiscovery<br />
<br />
'''Greg'''<br />
<br />
<br />
* PCA results including Energy measurement from -O0<br />
** Seems that there are a small number of features that influence energy a lot.<br />
** Energy itself is the best indicator of a benchmark.<br />
** Features need normalising to prevent over-easy correlation.<br />
** The results set needs more detailed exploration.<br />
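The normalisation point matters because raw feature magnitudes differ wildly (e.g. instruction counts vs. ratios), so un-normalised PCA rewards whichever feature happens to be largest. A minimal sketch with z-score normalisation, assuming a plain (programs × features) matrix rather than the project's actual data pipeline:

```python
import numpy as np

def pca(feature_matrix):
    """PCA over a (programs x features) matrix. Each feature is z-score
    normalised first so no large-magnitude feature dominates the components."""
    X = np.asarray(feature_matrix, dtype=float)
    std = X.std(axis=0)
    std[std == 0] = 1.0                        # constant features: leave centred at 0
    X = (X - X.mean(axis=0)) / std
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # eigh returns ascending order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return X @ eigvecs, eigvals / eigvals.sum()   # scores, explained-variance ratio
```

A small number of components carrying most of the explained variance would match the observation above that a few features dominate energy.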
<br />
* Automatic program generation work<br />
** Ideas, but not going to be able to complete in the remaining time<br />
<br />
* Going forward<br />
** How to predict energy in terms of the features we have already captured.<br />
*** Could use WEKA to explore how to use supervised learning to predict energy from the features already captured<br />
<br />
'''BEEBS'''<br />
<br />
<br />
* Some cross-compilation problems on -Wall for ~11 benchmarks<br />
* '''[SC] '''Add noinline_benchmarks on BEEBS<br />
* '''[AB]''' New super-duper header mechanism to prevent duplicated code bases from diverging and still allow for build mechanisms to work.<br />
* Ignored benchmarks infrastructure to be on a per-test basis<br />
* BEEBS V2.0 ready for release<br />
* Approx. 10 benchmarks have queries on release under GPL. Need to clarify license or find replacements ?resourcing?<br />
<br />
[[Category:Meetings]]</div>
<hr />
<div><center>'''MAGEEC Meeting 01/08/2014 - Bristol '''</center><br />
<br />
<br />
'''WP2'''<br />
<br />
* GCC:<br />
** New patch available to extract more features (41 features)<br />
<br />
* LLVM: <br />
** No new pass manager available. Problem with plugins.<br />
** Shall we write a plugin interface? No – no time; the interface changes with releases.<br />
** Feature Extractor has had no progress. Needs some.<br />
* [SC] contact David Malcolm at Red Hat re abstracting the GCC plugin interface to make it consistent across releases.<br />
<br />
'''WP5'''<br />
<br />
* All done.<br />
* Review of initial implementation was Greg's blog post.<br />
<br />
'''WP6'''<br />
<br />
<br />
* How to get per-function energy basis?<br />
** Sampling with triggers has too much latency/error.<br />
** Template-based approach (a la crypto)?<br />
** Use cycle-accurate simulation to predict when functions occur and sample only at these points. <br />
** [SC to give priority to solving the measurement problem]<br />
<br />
* James measurement errors:<br />
<br />
* AVRs with non-grounded pins have big energy variation → need to ground.<br />
* AVR position in ZIF socket has a 5% variation left → right<br />
* [JP] Blog post on how not to measure<br />
<br />
* Board-board variation of energy consumption of ATMEGA328 ~10%. Batch is “1404”<br />
<br />
'''WP7'''<br />
<br />
<br />
[JB] to chase Joern to see what progress is.<br />
<br />
<br />
'''WP8 '''<br />
<br />
<br />
* Currently using -O0 with some prediction, selecting from -O2 passes. Has made some decisions, but query how different from base -O2 we have really got.<br />
<br />
* George: 7h to do 2.5 runs of BEEBS V2 on one board. JP says it should take 4000s to do this. Problem is time-outs.<br />
* AB has rolled out compilation patches for BEEBS.<br />
** DejaGNU is working.<br />
** V2 BEEBS still on target for end-of-August release<br />
* Data variance:<br />
** The order of data elements as well as number can influence.<br />
** Best/worst/average case analysis.<br />
*** Large number of runs would be needed<br />
*** Ideally, we'd auto-generate best/worst cases or hand-program cases<br />
* [Future RQ] A separate evaluation: for those categorisable programs and data sets, we can hand craft tests to see how MAGEEC responds on best/worst/avg cases<br />
* Case studies<br />
** [SG] to run MAGEEC over the weather station.<br />
** [SG] get the satellite MSP430 code working<br />
** RTOS (battery manager on RTOS) – perhaps too ambitious for this project.<br />
<br />
Next project review is 8<sup>th</sup> September in Lymington @ 10:30am<br />
<br />
For this, two major targets:<br />
<br />
* Working case studies to demonstrate<br />
* BEEBS V2.0 release<br />
<br />
'''WP9:'''<br />
<br />
Paper:<br />
<br />
<br />
'''''[SH+KIE] by next meeting''''' structure of monster MAGEEC journal paper with all authors, describing process of developing a system with machine learning, plus the evaluation. Adds in the software engineering + evaluation. Not a push on the novelty of the machine learning.<br />
<br />
* What's the story<br />
* What results do we need?<br />
<br />
* BEEBS paper. Explains V2.0 benchmarks suite. Evaluation after MAGEEC.<br />
<br />
* Linux Plumbers event in Dortmund? (ENTRA+MAGEEC)<br />
* Innovate UK<br />
<br />
* [JB] Update events page<br />
* [SH] Add CASES + Craig's ILP (using BEEBS) to page<br />
* FOSDEM 2015<br />
** Proposal for 1 day workshop on compilers (morning: compilers; afternoon: compilers and energy efficiency)<br />
* [SC+ABOpen] BEEBS.eu needs setting up.<br/> <br />
<br />
* Blog post schedule (roughly chronological)<br />
** [ABOpen] “If you want a wand, put orders in now”. End of September target for fabrication.<br />
** [GC] Update PCA blog draft and publish<br />
** [OR] Blog post on what we are not looking at in project and ILP paper.<br />
** [SG] Case study of weather station with MAGEEC running<br />
** [GF] Initial analysis of pass effectiveness.<br />
** [JB] GNU cauldron<br />
** [Aburgess] BEEBS 2.0 (when released)<br />
<br />
* “WAND” approved. Backronym to be considered ;)<br />
* [ABOpen] Update wiki accordingly.<br />
<br />
'''Summer work'''<br />
<br />
<br />
George:<br />
<br />
* Need to improve the running of the tests<br />
<br />
With the use of Plackett-Burman analysis to select tests, we need to see which flags had what weighting.<br />
<br />
Experimental design (stats)<br />
<br />
<br />
TODO: Ask if there are any EngMaths people who can help with the stats of the work.<br />
<br />
<br />
Mann-Whitney comparison algorithms,<br />
<br />
<br />
Bootstrap and jackknife sampling algorithms.<br />
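These two bullets can be made concrete with a small stdlib-only sketch (illustrative only; the project would presumably use an off-the-shelf statistics package rather than this code):

```python
import random
import statistics

def mann_whitney_u(a, b):
    """Mann-Whitney U: count of pairs (x, y) with x < y, plus 0.5 per tie.
    U near len(a)*len(b)/2 means the two run sets look similar; U near 0 or
    len(a)*len(b) means one flag setting gives consistently lower readings."""
    return sum(0.5 if x == y else float(x < y) for x in a for y in b)

def bootstrap_ci(samples, stat=statistics.median, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for a statistic of one run set."""
    rng = random.Random(seed)
    boots = sorted(stat([rng.choice(samples) for _ in samples])
                   for _ in range(n_boot))
    return boots[int(n_boot * alpha / 2)], boots[int(n_boot * (1 - alpha / 2)) - 1]
```

Mann-Whitney is attractive here because energy readings are noisy and not obviously normal, so a rank-based test makes fewer assumptions than a t-test.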
<br />
<br />
'''[2 Week goal]: Minimal set of analysis done.'''<br />
<br />
<br />
Greg<br />
<br />
<br />
Some BEEBS work, keeping AB happy.<br />
<br />
* PCA analysis on BEEBS V2 per-function feature vectors for x86 (680 total).<br />
** Looking at variation of features<br />
** TODO: Extend to include energy.<br />
<br />
* '''[2 week goal: GC ]''': Apply C5 decision tree on the PCA results to see if you can produce a (supervised learning) prediction for energy based on PCAs. Builds on Moon's work from last year.<br />
** OR has a lot of knowledge about how to go about this.<br/> <br />
<br />
* [JB] To talk to Atmel about what output from cycle-accurate simulation can be made available.<br />
<br />
Stoil<br />
<br />
<br />
* GIMPL Tree analysis of instruction distribution of benchmarks (BEEBS V1)<br />
* Also AVR for static and dynamic<br />
* Reason is to check that BEEBS has a broad spread.<br />
* How to group the instructions?<br />
** In a way that includes e.g. compare and skip (count it twice?)<br />
*** Or between register and non-register architectures<br />
** Data movement should be split between register-register and memory operations.<br />
** ? Do we separate read and write?<br />
<br />
* Suggested set:<br />
<br />
# ALU operations (inc. compare)<br />
# Memory ops<br />
## Each access to memory logged separately<br />
## Reads/writes<br />
# Move register/accumulator<br />
# Control flow<br />
# Floating point<br />
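A distribution over the suggested groups could be computed with a simple lookup table. This is a sketch: the mnemonics and their group assignments below are hypothetical placeholders for illustration, not a real ISA table.

```python
from collections import Counter

# Hypothetical mnemonic -> group table; a real one would cover the whole ISA.
GROUPS = {
    "add": "alu", "sub": "alu", "cp": "alu",        # compare counted as ALU
    "ld": "mem_read", "st": "mem_write",            # each memory access logged
    "mov": "move", "ldi": "move",
    "rjmp": "control", "brne": "control",
    "fadd": "float",
}

def instruction_distribution(trace):
    """Fraction of executed instructions falling in each group; mnemonics
    missing from the table are reported under 'other' rather than dropped."""
    counts = Counter(GROUPS.get(m, "other") for m in trace)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}
```

The same function serves both static analysis (trace = instructions in the binary) and dynamic analysis (trace = executed instructions from a simulator).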
<br />
'''Priorities:'''<br />
<br />
# '''[2 week goal]''' Get the data for BEEBS V2 in AVR and Cortex M3/4. Post on github. Blog post<br />
# Then weather station running MAGEEC'd code.<br />
<br />
[[Category:Meetings]]</div>
<hr />
<div><center>'''MAGEEC Meeting 21/07/2014 - Lymington'''</center><br />
<br />
<br />
Review of action points from last time<br />
<br />
<br />
Greg<br />
<br />
<br />
* Script to generate PCA configurations<br />
* 700 runs on 10 programs from BEEBS: training ~15s<br />
* GC Q: Is the training exponential in the number of programs?<br />
* Communal database needs SC input<br />
* Auto-generation of benchmarks.<br />
<br />
George<br />
<br />
* DejaGnu framework<br />
* ARM tools<br />
* Cuttlefishes made. Programmers needed.<br />
* Plackett-Burman<br />
** Aliasing: not a high priority<br />
** Assuming factors are independent<br />
** Aim is to find the factors that actually matter<br />
** FFD on factors in a cascaded manner to make the number of tests tractable. Estimate ~2000 tests taking ~5h in parallel on 6 boards.<br />
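As a concrete illustration of the screening idea (a sketch, not the project's tooling): the standard 12-run Plackett-Burman design handles up to 11 two-level factors (flags on/off), and each flag's "weighting" is its estimated main effect.

```python
import numpy as np

def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors.
    Rows 0-10 are cyclic shifts of the standard generator; row 11 is all -1."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows, dtype=int)

def main_effects(design, response):
    """Estimated main effect per factor: mean(y | +1) - mean(y | -1)."""
    return np.array([response[design[:, j] == 1].mean()
                     - response[design[:, j] == -1].mean()
                     for j in range(design.shape[1])])
```

Each row is one compile-and-measure run (columns say which flags are on); because the columns are mutually orthogonal, 12 runs suffice to rank 11 flags by main effect, instead of 2^11 exhaustive runs — the same economy the cascaded FFD above relies on at larger scale.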
<br />
Stoil<br />
<br />
* SG: Categorisation of BEEBS V2.0 not started<br />
<br />
Generic issues<br />
<br />
* [AB+JP+GF] Agreed method to compile in pseudo-random data sets in a generic manner<br />
<br />
* GIMPLE SSA headers missing from Ubuntu <nowiki>=> MAGEEC build</nowiki><br />
* [JP+SG] Energy measurement code needs to be compilable.<br />
* [SH] IDD header on measurement boards needs to be taller.<br />
<br />
BEEBS<br />
<br />
* Branching<br />
<br />
Calibration of hardware<br />
<br />
* From tech node analysis, perhaps 2% total variation in system energy (0.4% static power; 1-2% dynamic power) with temperature variation.<br />
* Board-to-board variation needs more analysis.<br />
** [GC+JP] Measure and analyse the variation<br />
* [JP] Actually measure board-to-board variation.<br />
* We may need multiple runs of energy measurements across multiple boards to smooth the probability distribution.<br />
<br />
BEEBS<br />
<br />
* CMSIS-DSP tests<br />
* BEEBS name across repository → “Bristol/Embecosm Energy Benchmark Suite”<br />
* ARM CMSIS Maths + OS functions to add<br />
* Push on self-validation with conditional compilation<br />
** Can turn off for code size analysis<br />
** Validation after the measurement STOP trigger<br />
* AB 'gatekeeper' of github repository. Push requests to him.<br />
<br />
Case studies<br />
<br />
* ARM – built out of CMSIS DSP functions<br />
* CMSIS-RTOS – Keil RTX<br />
<br />
Optimisations on a per-function basis<br />
<br />
* Some kind of profiling to show the per-function distribution of execution costs is needed<br />
** ? Is -O2 a good basis for counting instructions<br />
* We may need to assume that time and energy are proportional for this.<br />
<br />
'''Work going forward'''<br />
<br />
<br />
Stoil<br />
<br />
* Simulation of BEEBS for categorisation<br />
** AVR simulavr simulator → trace<br />
** qemu for ARM for each instruction. Also CGEN & Keil.<br />
** '''[2 week challenge]''' Analyse different evaluations of the BEEBS at three different levels, following this question:<br />
* ??? Do we categorise based on input program (generic) or instruction output (architecture-specific). What does the feature vector (intermediate level) expose from these for the generic level.<br />
** SC will help from 28/07/2014<br />
** gprof for dynamic execution count<br />
** Raspberry Pi may be a good compilation platform.<br />
<br />
George<br />
<br />
* '''[2 weeks] AVR tests to do with new hardware'''<br />
<br />
Greg<br />
<br />
* Start moving on PCA of BEEBS.<br />
** Coordinate with Craig.<br />
<br />
Oliver<br />
<br />
* Short blog post <br />
<br />
Simon<br />
<br />
* Boards quotations<br />
** 100, 250, (500), 1000 volumes<br />
** Lead times<br />
<br />
James<br />
<br />
* Calibration tests on Embecosm boards<br />
<br />
<br />
<br />
[[Category:Meetings]]</div>
<hr />
<div><center>'''MAGEEC Meeting – Bristol – 08 July 2014'''</center><br />
<br />
<center>Present: KIE, GC, GF, SG, A Burgess, JB, SC, JP, SH</center><br />
<br />
<br />
''<nowiki>Note: Actions denoted by people's initials (e.g. [SH])</nowiki>''<br />
<br />
<br />
'''Agenda'''<br />
<br />
* Review of progress of Greg, George and Stoil.<br />
* Discussion of Greg's report on Machine Learning, with follow-up discussions on<br />
* Automatic test generation<br />
* Use of feature vectors throughout the MAGEEC flow. Do we need it and if so, how do we adapt the plugins?<br />
* How to classify 'low' energy reading.<br />
* Clarification of framework interfaces and decisions on Expect and integration approaches - George<br />
* Case study candidate(s) - Stoil<br />
* BEEBS<br />
* Evaluation of status of our software and hardware capabilities<br />
* Do we need to buy anything else?<br />
* Making hardware boards available<br />
* Blog updates and other publicity<br />
* Setting of objectives for the next fortnight.<br />
* Who goes to TSB Energy efficient computing meeting – 28<sup>th</sup> July 2014.<br />
<br />
'''Machine Learning'''<br />
<br />
* The 50+ size of the Milepost feature vector includes many features that may be missing or inappropriate for our programs<br />
* Feature vector is flattened.<br />
* Pass order will be set as outside the scope<br />
* We have a good idea about what new features could be added, but this is outside the MAGEEC scope.<br />
* Programs will assume input data-independence, i.e. inputs fixed across runs<br />
* <nowiki>[GC] </nowiki>''Long-term Summer goal ''– Run PCA on a data set of BEEBS with input data variation. ''To feed into paper on BEEBS with SG. ''Work with Craig on this too.<br />
* <nowiki>[OR] Short blog post on what aspects of the features we are and are not doing</nowiki><br />
* <nowiki>[GC] How long would it take the ML to learn based on 100 programs.</nowiki><br />
* <nowiki>[GC+SC] Set up a communal database for sharing run data.</nowiki><br />
<br />
'''Framework'''<br />
<br />
* MAGEEC framework has working ML inside it now.<br />
* Framework can extract feature vector between passes if wanted, until the representation is lowered towards the ISA.<br />
* The lowest energy to compare MAGEEC's performance against can be the 1000 random compilations, as used by MILEPOST<br />
* Lowest possible energy could be predicted from mathematical extreme limits from random samples.<br />
* <nowiki>[</nowiki>AB+GF] ''Long-term Goal:'' Develop the software framework to be able to use DejaGnu to run and correlate returned energy value with run program. DejaGnu to be used for test build, pass fail detection. This approach gives long-term support to architectures and platforms. Problem with lack of documentation.<br />
* <nowiki>[GF] Documentation for DejaGnu-based framework for MAGEEC. </nowiki>Onto mageec.org.<br />
* <nowiki>[GF+SC] MAGEEC has an interface for the database. This needs to be integrated and tested. Any alterations that need to occur to the database schema to be completed.</nowiki><br />
<br />
'''BEEBS'''<br />
<br />
* Up to 91 tests<br />
* Fix int length assumptions<br />
* Ensure copyright notice on all<br />
* Ensure -Wall compilation has no warnings<br />
* DejaGNU needs integrating<br />
* BEEBS V2.0 release goal: end of August<br />
* <nowiki>[SC] Register beebs.eu</nowiki><br />
* <nowiki>[SG] </nowiki>''Long term Summer goal:'' Review BEEBS for suitability now they are expanded. Paper on BEEBS V2.0<br />
* <nowiki>[GC] Csmith can generate C programs. Can we use auto-generation to cover holes in the feature space</nowiki> w.r.t. the programs we have.<br />
* Given BEEBS, we create a set of points in the feature space.<br />
* <nowiki>[AB] BEEBS branch user guide on wiki</nowiki><br />
* <nowiki>[JP] Calibration. How to calibrate board-to-board variation. Test program set</nowiki><br />
<br />
BEEBS workers ('beebsv2' branch)<br />
<br />
* Andrew Burgess lead<br />
* Greg characterisation<br />
* George – Plackett-Burman<br />
* Simon C – User<br />
* Jeremy - User<br />
* James – Add more benchmarks<br />
* Stoil - Contributor<br />
<br />
'''Case studies'''<br />
<br />
* <nowiki>[SG] </nowiki>In 2 weeks: Tinkerman Weather station – ATMEGA328. Get running on Embecosm's extensive AVR hardware.<br />
* <nowiki>[SG] For 6 weeks later. Get working the </nowiki>OSSI Cubesat code-base. Needs to be very energy efficient.<br />
* <nowiki>[SG] </nowiki>MSP430 target needs to be brought up.<br />
* OSCirrus weather station – ATMega128. <br />
* Novena Open Laptop running battery controller STMF32 with Chibios.<br />
* <nowiki>[SG] Explore further cases where RTOS are working.</nowiki><br />
<br />
'''Evaluation capabilities and scaling'''<br />
<br />
* 225 GCC passes needs 64,000 runs. 100 tests in BEEBS. 2 seconds to flash a chip. Total test 5-10s each. ~800 days of execution time. How to address this problem? H/W parallelism lowers this, but not enough.<br />
* <nowiki>[JP] FFD tool needs expanding to more than its current level of factors </nowiki>to >=225<br />
* What level of flag coverage is needed?<br />
* <nowiki>[</nowiki>JP/OR] Could choose e.g. 12 most important flags and be exhaustive in that space and reduce the number of runs this way.<br />
* [GF+SG] ''Long-term Summer Aim:'' Use the Plackett-Burman design approach to run the most important of the 225 tests per architecture. Aim: a <= one-week run on the number of boards we have.<br />
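The ~800-day figure above follows directly from the quoted numbers; a quick sanity check of the arithmetic, taking the upper end of the 5-10 s per-test estimate:

```python
runs_needed = 64_000        # run configurations quoted for the 225 GCC passes
beebs_tests = 100           # tests in BEEBS
secs_per_test = 10          # ~2 s to flash plus execution, upper estimate
total_days = runs_needed * beebs_tests * secs_per_test / 86_400
assert 700 < total_days < 800   # roughly matches the ~800-day figure above
```

Even six boards in parallel only brings this to ~4 months, which is why a screening design that prunes the run count is needed rather than hardware parallelism alone.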
<br />
'''Hardware needed'''<br />
<br />
* 10 discovery boards needed<br />
* 5 MSP430 to order.<br />
* Components for weather station needed<br />
* Shrimp / Cuttlefish Embecosm → UoB<br />
<br />
'''Making hardware boards available'''<br />
<br />
* <nowiki>[A Back] Yes to Andrew's plan to make 50 available to the general public on an at-cost basis.</nowiki><br />
* <nowiki>[SH] Get quotations and determine cost sweet-spots</nowiki><br />
<br />
'''Publicity'''<br />
<br />
* <nowiki>[GF+SG] Complete blogs from George and Stoil. </nowiki>Send to Andrew Back.<br />
* <nowiki>[KE] ICT Energy Summer school talk.</nowiki><br />
<br />
'''2 week objectives'''<br />
<br />
* Stoil: Get the weather station assembled and running. Co-ordinate with Simon Hollis and James Pallister.<br />
* Greg: Demonstrate BEEBS working with same tests working with different data. Then start report on characterisation of BEEBS. Coordinate with Andrew Burgess.<br />
* George: Head around Plackett-Burman and Deja-GNU. Create a plan for running and integrating Plackett-Burman into Deja-GNU. Co-ordinate with Simon Cook and James Pallister.<br />
<br />
'''Follow-up research list'''<br />
<br />
* Pass-to-pass feature extraction and ML based on the transformations between individual passes.<br />
* '''RQ: '''Assume BEEBS is representative of a class of programs. Are MILEPOST features valid for classifying these? Or do we assume the features are useful? How useful is BEEBS for covering this space? When we find holes in the feature space, is it because such programs do not exist, or because we need to find input programs in that part of the space?<br />
<br />
[[Category:Meetings]]</div>
<hr />
<div><center>'''MAGEEC Meeting at Lymington, 25<sup>th</sup> June 2014'''</center><br />
<br />
<br />
'''Present:''' George, Greg, Stoil, Craig, Oliver, Simon H, Simon C, James P, Jeremy<br />
<br />
<br />
'''To Discuss'''<br />
<br />
<br />
* BEEBS V2 release<br />
* Case studies<br />
* WP5 July: Review of ML approach – i.e. test the previous choice with BEEBS V2<br />
* WP6: Training and testing (Embecosm lead; ?Does Bristol need to help?<nowiki>; overdue)</nowiki><br />
* WP8: Infrastructure evaluation<br />
<br />
<br />
<br />
'''Progress'''<br />
<br />
* <nowiki>[SC] We have a working automatic training script that can take BEEBS and pass it to train the ML.</nowiki><br />
* <nowiki>[SC] Working on </nowiki>completing milestone 6/1. Can be completed without need for Bristol interaction.<br />
<br />
'''Actions and Plan forward'''<br />
<br />
* All to sign up for mailing lists<br />
* Plan for blog posts on each topic. Min 1 per person<br />
* Freenode IRC: #MAGEEC<br />
* BEEBS V2 to be considered by Andrew Burgess. Some weeks off a release.<br />
* <nowiki>[SH]: Request Moon if we can publish his MSc on mageec.org</nowiki><br />
* Increase of training set<br />
* <nowiki>[SC]: Need to consider the dependencies between passes </nowiki>(i.e. what passes must run together) and ensure that the MAGEEC infrastructure maintains these. Is an extra constraint database necessary to express these, rather than just relying on GCC to fail.<br />
* <nowiki>[JR] Needs to put effort to complete milestone 7/2</nowiki><br />
* Deadline for presentation of work 18<sup>th</sup> July:<br />
** WP6/2<br />
*** Embecosm to lead. Discussion at next meeting of proof of concept training results.<br />
** WP5/3<br />
*** Embecosm to lead.<br />
** WP8/1<br />
*** This is the aggregate of work at Bristol over the Summer<br />
** WP8/2<br />
*** <nowiki>[SH/</nowiki>KIE/OR] Short paper on initial results '''''or''''' initial draft of final journal paper.<br />
** WP8/3<br />
*** <nowiki>[SH/KIE/OR] review targets for the final, long journal paper.</nowiki><br />
<br />
'''People allocation'''<br />
<br />
<br />
Stoil:<br />
<br />
* Identification and development of Case Studies, including<br />
** Audio<br />
** Software-defined radio<br />
* BEEBS development<br />
* Add new embedded architectures<br />
* Energy measurement analysis<br />
* Community involvement<br />
* Remote USB power-upper-downer<br />
<br />
George:<br />
<br />
* Measurement to feed-in Framework<br />
* Database integration<br />
* Expect interfaces<br />
* Case studies<br />
<br />
Greg:<br />
<br />
* Review of ML approach<br />
* ML development and evaluation with Milepost Features<br />
* Extension of BEEBS<br />
* Look at PCA again once BEEBS is large enough<br />
<br />
Craig:<br />
<br />
* ILP & PCA<br />
* Paper development<br />
<br />
Pierre:<br />
<br />
* To spend time with Greg, Stoil and Craig some time next week<br />
<br />
[[Category:Meetings]]</div>Simonhttp://mageec.org/w/index.php?title=Meeting_25_June_2014,_Embecosm&diff=555Meeting 25 June 2014, Embecosm2014-06-26T08:15:36Z<p>Simon: Meeting notes 25 June 2014</p>
<hr />
<div><center>'''MAGEEC Meeting at Lymington, 25<sup>th</sup> June 2014'''</center><br />
<br />
<br />
'''Present:''' George, Greg, Stoil, Craig, Oliver, Simon H, Simon C, James P, Jeremy<br />
<br />
<br />
'''To Discuss'''<br />
<br />
<br />
* BEEBS V2 release<br />
* Case studies<br />
* WP5 July: Review of ML approach – i.e. test the previous choice with BEEBS V2<br />
* WP6: Training and testing (Embecosm lead; does Bristol need to help?; overdue)<br />
* WP8: Infrastructure evaluation<br />
<br />
<br />
<br />
'''Progress'''<br />
<br />
* <nowiki>[SC]</nowiki> We have a working automatic training script that can take BEEBS and pass it to the ML for training.<br />
* <nowiki>[SC]</nowiki> Working on completing milestone 6/1. This can be completed without Bristol's involvement.<br />
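The automatic training script mentioned above might take a shape like the sketch below: compile each BEEBS benchmark under each candidate flag set, measure its energy, and collect rows for the machine learner. The compile/measure step is stubbed out and all names (benchmark list, flag sets, function names) are illustrative, not MAGEEC's actual script:

```python
# Hypothetical shape of an automatic training loop over BEEBS. The
# compile/measure step is a stub; a real script would invoke GCC (with
# the MAGEEC plugin) and read the energy measurement board.

BENCHMARKS = ["2dfir", "blowfish", "crc32"]          # illustrative subset
FLAG_SETS = [("-O2",), ("-O2", "-funroll-loops")]    # illustrative choices

def compile_and_measure(bench, flags):
    # Stub: returns a fake energy figure instead of a real measurement.
    return 1.0 + len(flags) * 0.1

def build_training_set():
    rows = []
    for bench in BENCHMARKS:
        for flags in FLAG_SETS:
            energy = compile_and_measure(bench, flags)
            rows.append({"bench": bench, "flags": flags, "energy": energy})
    return rows

rows = build_training_set()
print(len(rows))  # 3 benchmarks x 2 flag sets = 6 rows
```

Each row pairs a program and flag set with a measured energy, which is the form of training data the ML framework consumes.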
<br />
'''Actions and Plan forward'''<br />
<br />
* All to sign up for mailing lists<br />
* Plan for blog posts on each topic. Min 1 per person<br />
* Freenode IRC: #MAGEEC<br />
* BEEBS V2 to be reviewed by Andrew Burgess. It is some weeks from release.<br />
* <nowiki>[SH]: Ask Moon whether we can publish his MSc on mageec.org</nowiki><br />
* Increase of training set<br />
* <nowiki>[SC]: Need to consider the dependencies between passes </nowiki>(i.e. which passes must run together) and ensure that the MAGEEC infrastructure maintains them. Is an extra constraint database necessary to express these, rather than just relying on GCC to fail?<br />
* <nowiki>[JR] Needs to put in effort to complete milestone 7/2</nowiki><br />
* Deadline for presentation of work 18<sup>th</sup> July:<br />
** WP6/2<br />
*** Embecosm to lead. Discussion at next meeting of proof of concept training results.<br />
** WP5/3<br />
*** Embecosm to lead.<br />
** WP8/1<br />
*** This is the aggregate of work at Bristol over the Summer<br />
** WP8/2<br />
*** <nowiki>[SH/</nowiki>KIE/OR] Short paper on initial results '''''or''''' initial draft of final journal paper.<br />
** WP8/3<br />
*** <nowiki>[SH/KIE/OR] review targets for the final, long journal paper.</nowiki><br />
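The pass-dependency constraint raised in the actions above could be captured as a small lookup table checked before a flag set is handed to GCC, rather than relying on the compile to fail. The pass names and dependency pairs below are invented for illustration and do not reflect GCC's real pass structure:

```python
# Hypothetical sketch of a pass-dependency constraint check for the
# MAGEEC infrastructure. Pass names and dependencies are illustrative.

# Map each pass to the passes that must also be enabled when it runs.
DEPENDS_ON = {
    "loop-unroll": {"loop-init"},
    "vectorize": {"loop-init", "alias-analysis"},
}

def violations(enabled):
    """Return (pass, missing-dependency) pairs for an enabled pass set."""
    enabled = set(enabled)
    return [(p, d)
            for p in enabled
            for d in DEPENDS_ON.get(p, set())
            if d not in enabled]

# A flag set chosen by the machine learner can be validated up front:
print(violations({"loop-unroll"}))               # missing loop-init
print(violations({"loop-unroll", "loop-init"}))  # no violations
```

Whether such a table lives in a separate constraint database or alongside the results database is exactly the open question in the action item.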
<br />
'''People allocation'''<br />
<br />
<br />
Stoil:<br />
<br />
* Identification and development of Case Studies, including<br />
** Audio<br />
** Software-defined radio<br />
* BEEBS development<br />
* Add new embedded architectures<br />
* Energy measurement analysis<br />
* Community involvement<br />
* Remote USB power-upper-downer<br />
<br />
George:<br />
<br />
* Measurement feed-in to the framework<br />
* Database integration<br />
* Expect interfaces<br />
* Case studies<br />
<br />
Greg:<br />
<br />
* Review of ML approach<br />
* ML development and evaluation with Milepost Features<br />
* Extension of BEEBS<br />
* Look at PCA again once BEEBS is large enough<br />
<br />
Craig:<br />
<br />
* ILP & PCA<br />
* Paper development<br />
<br />
Pierre:<br />
<br />
* To spend time with Greg, Stoil and Craig some time next week<br />
<br />
[[Category:Meetings]]</div>Simonhttp://mageec.org/w/index.php?title=Meeting-14-11-2013&diff=280Meeting-14-11-20132013-11-14T17:03:56Z<p>Simon: Created page with "= MAGEEC meeting, 14th November, 2013, Lymington = = Present: SC, JP, SH, JB = == Review of previous actions == * ER table complete * Moon has written up but not completed sec..."</p>
<hr />
<div>= MAGEEC meeting, 14th November, 2013, Lymington =<br />
= Present: SC, JP, SH, JB =<br />
== Review of previous actions ==<br />
* ER table complete<br />
* Moon has written up but not completed second blog post<br />
* Wuthering Bytes went well, with larger-than-expected interest in the energy measuring boards.<br />
<br />
== Project progress and plan update ==<br />
* WP2,5,6,8,9 active.<br />
* Need to review the test set deliverable – still too small.<br />
* SC: working on gcc plugin -fplugin=libmageec_gcc.so<br />
** Will configure it so that the machine learner can override any GCC-assumed flags if it wants to set/clear them.<br />
** First implementation is there, but not yet hooked into the machine learner. <br />
*** The hook-in is on target for the end of the month.<br />
** Tests not ready, so deliverable D2.2 may miss its schedule.<br />
* Need extra command-line tool for manipulating results table, but this is not in current deliverable.<br />
* WP4 (training set) needs extending. New deliverable 4.4, the evaluation set. Could be BEEBS V2.<br />
* WP7.1, 7.2 done via published papers and JP's PhD work.<br />
* Risk register updated. Strategy for dealing with new highest risk adopted.<br />
<br />
== Preparation for project review ==<br />
* Review of progress against targets<br />
* LLVM target will not be started by the end of the month. D2.2 deadline moved from 30 Dec to 28 Feb. Instead, the GCC focus will push on getting it to produce some predictions.<br />
** Delay due to ML dependency. The ML interface is now fixed, so this should not have a knock-on effect on subsequent iterations, but it has impacted the completion time of the first iteration.<br />
* D7.1 and D7.2 completed. Documentation will come as part of JP's PhD review on 4<sup>th</sup> December.<br />
* WP3. 3.3 V2 measuring hardware due on 11<sup>th</sup> December.<br />
* WP5: all deliverables completed.<br />
* WP7: D7.1, D7.2 complete. D7.3 underway.<br />
* WP9: Customer and enthusiast interest. FOSDEM. New contracts. Papers, REF impact.<br />
* Evaluation criteria plan for presentation.<br />
<br />
== Flash line optimisation (under WP7) ==<br />
* D7.2 JP has identified how to optimise code for flash<br />
* D7.3 Joern implementation in progress on gcc according to agreed requirements.<br />
* We will want to test for improvement/harm on architectures that do/do not have the optimised-for flash.<br />
* D7.4 question about implementation on architectures that do not support LLVM<br />
<br />
== Actions ==<br />
* SC: Link to wiki page to be added on how to interact with results table.<br />
* SC: 2.2 deadline from 30Dec->28 Feb. Update risk register and work plan.<br />
* JP: post (link/copy) PhD review material for D7.1 and D7.2 when complete.<br />
* SC, JB, SH: review progress on iteration 1 and subsequent iterations for impacts on timescale.<br />
* SC: D1.5. Needs link to a change doc.<br />
* SH: V2 measurement board wiki and blog post. Harness demand and industry interest for the external evaluation theme.<br />
* SC: Update 3.3 due date to 31<sup>st</sup> Jan (for FOSDEM).<br />
* SC: New deliverable 4.4, the evaluation set. <br />
* SH, MG, SC, JB: Schedule a time (5<sup>th</sup> Dec?) to sit down and trawl through benchmark possibilities for a test set<br />
* SC: D5.1-5.4 mark as completed and link to all. Lit page, add link to Moon's work.<br />
* JP: Update D7.1 with information<br />
* SC: set up media links to pull together for WP9.<br />
* AB: Push on community contributions to benchmarks.<br />
* SC: Arrange monthly phone conferences to discuss the new main project risk, on WP4.<br />
* SH: UoB budget report and forecasts.<br />
* JB: Open hardware licence update<br />
* SC: Presentation for project review.<br />
* SH: Chase accounting rules for Embecosm<br />
* JP/SH: Pictures of kit for review presentation.<br />
* JB, SH, JP, KIE, SC: Log talks for FOSDEM in the online system.<br />
* SC: Setup wikipage for sticker ideas.<br />
* AB: Table at FOSDEM: Sat → fully manned. Sunday → one man show<br />
* SH: Extra spot at FOSDEM?<br />
<br />
== FOSDEM ==<br />
* Main track accepted.<br />
<br />
* Devroom topics<br />
<br />
* JB: Introductory talk (non main track version)<br />
* SH: The physics of energy usage<br />
* JP: Energy measurement hardware<br />
* KIE: ENTRA<br />
* Hayden: EACOF<br />
* SC: MAGEEC<br />
* Open Low Power Devices, Emilio Monti (mbed)<br />
* Workshop<br />
<br />
[[Category:Meetings]]</div>Simonhttp://mageec.org/w/index.php?title=Meeting_09-10-2013&diff=257Meeting 09-10-20132013-10-09T14:01:21Z<p>Simon: </p>
<hr />
<div><center>'''MAGEEC Meeting 09/10/2013'''</center><br />
<br />
<center>Present: JB, SJH, AB, KIE, Hayden Fields, AW, MG, Craig Blackmore</center><br />
<br />
<br />
== Planning and management ==<br />
* Current progress slightly behind schedule<br />
** Simon C will increase hours in October<br />
** Joern will begin work soon<br />
<br />
== Exploitation ==<br />
* Power measurement board has some commercial interest from Wuthering Bytes<br />
** Paul Tanner<br />
** Ken Boak<br />
** We should produce a batch for sale/donation to interested parties<br />
<br />
== Benchmark sources (see also note on Moon's discussion) ==<br />
* Could ask the community to run BEEBS on multiple platforms<br />
* Could ask community to supply applications as benchmarks<br />
* LLVM regression test suite<br />
** Coreutils have some library dependencies and some are small<br />
*** Suggestion to select those that are usable and large enough<br />
** LLVM regression tests<br />
* GDB regression tests<br />
* Initially, we need a large set of test programs for a single platform, then extend.<br />
* Dhrystone<br />
* osadl.org<br />
* nqueens type<br />
* From post: [https://mailman.cs.umd.edu/pipermail/otter-dev/2011-January/000521.html https://mailman.cs.umd.edu/pipermail/otter-dev/2011-January/000521.html]<br />
<br />
** Coreutils (KLEE)<br />
** Busybox<br />
** Minix utilities<br />
** HiStar<br />
** SQLite (Execution Synthesis)<br />
** ghttpd<br />
** HawkNL<br />
** SGLIB (CUTE)<br />
** vim (Hybrid concolic testing)<br />
** oSIP (DART)<br />
<br />
<br />
== FOSDEM ==<br />
* Devroom proposal has been accepted<br />
** Nominally 9am-4pm on Sunday<br />
** May be a tiered lecture room<br />
* Should the day be broken into themed blocks?<br />
** e.g. Energy measurement<br />
** Code optimisation<br />
** Transparency<br />
** …<br />
* Workshop timing after lunch<br />
* Proposal for technical talk is still under consideration (Dec decision)<br />
* TSB contribution to recognise UK gov + Open source and how to support them. <br />
* We should issue a call for contributions<br />
** Looking for talks and hands-on demonstrations<br />
** Esp. check if Prof Luca Benini's group is interested in contributing<br />
* Call contents:<br />
** Lightning talks<br />
** Suggested parts of workshops<br />
** Presentations<br />
** Demonstrations<br />
** Out through at least: HiPEAC, OSHUG, BCSOSSIG, TSB, EACO, research community<br />
* '''Might we want a MAGEEC table on Saturday (in corridors) to pull interest into Sunday'''<br />
* Energy-measurement board workshops (need boards + numbers cap + iterations)<br />
** Could be AVR/Shrimping before the day, with kits provided beforehand<br />
** Could be ARM+Energy measurement boards hackathon on the day<br />
* Competition element<br />
** KIE suggests delaying to next year<br />
** SJH thinks we may be able to do it on the day<br />
** JB thinks that it could be beforehand or not easy to do this year<br />
** AB wonders if it could be online<br />
** Majority view seems to be to delay<br />
* Lightning talks?<br />
* Birds-of-a-feather session<br />
** Shall we encourage other people we know to attend<br />
** European 'experts' in energy efficiency expected<br />
* BEEBS<br />
** Introduction + a 'bring along your code' session where we can feed more applications into the MAGEEC project.<br />
** Could we run a re-engineering for optimisation game session<br />
* EACOF demo<br />
** Hayden has a 10-15 min presentation<br />
** Linux + OSX support<br />
*** Could possibly be packaged up for a workshop<br />
**** Jeremy has concerns over the work involved for this<br />
***** Perhaps it should be a demo + one-to-one 'guru' session for you to install on your own machine.<br />
** SAC paper in the works<br/> <br />
<br />
<br />
'''ACTION: AB: Draft call for participation this week for dissemination next week. Arrange a phone call to discuss the draft.'''<br />
<br />
<br />
== ML Discussion (Moon) ==<br />
* J48 strongly indicated as best ML approach going forward<br />
* Moon will continue with work as project, and LLVM target<br />
* Need to extend test suites (see benchmarks section)<br />
* '''Action: Moon to make a blog post on the literature review '''<br />
<br />
== BEEBS ==<br />
* Do we need a notion of scaling factors for BEEBS on smaller/larger platforms.<br />
** Could drive towards energy-proportional computing<br />
* Do we need to fix the implementation for a given architecture in terms of data sizes etc.<br />
* '''Extensions:''' Do we want to re-write BEEBS for platforms to remove usage of libraries (C standard libraries)<br />
** JB: Could be a lot of work for nothing, but libraries should be compiled with same options as the BEEBS code.<br />
*** Official libraries have probably been optimised for the architecture.<br />
** KIE: Could require the user to know about the external library. e.g. -dIncludeExternalLibrary, to make them aware.<br />
*** Or provide both standalone and libraried code.<br />
** SH: What difference in energy from running with included libraries vs all in the source file (i.e. can global analysis do better with all of the code). JP to experiment on this with BEEBS.<br />
** Consensus: need both at this stage.<br />
** '''Action: JP to explore.'''<br />
<br />
== Database ==<br />
* Oliver gave a very useful presentation on potential database schemas<br />
* Selected Approach 4, with a tweak so that the RH table is keyed on a hash of the option sequences. This allows fast checking of whether a flag set already exists.<br />
** Weak hash OK, since checking for collisions is low cost.<br />
* '''Action: Simon C to review Oliver's presentation and schema and suggest how to integrate into the framework.'''<br />
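One way to realise the hash-keyed table above is sketched below: the option sequence is hashed to give a fast lookup key, and on a hit the stored sequence itself is compared to rule out collisions, which is why a weak hash suffices. The table and column names are illustrative, not the schema from Oliver's presentation:

```python
# Illustrative sketch of a results table keyed on a weak hash of the
# option sequence, with a cheap collision check on lookup.
import sqlite3
import zlib  # crc32 serves as a deliberately weak/cheap hash

def flag_key(flags):
    """Cheap hash of an option sequence; collisions are tolerated
    because the full sequence is checked on lookup."""
    return zlib.crc32(" ".join(flags).encode())

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runs (key INTEGER, flags TEXT, energy REAL)")

def record(flags, energy):
    db.execute("INSERT INTO runs VALUES (?, ?, ?)",
               (flag_key(flags), " ".join(flags), energy))

def lookup(flags):
    # Fast probe by hash key, then low-cost collision check against the
    # stored sequence itself.
    rows = db.execute("SELECT flags, energy FROM runs WHERE key = ?",
                      (flag_key(flags),))
    return [e for f, e in rows if f == " ".join(flags)]

record(["-O2", "-funroll-loops"], 1.23)
print(lookup(["-O2", "-funroll-loops"]))  # → [1.23]
print(lookup(["-O3"]))                    # → []
```

An index on the integer key column would make the probe fast even with many stored flag sets, while the string comparison on each candidate row stays cheap.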
<br />
== Student projects (Craig) ==<br />
* Interests in benchmark collation and evaluation<br />
** Why does X have the impact it does<br />
** How could an architecture/compiler be optimised based on what we find out<br />
* Declarative learning<br />
** KIE suggests follow on from Bin Tao's work<br />
*** SJH suggests moving to Beaglebone to simultaneously gather performance and energy<br />
* SJH: Energy modelling<br />
** Reverse build from MAGEEC data and James-type analysis of traces.<br />
** ARM energy modelling<br />
** JB: OpenRISC, combined with WAATCH, is one way<br />
* KIE: ENTRA work pushing up ISA model to IR for multi-threaded scheduling for optimality<br />
<br />
[[Category:Meetings]]</div>Simonhttp://mageec.org/w/index.php?title=Meeting_04-09-2013&diff=236Meeting 04-09-20132013-09-04T16:00:58Z<p>Simon: /* Action Points from meeting 04/09/2013 */</p>
<hr />
<div>== Action Points from meeting 04/09/2013 ==<br />
<br />
* [JB,SH,KIE] Prepare FOSDEM Proposal<br />
** Deadline 13th September<br />
** [JB] Contact Grigory and Albert, TSB, David Greaves. Create first draft of proposal.<br />
** [KIE] Contact Mike O'boyle, Paul Beskeen, EACOF, EACOP, EEC-SIG. Arrange funding: Andrew<br />
** [SH] Arrange funding: Caroline, <br />
** [JP] Summary of Pre-instrumented hardware demo content for proposal. Solder up hardware in Lymington.<br />
<br />
<br />
* [JP,KIE] Sort out entity relationship diagram<br />
<br />
* [MG,KIE,OR] Write-up of Summer work.<br />
** Cross between a tech report and the first half of an eventual paper.<br />
** Aim to end at the part where we show how we have identified candidate SVM,J?? and KNN algorithms. Aim to generalise to show how these, with further work can be levered to provide solutions to MAGEEC and similar problems.<br />
** To discuss further on Friday with KIE and OR<br />
<br />
* [JP] Wuthering Bytes<br />
** Formulate the workshop</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_5.2&diff=201Deliverable 5.22013-08-29T12:34:36Z<p>Simon: Created page with "Category:Deliverables =Deliverable 5.2: Selection of Core Machine Learning Algorithms= ==Status: Ongoing, options identified== Via experimentaion, primarily using the W..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 5.2: Selection of Core Machine Learning Algorithms=<br />
<br />
==Status: Ongoing, options identified==<br />
<br />
Via experimentation, primarily using the WEKA framework, along with discussions with the MILEPOST team, we have already identified the following as possible core machine learning algorithms:<br />
<br />
* Bullet points<br />
* for Moon to fill in<br />
<br />
Of these, the following have been discarded as unsuitable:<br />
<br />
<br />
<br />
This leaves a choice between X, Y & Z to be made by the end of September. We will do this taking into account ...</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_5.1&diff=200Deliverable 5.12013-08-29T12:34:09Z<p>Simon: Created page with "Category:Deliverables =Deliverable 5.1: Machine Learning Literature Review= ==Status: Ongoing== Over the project, so far, we have identified a good deal of relevant mac..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 5.1: Machine Learning Literature Review=<br />
<br />
==Status: Ongoing==<br />
<br />
Over the project, so far, we have identified a good deal of relevant machine learning literature. Much is detailed in terms of references to works on [http://mageec.org/wiki/Literature the literature wiki page]. A number of practical techniques have been identified and what remains is to write this up into a review in the form of a technical note.</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_4.3&diff=199Deliverable 4.32013-08-29T12:32:36Z<p>Simon: Created page with "Category:Deliverables =Deliverable 4.3: Embedded system set-up= ==Status: Complete, can be extended with further systems== In this deliverable, we show the physical man..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 4.3: Embedded system set-up=<br />
<br />
==Status: Complete, can be extended with further systems==<br />
<br />
In this deliverable, we show the physical manifestation of an embedded system running test code from Deliverables 4.1 and 4.2.<br />
<br />
The best way to show this is with some media, so here is a photograph of an ARM Cortex-M3 platform under test.<br />
<br />
[[File:Embedded_System_Set-up.jpg]]<br />
<br />
and, here is a video of the code running and producing some output<br />
<br />
**VIDEO**<br />
<br />
The above setup is generalisable to many systems, including XMOS, AVR and other ARM platforms.</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_4.2&diff=198Deliverable 4.22013-08-29T12:28:42Z<p>Simon: Created page with "Category:Deliverables =Deliverable 4.2: Case study source= ==Status: Ongoing== The case studies are intended to bring together a wider set of source and application cod..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 4.2: Case study source=<br />
<br />
==Status: Ongoing==<br />
<br />
The case studies are intended to bring together a wider set of source and application code than for the training set alone. With the case studies, we also target users external to the MAGEEC project, with the aim of capturing as diverse and extensive a code base as possible.<br />
<br />
Internally, we are considering the use of some of the potential training set expansions.<br />
<br />
On the external front, we are engaging the community via the website, mailing list and contracted services of AB Open, and are encouraging code submission to the project. As of Aug 2013, we have not yet received any code but will leave this avenue open.<br />
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 4.1: Training set source=<br />
<br />
==Status: Core Complete; Potential for extensions==<br />
<br />
In this deliverable, we aim to supply a large training set of data that the Machine Learning framework can use to learn the relevant features that connect program code and its energy consumption.<br />
<br />
We decided that the best approach was to first produce a core set of applications that span the intended embedded application space, before extending it with a larger and larger code base in later phases of the project.<br />
<br />
<br />
===Core source set===<br />
<br />
We decided that the core set should be a self-contained benchmark suite. To this end, we developed the [http://www.cs.bris.ac.uk/Research/Micro/beebs.jsp BEEBS benchmark suite], which includes 10 core applications from across the embedded application space.<br />
<br />
BEEBS has then been selected as our core training set source and is again released under an open source license [https://github.com/mageec/lowpower-benchmarks on the MAGEEC github site].<br />
<br />
===Future extension potential===<br />
<br />
Initial applications of the core source set to ML systems have indicated that the ML training will perform better if a larger quantity of input programs is available. Therefore, it is likely that we will want to extend the source set at a later date to improve the performance of the overall MAGEEC system.<br />
<br />
To this end, we have identified a number of potential expansions to the code base, including:<br />
<br />
* GCC regression suite<br />
* LLVM nightly test suite<br />
* GNU coreutils<br />
* Linux build essentials<br />
* Mibench (non-BEEBS tests)<br />
<br />
These will be investigated alongside the framework development.</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_3.3&diff=196Deliverable 3.32013-08-29T12:27:20Z<p>Simon: Created page with "Category:Deliverables =Deliverable 3.3: V2 Power measurement hardware= ==Status: Planned== When experimenting with the MAGEEC power measurement hardware, we discovered ..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 3.3: V2 Power measurement hardware=<br />
<br />
==Status: Planned==<br />
<br />
When experimenting with the MAGEEC power measurement hardware, we discovered that, whilst it provides accurate results, there are a number of areas for improvement in the design to support the simultaneous measurement of multiple target platforms, particularly those that are not the ARM-M3-based measurement host.<br />
<br />
Implementing a revised version of the design would thus increase the rate at which we can profile the energy consumption of code and improve the throughput of testing. The accuracy and quality of results would be unchanged.<br />
<br />
Given that we know exactly what to do and have a base design to start from, producing a V2 hardware will be straightforward and the time required to do so low. We will, however, delay the start of this work until we have gathered as much feedback as possible about the use cases and features of the existing board. This will ensure that we make the right changes in V2.</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_3.2&diff=195Deliverable 3.22013-08-29T12:26:40Z<p>Simon: Created page with "Category:Deliverables =Deliverable 3.2: Working power measurement hardware= ==Status: Complete== The MAGEEC power measurement hardware has been manufactured and is worki..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 3.2: Working power measurement hardware=<br />
==Status: Complete==<br />
<br />
The MAGEEC power measurement hardware has been manufactured, is working, and is available for experimentation. <br />
<br />
Here is an image of the hardware:<br />
<br />
We have also created a YouTube video of it being used in combination with an experimental version of the energy testing framework and the BEEBS benchmarks.<br />
<br />
The hardware has been tested; it is compatible with our workflow and gives accurate results.</div>Simonhttp://mageec.org/w/index.php?title=Deliverable_3.1&diff=194Deliverable 3.12013-08-29T12:25:58Z<p>Simon: Created page with "Category:Deliverables =Deliverable 3.1: Board Design Documentation= ==This deliverable has been completed on 28th August 2013== The deliverable includes the open-source..."</p>
<hr />
<div>[[Category:Deliverables]]<br />
<br />
=Deliverable 3.1: Board Design Documentation=<br />
<br />
==Status: Complete (28th August 2013)==<br />
<br />
The deliverable includes the open-sourced design for the hardware measurement board, along with instructions on how to use it and how external parties may re-create the same design.<br />
<br />
All of the information has been captured in a git repository, which is associated with the MAGEEC project.<br />
<br />
The repository is externally hosted as a [https://github.com/mageec/powersense-shield GitHub repository].</div>Simonhttp://mageec.org/w/index.php?title=Milestone_3.1&diff=193Milestone 3.12013-08-29T12:22:35Z<p>Simon: Blanked the page</p>
<hr />
<div></div>Simonhttp://mageec.org/w/index.php?title=Milestone_3.1&diff=189Milestone 3.12013-08-29T08:41:37Z<p>Simon: Created page with "=Milestone 3.1: Board Design Documentation= ==This deliverable has been completed on 28th August 2013== The deliverable includes the open-sourced design for the hardware mea..."</p>
<hr />
<div>=Milestone 3.1: Board Design Documentation=<br />
<br />
==Status: Complete (28th August 2013)==<br />
<br />
The deliverable includes the open-sourced design for the hardware measurement board, along with instructions on how to use it and how external parties may re-create the same design.<br />
<br />
All of the information has been captured in a git repository, which is associated with the MAGEEC project.<br />
<br />
The repository is externally hosted as a [https://github.com/mageec/powersense-shield GitHub repository].</div>Simonhttp://mageec.org/w/index.php?title=MAGEEC&diff=176MAGEEC2013-08-21T15:34:09Z<p>Simon: /* Project meetings */</p>
<hr />
<div>Welcome to the Wiki for the MAchine Guided Energy Efficient Compilation Project (MAGEEC).<br />
<br />
This wiki uses the category system to group pages. The tabs above will take you to the main categories.<br />
<br />
== Getting Involved ==<br />
<br />
=== Wiki ===<br />
<br />
You can register for the wiki [http://mageec.org/wordpress/wp-register.php here]. Please use the wiki category system with any new pages, since that makes the index more useful.<br />
<br />
Standard Wikipedia formatting conventions apply here. Only the first letter of page names and section headings should be capitalized. Pages should only use heading level 2 and below.<br />
<br />
=== Mailing lists ===<br />
<br />
* The main mageec mailing list is [http://mageec.org/cgi-bin/mailman/listinfo/mageec mageec@mageec.org]. Anyone can join, and this is where most work is discussed.<br />
* The research team at Embecosm and Bristol University have [mailto:mageec-magicians@sympa.bristol.ac.uk an internal mailing list]. Nothing especially secret here&mdash;just for issues it would be inappropriate to share with the entire community.<br />
<br />
=== IRC ===<br />
<br />
Day to day discussion is on channel #mageec at freenode.net. You can join by clicking [irc://irc.freenode.com:6667/mageec here]. The entire discussion is archived [http://mageec.org/irclogs here].<br />
<br />
=== Events ===<br />
<br />
Upcoming events:<br />
<br />
Past events:<br />
* Jeremy Bennett spoke at the [https://connect.innovateuk.org/web/eec/events-view/-/events/6715007 Energy Efficient Computing SIG Annual Event] <br />
**[[Media:Tsb-eec-mageec-18-jul-13.pdf|slides (PDF)]] [[Media:Tsb-eec-mageec-18-jul-13.odp|(ODP)]]<br />
* [http://gcc.gnu.org/wiki/cauldron2013 GNU Tools Cauldron 2013].<br />
** James Pallister's presentation ''The Impact of Different Compiler Options on Energy Consumption'' [[Media:JamesCauldron2013.pdf|slides]] and [http://www.youtube.ca/watch?v=Y-Hr8pCAtaM&list=PLsgS8fWwKJZhrjVEN7tsQyj2nLb5z0n70&index=23 video].<br />
** Jeremy Bennett and Simon Cook's presentation ''MAGEEC: MAchine Guided Energy Efficient Compilation'' [[Media:2013-07-13 MAGEEC (Cauldron).pdf|slides (PDF)]] [[Media:2013-07-13 MAGEEC (Cauldron) Slides.odp|(ODP)]] and [http://www.youtube.ca/watch?v=ysOVgWptNgY&list=PLsgS8fWwKJZhrjVEN7tsQyj2nLb5z0n70&index=17 video].<br />
<br />
== Design and Implementation ==<br />
<br />
All design and implementation documents are in the [[:Category:Design|Design category]].<br />
<br />
Software Design:<br />
* [[Design_overview|Overview of the design]].<br />
* [[Interface Flow|Interface flow]].<br />
<br />
Hardware Design:<br />
* [[Power Sensing Board|Power sensing board]].<br />
<br />
=== Download ===<br />
<br />
Software and hardware designs can be downloaded from the mageec GitHub repositories.<br />
<br />
=== Previous Work ===<br />
<br />
MAGEEC draws heavily on MILEPOST:<br />
* [[Installing MILEPOST]]<br />
<br />
== Research ==<br />
<br />
Related research: [[Literature|literature]].<br />
Current [[Research Questions|research questions]].<br />
<br />
== Planning and organization ==<br />
<br />
=== People ===<br />
<br />
* [[User:Jeremybennett|Jeremy Bennett]], Embecosm. Project Manager<br />
* [[User:Simon|Simon Hollis]], Bristol University. Project lead at Bristol University.<br />
* [[User:Simoncook|Simon Cook]], Embecosm. Project lead engineer.<br />
* [[User:Andrew|Andrew Back]], AB Open. Community Manager.<br />
* [[User:Kerstin|Kerstin Eder]], Bristol University.<br />
* [[User:James|James Pallister]], Bristol University.<br />
* [[User:Munaaf|Munaaf Ghumran]], Bristol University.<br />
* [[User:AWhetter|Ashley Whetter]], Bristol University.<br />
* [[User:Joern|Joern Rennecke]], Embecosm.<br />
<br />
=== Project Plan ===<br />
<br />
The project plan is a living document. You can see both the current version and history of the components:<br />
* [[Project Plan|Project plan]] (which lists all the work packages)<br />
** [[Project_Plan#Gantt_Chart|Gantt chart]]<br />
* [[Milestones]]<br />
* [[Risk Register|Risk register]]<br />
<br />
All planning documents are in the [[:Category:Planning|Planning category]].<br />
<br />
=== Project meetings ===<br />
<br />
The project team meets regularly to manage the project.<br />
* [[Meeting_01-07-2013|Meeting 1 July 2013, Embecosm]]<br />
* [[Meeting 22-07-2013|Meeting 22 July 2013, UoB]]<br />
* [[Meeting_31-07-2013|Meeting 31 July 2013, Embecosm]]<br />
* [[Meeting-21st_August_2013 | Meeting 21 August 2013, UoB]]</div>Simonhttp://mageec.org/w/index.php?title=Meeting_21-08-2013&diff=174Meeting 21-08-20132013-08-21T15:33:20Z<p>Simon: Created page with "= Meeting at Bristol: 21st August 2013 = == Present: KIE, MG, AW, JB, JP, SC, SH == # Technical items &nbsp;&nbsp; - update on the interface (Simon C to lead) * ** Discussio..."</p>
<hr />
<div>= Meeting at Bristol: 21st August 2013 =<br />
== Present: KIE, MG, AW, JB, JP, SC, SH ==<br />
'''Technical items'''<br />
<br />
'''Update on the interface (Simon C to lead)'''<br />
* Discussion of the results database in the framework:<br />
** Distinct databases for different compilers<br />
** Compilation options should include the source code<br />
** An entity relationship diagram would help map e.g. testID->runID<br />
** runID+testID will generate a unique key for the top table<br />
** index is also a unique key.<br />
** One of the two could be eliminated.<br />
* Discussion of the data gathering phase (Simon C, James, Ashley)<br />
** Top level framework gains specified flags (Q: how to select tests from them?)<br />
** Interface to GDB should be “MI” not RPI.<br />
** Heated discussion about best location for the power data to feed through into the GDB runners, and the most appropriate language for implementation (Boost python vs Tcl + C++)<br />
*** Similarities to GDB regression.<br />
*** Expect was discussed<br />
**** Can solve timeout problem too<br />
*** DejaGNU test suite is migrating to python<br />
*** JB was concerned that our flow might not be accepted by the community.<br />
* Demo of energy capture framework (Ashley)<br />
** Ashley gave demo of existing implementation<br />
*** Good start.<br />
*** Monitor commands may be ignored.<br />
*** Need to ensure that the framework is generic for platforms we have not yet used.<br />
*** Expect may solve lots of coding work here.<br />
*** Config file strategy is a good one.<br />
* Instrumenting the Embecosm kit (James)<br />
** AVR first target<br />
*** 8-bit and “different”<br />
** MAGEEC on SDCC compiler on ?8051?<br />
*** Potential UoB group project in this area<br />
* Acceptance testing of initial framework and user flow<br />
** Embecosm guys to take away a copy of the kit to give feedback on the code base and current approach.<br />
** Initial acceptance of the current framework implementation to tie up Ashley's current efforts. No immediate requirement for porting to MI or Expect.<br />
** Tidy up with extra wiki pages, blog posts and a video of the demo.<br/> <br />
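The results database discussed at the start of this section could be sketched as follows; note the runID+testID pair expressed directly as a composite key on the top table. All table and column names here are illustrative placeholders, not the agreed schema:<br />

```python
import sqlite3

# Illustrative schema only: the real column set was still being specified.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE results (
    run_id   INTEGER NOT NULL,          -- identifies one measurement run
    test_id  INTEGER NOT NULL,          -- identifies the benchmark/test
    flags    TEXT    NOT NULL,          -- compilation options used
    energy_j REAL,                      -- measured energy in joules
    PRIMARY KEY (run_id, test_id)       -- runID+testID is the unique key
);
""")
conn.execute("INSERT INTO results VALUES (1, 7, '-O2', 0.042)")

# A second row with the same (run_id, test_id) pair is rejected,
# which is what makes the composite key usable as the top-table key.
try:
    conn.execute("INSERT INTO results VALUES (1, 7, '-O3', 0.040)")
    print("duplicate accepted")
except sqlite3.IntegrityError:
    print("duplicate (run_id, test_id) rejected")
```

An entity relationship diagram, as suggested above, would then document how testID and runID map onto the remaining tables.<br />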
<br />
* Moon's ML update<br />
** Been in contact with MILEPOST authors.<br />
*** Felt that PCA was not a good approach – lost features<br />
*** Decision trees were promoted for speed and simplicity by the other experts.<br />
** Support-vector models being explored.<br />
** How to feed the energy data into the framework – where does it come from initially? Current sample size too small to do decent learning.<br />
'''Public side'''<br />
<br />
* Andrew's licensing and open data questions<br />
** Adopted CC BY-SA 3.0 for hardware.<br />
** Data sets to GPLv3<br />
** Open hardware logo (and license?!) on any new hardware adopted.<br />
** Prototype BEEBS web page (email sent about it). Any comments welcomed.<br />
* MAGEEC public mailing list; working with open source embedded projects<br />
** We decided to allow external contributions, provided they are supplied under an identical license to the underlying project work.<br />
** Data contributions should be siloed by producer.<br />
<br />
'''Preparation for quarterly review'''<br />
<br />
* Review on the 4th; Ashley will be away, Moon will be here.<br />
* Simon C will take us through the steps.<br />
* Review milestones, work packages and the risk register.<br />
* We'll need a first cut at the finance report and budgeting.<br />
* Flickr stream available for photos<br/> <br />
<br />
<br />
* Could we work with open source embedded projects to mutual benefit? I.e. we help them increase their energy-efficiency and benefit from an exemplar use case. One possible candidate could be OpenEnergyMonitor(for their battery powered emonTX wireless sensor board). [http://openenergymonitor.org/emon/ http://openenergymonitor.org/emon/]<br />
** There was substantial enthusiasm for the general idea of working with related projects. The OpenEnergyMonitor does not look immediately compatible with our project goals, but we are willing to explore other candidates, as well as this one, in some more detail.<br />
* New open mailing list @mageec.org<br />
** Use for technical discussions<br />
* mageec.org structure needs an overhaul<br />
** Home page should be a description of the project.<br />
** Indices should be moved, e.g. Design -> design overview, not an index.<br/> <br />
<br />
'''Latest versions of power and benchmark papers''' (discussed after the MAGEEC meeting)<br />
* Discussed; updates will be emailed.<br />
<br />
Actions:<br />
<br />
* <nowiki>[JP,SC,AW,KIE,SH] clarify database elements and make a final specification of the necessary columns and entries.</nowiki><br />
* <nowiki>[AW] migrate GDB interface to MI from RPI.</nowiki><br />
* <nowiki>[AW] migrate python to expect.</nowiki><br />
* <nowiki>[JB, AW, JP, SC] work on pitch and solution for the GDB interfaces</nowiki><br />
* <nowiki>[JP] look at SDCC compiler</nowiki><br />
* <nowiki>[JP] get AVR working at Embecosm end</nowiki><br />
* <nowiki>[AW] blog post on benchmarks used</nowiki><br />
* <nowiki>[All] sign up for new mageec.org mailing list at: </nowiki>[http://mageec.org/cgi-bin/mailman/listinfo/mageec http://mageec.org/cgi-bin/mailman/listinfo/mageec]<br />
* <nowiki>[AB,JB] restructure mageec.org navigation</nowiki><br />
* <nowiki>[SH] write next blog post.</nowiki><br />
* <nowiki>[SH] prepare project expenditure and forecast in line with reporting headings</nowiki><br />
* <nowiki>[SC] exploitation plan to wiki</nowiki><br />
* <nowiki>[JP] publish benchmark paper V1 on arXiv by Thursday evening.</nowiki><br />
<br />
[[Category:Meetings]]</div>Simonhttp://mageec.org/w/index.php?title=Meeting_22-07-2013&diff=123Meeting 22-07-20132013-07-22T12:44:42Z<p>Simon: </p>
<hr />
<div><center>'''Meeting UoB 22 July 2013'''</center><br />
<br />
<br />
<center>Present: JB, SC, MG, AW, JP, SH, KE, OR</center><br />
<br />
<br />
'''Hardware Energy Monitoring Report (Ashley)'''<br />
<br />
* Slow progress due to needing to get the hardware working<br />
** Software installation / OS issues<br />
* Moved to using the previous version (V2) energy-monitor boards, since they are more suitable.<br />
* Problems using V3 to measure external device energy consumption:<br />
** V3 is capable, but the additional soldering required, the lack of a voltage divider and the need for external resistors make it harder to use than V2.<br />
* Benchmarks working and verification of their correctness being worked on.<br />
** TODO: Now a priority to push on internal verification code to the tests.<br />
*** e.g. compare outputs to pre-computed correct and return result.<br />
* When running the benchmarks, we should apply techniques such as <tt>extern</tt>s or <tt>volatile</tt>s that can store e.g. the final result.<br />
** Dijkstra program not working, others are.<br />
* TODO: Get last year's benchmark work (BEEBS / BBS) out there with more oomph (website, publicity, workshops...)<br />
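The internal verification pushed for above boils down to comparing each benchmark's output with a pre-computed correct result and returning pass/fail. A minimal sketch of the idea (the workload and expected value below are invented for illustration, not taken from BEEBS):<br />

```python
# Sketch of "compare outputs to pre-computed correct and return result".
# The workload and expected value here are invented examples.

def benchmark_body():
    # Stand-in workload: sum of the first 100 squares.
    return sum(i * i for i in range(100))

EXPECTED = 328350  # pre-computed correct result, baked into the test

def verify():
    # Return 0 on success, non-zero on failure, like a process exit code.
    return 0 if benchmark_body() == EXPECTED else 1

print("PASS" if verify() == 0 else "FAIL")  # prints "PASS"
```

In the C benchmarks themselves, storing the result in a <tt>volatile</tt> variable (as noted above) additionally stops the compiler optimising the computation away.<br />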
<br />
* James will give a talk at OSHCAMP on 13th-14th September on energy monitoring board design.<br />
* TODO: Run code that straddles flash banks and runs on flash to see differences in energy → feeds well into 'discovery' phase of ML. <br />
<br />
'''MAGEEC Blog posts'''<br />
<br />
* Weekly (ish) blogs, from rotating members of the team.<br />
* TODO: 22/07/2013: Jeremy this week for an intro<br />
* TODO: 29/07/2013: Ashley next week for intro to energy monitoring hardware.<br />
** Ensure the draft is saved.<br />
** Andrew will perform the final publishing<br />
* TODO: 05/08/2013: Moon, intro to his work.<br />
<br />
'''Compiler Framework'''<br />
<br />
* No new update on implementation, due to attending GCC meeting etc.<br />
* GCC meeting feedback is that the HPC community (inc. LLL) is very interested in learning more about low power. <br />
* Looked at the research questions posted from UoB discussion on the wiki.<br />
** Profile-directed optimisation is very powerful, perhaps the most powerful<br />
** Joern to be quizzed by James about the kinds of profile data that comes out, since it can impact the machine learning.<br />
*** Can some examples of data be generated so that they can be represented for ML learning?<br />
<br />
'''MILEPOST Approach'''<br />
<br />
* <nowiki>Feature vector normalised to the range [0, 1], using the number of instructions as the divisor.</nowiki><br />
* Ran 1000 random on/off flags, then kept the top 5% of previously trained data.<br />
* Question on whether or not the flags are orthogonal.<br />
* MSc student is addressing:<br />
** Taking James' flags of significance, isolating these and testing them exhaustively <br />
** Data set available by end of week.<br />
** Performance as metric.<br />
* <nowiki>TODO: Paper on James' previous work? Of 130 flags in GCC only 13 make a difference [in our scenarios]. We want to understand why.</nowiki><br />
* Consider systematic (FFD) vs exhaustive vs random selection and their effectiveness.<br />
** Part of the MSc work will address this.<br />
* Moon investigating which of the MILEPOST vectors are and are not useful. Needs a large data set of 150+ applications to check them.<br />
** Work on WCET to extract additional programs to help with this.<br />
** Look (longer term) at HPC space to augment these.<br />
* MILEPOST approach summary: run with 1000 random flag combinations; having discarded invalid flags, take the top 100 and re-run, accumulating stats showing which flags appear in the best sets between iterations.<br />
* There are techniques for looking at flag dependencies, but they need further investigation.<br />
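The iterative search summarised above can be sketched as follows. The flag names and the cost function are placeholders standing in for real GCC flags and real benchmark measurements:<br />

```python
import random

# Placeholder flag universe; real runs would use actual GCC -f options.
FLAGS = [f"-fopt-{i}" for i in range(20)]

def measure(flag_set):
    # Stand-in for compiling and measuring a benchmark: a hidden subset
    # of "good" flags lowers the cost (lower is better).
    good = {f for f in FLAGS if f.endswith(("0", "5"))}
    return 100.0 - 3.0 * len(set(flag_set) & good)

random.seed(0)

# 1. Run with many random on/off flag combinations.
population = [[f for f in FLAGS if random.random() < 0.5] for _ in range(1000)]

# 2. Keep the top 5% by measured cost.
population.sort(key=measure)
top = population[: len(population) // 20]

# 3. Accumulate stats on which flags appear in the best sets.
counts = {f: sum(f in s for s in top) for f in FLAGS}
best_flags = sorted(FLAGS, key=counts.get, reverse=True)[:4]
print(best_flags)
```

Subsequent iterations would re-seed the random flag sets from these accumulated statistics, as in the MILEPOST summary above.<br />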
<br />
'''Framework representation for enabling ''later'' advanced ML work'''<br />
<br />
* Reference SC's previous Overall Design slide.<br />
<br />
* Oliver outlined the “credit assignment” problem for ML to infer what causes improvement.<br />
<br />
* <tt>gen_features()</tt> just gives a list of passes that will be run.<br />
* Mapping of passes to flags needed.<br />
* A lot of debate about whether to use a feature vector, IR or source code for selecting the relevant features for ML.<br />
* Main takeaway – we need to be able to cope with changes in the features that are relevant as our knowledge of this evolves.<br />
** Feature info passing should be generic (enough to be able send IR, if necessary).<br />
** Should plan for plugin re-write next year if necessary to aid this.<br />
* How to support backtracking prediction.<br />
** Unlikely in GCC due to global state.<br />
* TODO: SC to produce a draft spec by 31/07/2013<br />
<br />
'''Planning'''<br />
<br />
* Hardware design and build completed on time.<br />
* No new streams of work coming online. <br />
* All else proceeding OK.<br />
* TODO: By 29/07/2013: Action the kit buying <br />
<br />
'''Actions carried forward from previous meeting:'''<br />
<br />
* SH: Apply for BlueCrystal Accounts for all<br />
* KIE: To decide whether to go for two PCs with 32GB or one PC with two cores and much more RAM</div>Simonhttp://mageec.org/w/index.php?title=Meeting_22-07-2013&diff=122Meeting 22-07-20132013-07-22T12:44:07Z<p>Simon: </p>
<hr />
<div><center>'''Meeting UoB 22 July 2013'''</center><br />
<br />
<br />
<center>Present: JB, SC, MG, AW, JP, SH, KE, OR</center><br />
<br />
<br />
'''Hardware Energy Monitoring Report (Ashley)'''<br />
<br />
* Slow progress due to needing to get the hardware working<br />
** Software installation / OS issues<br />
* Moved to use a previous version (V2) energy-monitor boards, since they are more suitable <br />
* Problems using V3 to measure external device energy consumptions<br />
** V3 is capable, but the additional soldering required, the lack of a voltage divider and the need for external resistors make it harder to use than V2.<br />
* Benchmarks working and verification of their correctness being worked on.<br />
** TODO: Now a priority to push on internal verification code to the tests.<br />
*** e.g. compare outputs to pre-computed correct and return result.<br />
* When running the benchmarks, we should apply techniques such as <tt>extern</tt>s or <tt>volatile</tt>s that can store e.g. the final result.<br />
** Dijkstra program not working, others are.<br />
* TODO: Get the benchmarks (BEEBS / BBS) work for last year out there with more oomph (webs, publicity, workshops...)<br />
<br />
* James 13th 14th September will give a talk at OSHCAMP on energy monitoring board design.<br />
* TODO: Run code that straddles flash banks and runs on flash to see differences in energy → feeds well into 'discovery' phase of ML. <br />
<br />
'''MAGEEC Blog posts'''<br />
<br />
* Weekly (ish) blogs, from rotating members of the team.<br />
* TODO: 22/07/2013: Jeremy this week for an intro<br />
* TODO: 29/07/2013: Ashley next week for intro to energy monitoring hardware.<br />
** Ensure the draft is saved.<br />
** Andrew will perform the final publishing<br />
* TODO: 05/07/2013: Moon, intro to his work.<br />
<br />
'''Compiler Framework'''<br />
<br />
* No new update on implementation, due to attending GCC meeting etc.<br />
* GCC meeting feedback is that HPC community (inc. LLL) very interested into learning more about low power. <br />
* Looked at the research questions posted from UoB discussion on the wiki.<br />
** Profile-directed optimisation is very powerful, perhaps most powerful<br />
** Joern to be quizzed by James about the kinds of profile data that comes out, since it can impact the machine learning.<br />
*** Can some examples of data be generated so that it can be represented for ML learning.<br />
<br />
'''MILEPOST Approach'''<br />
<br />
* <nowiki>Normalised feature vector 0<= 1 using the number of instructions as the divider.</nowiki><br />
* Ran 1000 random on/off flags, then kept the top 5% of previously trained data.<br />
* Question on whether or not the flags are orthogonal.<br />
* MSc student is addressing:<br />
** Taking James' flags of significance and isolating these and exhaustively <br />
** Data set available by end of week.<br />
** Performance as metric.<br />
* <nowiki>TODO: Paper on James' previous work? Of 130 flags in GCC only 13 make a difference [in our scenarios]. We want to understand why.</nowiki><br />
* Consider systematic (FFD) vs exhaustive vs random selection and their effectiveness.<br />
** Part of the MSc work will address this.<br />
* Moon investigating which of the MILEPOST vectors are and are not useful. Needs a large data set of 150+ applications to check them.<br />
** Work on WCET to extract additional programs to help with this.<br />
** Look (longer term) at HPC space to augment these.<br />
* MILEPOST Approach Summary: Run with 1000 flags randomly. Having discarded invalid flags, take top 100 good ones and run accumulating stats showing which are in best sets between iterations<br />
* There are techniques for looking at flag dependencies, but they need further investigation.<br />
<br />
'''Framework representation for enabling ''later'' advanced ML work'''<br />
<br />
* Reference SC's previous Overall Design slide.<br />
<br />
* Oliver outlined the “credit assignment” problem for ML to infer what causes improvement.<br />
<br />
* <tt>gen_features() </tt>just gives a list of passes that will be run.<br />
* Mapping of passes to flags needed.<br />
* A lot of debate about whether to use a feature vector, IR or source code for selecting the relevant features for ML.<br />
* Main takeaway – we need to be able to cope with changes in the features that are relevant as our knowledge of this evolves.<br />
** Feature info passing should be generic (enough to be able send IR, if necessary).<br />
** Should plan for plugin re-write next year if necessary to aid this.<br />
* How to support backtracking prediction.<br />
** Unlikely in GCC due to global state.<br />
* TODO: SC to produce a draft spec by 31/07/2013<br />
<br />
'''Planning'''<br />
<br />
* Hardware design and build completed on time.<br />
* No new streams of work coming online. <br />
* All else proceeding OK.<br />
* TODO: By 29/07/2013: Action the kit buying <br />
<br />
'''Actions carried forward from previous meeting:'''<br />
<br />
* [http://mageec.org/w/index.php?title=SH&action=edit&redlink=1 SH]&nbsp;Apply for BlueCrystal accounts for all<br />
* [http://mageec.org/w/index.php?title=KIE&action=edit&redlink=1 KIE]&nbsp;To decide whether to go for two PCs with 32GB or one PC with two cores and much more RAM</div>Simonhttp://mageec.org/w/index.php?title=Meeting_22-07-2013&diff=121Meeting 22-07-20132013-07-22T12:43:18Z<p>Simon: </p>
<hr />
<div>Meeting UoB 22 July 2013<br />
<br />
Present: JB, SC, MG, AW, JP, SH, KE, OR<br />
<br />
Hardware Energy Monitoring Report (Ashley)<br />
Slow progress due to needing to get the hardware working<br />
Software installation / OS issues<br />
Moved to use a previous version (V2) energy-monitor boards, since they are more suitable <br />
Problems using V3 to measure external device energy consumptions<br />
V3 is capable, but the additional soldering required, the lack of a voltage divider and the need for external resistors make it harder to use than V2.<br />
Benchmarks working and verification of their correctness being worked on.<br />
TODO: Now a priority to push on internal verification code to the tests.<br />
e.g. compare outputs to pre-computed correct and return result.<br />
When running the benchmarks, we should apply techniques such as externs or volatiles that can store e.g. the final result.<br />
Dijkstra program not working, others are.<br />
TODO: Get the benchmarks (BEEBS / BBS) work for last year out there with more oomph (webs, publicity, workshops...)<br />
<br />
James 13th 14th September will give a talk at OSHCAMP on energy monitoring board design.<br />
TODO: Run code that straddles flash banks and runs on flash to see differences in energy → feeds well into 'discovery' phase of ML. <br />
<br />
MAGEEC Blog posts<br />
Weekly (ish) blogs, from rotating members of the team.<br />
TODO: 22/07/2013: Jeremy this week for an intro<br />
TODO: 29/07/2013: Ashley next week for intro to energy monitoring hardware.<br />
Ensure the draft is saved.<br />
Andrew will perform the final publishing<br />
TODO: 05/07/2013: Moon, intro to his work.<br />
<br />
Compiler Framework<br />
No new update on implementation, due to attending GCC meeting etc.<br />
GCC meeting feedback is that HPC community (inc. LLL) very interested into learning more about low power. <br />
Looked at the research questions posted from UoB discussion on the wiki.<br />
Profile-directed optimisation is very powerful, perhaps most powerful<br />
Joern to be quizzed by James about the kinds of profile data that comes out, since it can impact the machine learning.<br />
Can some examples of data be generated so that it can be represented for ML learning.<br />
MILEPOST Approach<br />
Normalised feature vector 0<= 1 using the number of instructions as the divider.<br />
Ran 1000 random on/off flags, then kept the top 5% of previously trained data.<br />
Question on whether or not the flags are orthogonal.<br />
MSc student is addressing:<br />
Taking James' flags of significance and isolating these and exhaustively <br />
Data set available by end of week.<br />
Performance as metric.<br />
TODO: Paper on James' previous work? Of 130 flags in GCC only 13 make a difference [in our scenarios]. We want to understand why.<br />
Consider systematic (FFD) vs exhaustive vs random selection and their effectiveness.<br />
Part of the MSc work will address this.<br />
Moon investigating which of the MILEPOST vectors are and are not useful. Needs a large data set of 150+ applications to check them.<br />
Work on WCET to extract additional programs to help with this.<br />
Look (longer term) at HPC space to augment these.<br />
MILEPOST Approach Summary: Run with 1000 flags randomly. Having discarded invalid flags, take top 100 good ones and run accumulating stats showing which are in best sets between iterations<br />
There are techniques for looking at flag dependencies, but they need further investigation.<br />
<br />
<br />
<br />
Framework representation for enabling later advanced ML work<br />
Reference SC's previous Overall Design slide.<br />
Oliver outlined the “credit assignment” problem for ML to infer what causes improvement.<br />
gen_features() just gives a list of passes that will be run.<br />
Mapping of passes to flags needed.<br />
A lot of debate about whether to use a feature vector, IR or source code for selecting the relevant features for ML.<br />
Main takeaway – we need to be able to cope with changes in the features that are relevant as our knowledge of this evolves.<br />
Feature info passing should be generic (enough to be able send IR, if necessary).<br />
Should plan for plugin re-write next year if necessary to aid this.<br />
How to support backtracking prediction.<br />
Unlikely in GCC due to global state.<br />
TODO: SC to produce a draft spec by 31/07/2013<br />
<br />
<br />
<br />
Planning<br />
Hardware design and build completed on time.<br />
No new streams of work coming online. <br />
All else proceeding OK.<br />
TODO: By 29/07/2013: Action the kit buying <br />
<br />
Actions carried forward from previous meeting:<br />
SH Apply for BlueCrystal Accounts for all<br />
KIE To decide whether to go for two PCs with 32GB or one PC with two cores and much more RAM</div>Simonhttp://mageec.org/w/index.php?title=Meeting_22-07-2013&diff=120Meeting 22-07-20132013-07-22T12:42:50Z<p>Simon: Created page with "<center>'''Meeting UoB 22 July 2013'''</center> <center>Present: JB, SC, MG, AW, JP, SH, KE, OR</center> '''Hardware Energy Monitoring Report (Ashley)''' * Slow progress ..."</p>
<hr />
<div><center>'''Meeting UoB 22 July 2013'''</center><br />
<br />
<br />
<center>Present: JB, SC, MG, AW, JP, SH, KE, OR</center><br />
<br />
<br />
'''Hardware Energy Monitoring Report (Ashley)'''<br />
<br />
* Slow progress due to needing to get the hardware working<br />
** Software installation / OS issues<br />
* Moved to use a previous version (V2) energy-monitor boards, since they are more suitable <br />
* Problems using V3 to measure external device energy consumptions<br />
** V3 is capable, but the additional soldering required, the lack of a voltage divider and the need for external resistors make it harder to use than V2.<br />
* Benchmarks working and verification of their correctness being worked on.<br />
** TODO: Now a priority to push on internal verification code to the tests.<br />
*** e.g. compare outputs to pre-computed correct and return result.<br />
* When running the benchmarks, we should apply techniques such as <tt>extern</tt>s or <tt>volatile</tt>s that can store e.g. the final result.<br />
** Dijkstra program not working, others are.<br />
* TODO: Get the benchmarks (BEEBS / BBS) work for last year out there with more oomph (webs, publicity, workshops...)<br />
<br />
* James 13th 14th September will give a talk at OSHCAMP on energy monitoring board design.<br />
<br />
'''MAGEEC Blog posts'''<br />
<br />
* Weekly (ish) blogs, from rotating members of the team.<br />
* TODO: 22/07/2013: Jeremy this week for an intro<br />
* TODO: 29/07/2013: Ashley next week for intro to energy monitoring hardware.<br />
** Ensure the draft is saved.<br />
** Andrew will perform the final publishing<br />
* TODO: 05/07/2013: Moon, intro to his work.<br />
<br />
'''Compiler Framework'''<br />
<br />
* No new update on implementation, due to attending GCC meeting etc.<br />
* GCC meeting feedback is that HPC community (inc. LLL) very interested into learning more about low power. <br />
* Looked at the research questions posted from UoB discussion on the wiki.<br />
** Profile-directed optimisation is very powerful, perhaps most powerful<br />
** Joern to be quizzed by James about the kinds of profile data that comes out, since it can impact the machine learning.<br />
*** Can some examples of data be generated so that it can be represented for ML learning.<br />
<br />
'''MILEPOST Approach'''<br />
<br />
* <nowiki>Normalised feature vector 0<= 1 using the number of instructions as the divider.</nowiki><br />
* Ran 1000 random on/off flags, then kept the top 5% of previously trained data.<br />
* Question on whether or not the flags are orthogonal.<br />
* MSc student is addressing:<br />
** Taking James' flags of significance, isolating these and exploring them exhaustively.<br />
** Data set available by end of week.<br />
** Performance as metric.<br />
* <nowiki>TODO: Paper on James' previous work? Of 130 flags in GCC only 13 make a difference [in our scenarios]. We want to understand why.</nowiki><br />
* Consider systematic (FFD) vs exhaustive vs random selection and their effectiveness.<br />
** Part of the MSc work will address this.<br />
* Moon investigating which of the MILEPOST vectors are and are not useful. Needs a large data set of 150+ applications to check them.<br />
** Work on WCET to extract additional programs to help with this.<br />
** Look (longer term) at HPC space to augment these.</div>Simonhttp://mageec.org/w/index.php?title=Power_Measurement_Board&diff=105Power Measurement Board2013-07-02T12:20:37Z<p>Simon: </p>
<hr />
<div>Video:<br />
http://youtu.be/mpEI5E8gyec<br />
<br />
[[Category:Media]]</div>Simonhttp://mageec.org/w/index.php?title=Media&diff=104Media2013-07-02T12:20:18Z<p>Simon: </p>
<hr />
<div>= List of MAGEEC Media =<br />
<br />
* [[Power Measurement Board]]<br />
<br />
[[Category:Media]]</div>Simonhttp://mageec.org/w/index.php?title=Media&diff=103Media2013-07-02T12:19:07Z<p>Simon: /* List of MAGEEC Media */</p>
<hr />
<div>= List of MAGEEC Media =<br />
<br />
* [[Power Measurement Board]]</div>Simonhttp://mageec.org/w/index.php?title=Media&diff=102Media2013-07-02T12:18:37Z<p>Simon: Created page with "= List of MAGEEC Media = Power Measurement Board"</p>
<hr />
<div>= List of MAGEEC Media =<br />
[[Power Measurement Board]]</div>Simonhttp://mageec.org/w/index.php?title=Power_Measurement_Board&diff=87Power Measurement Board2013-07-01T14:45:47Z<p>Simon: </p>
<hr />
<div>Video:<br />
http://youtu.be/mpEI5E8gyec</div>Simonhttp://mageec.org/w/index.php?title=Power_Measurement_Board&diff=86Power Measurement Board2013-07-01T14:45:17Z<p>Simon: Created page with "Video: http://www.youtube.com/watch?v=mpEI5E8"</p>
<hr />
<div>Video:<br />
http://www.youtube.com/watch?v=mpEI5E8</div>Simonhttp://mageec.org/w/index.php?title=Meeting_01-07-2013&diff=81Meeting 01-07-20132013-07-01T13:08:53Z<p>Simon: /* Overview */</p>
<hr />
<div>==== Present ====<br />
<br />
SH, SC, JP, JB, Adam (Experience student)<br />
<br />
== Overview ==<br />
<br />
SC presented a flow diagram of Compiler <--> MAGEEC <--> ML and what data is generated and exchanged where, and what the flow is. Will be on wiki.<br />
<br />
== Definition of features ==<br />
<br />
* Feature vectors need defining external to any compiler IR.<br />
** Some compilers may or may not have some features available.<br />
** The ML may need to regenerate its data-sets for each option (to be confirmed).<br />
** Define 'essential' + 'optional' features?<br />
* The feature vector needs to be defined. <br />
** MILEPOST vector is a starting point, but some features may not be useful.<br />
** Check with MILEPOST results. <br />
** Discarding features could speed the flow. <br />
** Are there language-specific features that are not so useful/a problem for future expandability?<br />
<br />
== Pass runs ==<br />
<br />
* Need to run sets of passes in one go, not just a single pass at a time.<br />
** Need to run lists of passes<br />
** How to bound the number sensibly<br />
** Some passes need to run multiple times (even after each other)<br />
* Can ML predict how many passes needed before starting? <br />
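Running an ordered list of passes in one go, with repeats allowed, might look like the sketch below. The pass names and the dummy dispatch table are invented for illustration; a real driver would invoke actual compiler pass objects.

```c
#include <stddef.h>
#include <string.h>

typedef void (*pass_fn)(void);

struct pass_entry { const char *name; pass_fn run; };

static int run_count;
static void dummy_pass(void) { ++run_count; }

/* Illustrative pass table; names are placeholders. */
static const struct pass_entry pass_table[] = {
    { "dce",  dummy_pass },
    { "licm", dummy_pass },
};

/* Run each named pass in order; duplicates are allowed, since some
   passes need to run multiple times. Returns passes executed. */
int run_pass_list(const char *const *names, size_t n)
{
    int executed = 0;
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < sizeof pass_table / sizeof pass_table[0]; ++j)
            if (strcmp(names[i], pass_table[j].name) == 0) {
                pass_table[j].run();
                ++executed;
                break;
            }
    return executed;
}
```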
<br />
== ML predictor requirements ==<br />
<br />
* Takes in the current feature vector from a program with current passes run.<br />
* Needs to return a list of passes to run next (with goodness metrics?)<br />
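A sketch of a predictor interface meeting the requirements above. All names and the placeholder logic are invented, not the actual MAGEEC API; a real predictor would consult the trained model rather than return a fixed suggestion.

```c
#include <stddef.h>

struct pass_recommendation {
    const char *pass_name;  /* pass to run next */
    double goodness;        /* predicted-benefit metric */
};

/* Takes the current feature vector (computed after the passes run so
   far) and fills 'out' with recommended next passes plus a goodness
   metric. Returns the number of recommendations written; this
   placeholder always suggests one fixed pass. */
int predict_next_passes(const double *features, size_t n_features,
                        struct pass_recommendation *out, size_t max_out)
{
    (void)features;
    (void)n_features;
    if (max_out == 0)
        return 0;
    /* A real predictor would consult the trained model here. */
    out[0].pass_name = "example-pass";
    out[0].goodness = 0.5;
    return 1;
}
```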
<br />
== ML Training ==<br />
<br />
* Probably use a separate application to do the training.<br />
* Need to generate sets of flags (randomly?) based on the constraints of pass ordering (shared data structure from the prediction phase)<br />
* Ordering of passes is important; can make a big difference in some cases.<br />
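Random flag-set generation for training might be sketched as below. This is a bare illustration: it ignores the pass-ordering constraints mentioned above, and the seeding approach is an assumption chosen for reproducibility.

```c
#include <stdlib.h>
#include <stddef.h>

/* Fill 'flags' with a random on/off setting for each of n_flags
   optimisation flags, seeded so a training run can be reproduced. */
void random_flag_set(int *flags, size_t n_flags, unsigned seed)
{
    srand(seed);
    for (size_t i = 0; i < n_flags; ++i)
        flags[i] = rand() & 1;  /* each flag independently on or off */
}
```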
<br />
== Actions ==<br />
<br />
* [[SC]] Begin initial write of compiler->MAGEEC interfaces. Dummy ML placeholder for now.<br />
* [[SC]] Add list of upcoming meetings to wiki<br />
* [[SH]] Ensure that next meeting includes all ML people to get feedback into that aspect.<br />
* [[SH]] Apply for BlueCrystal Accounts for all<br />
* [[SH]] Inform all about quarterly meetings<br />
* [[SH]] Mailing list activation<br />
* [[MG]] Begin on defining the feature vectors to be used, based on MILEPOST.<br />
* [[JB]] Flag up with Tom Harris that the ~30% Q1-Q4 UoB staff budget will be rolled forward<br />
* [[KIE]] To decide between two PCs with 32GB each, or one PC with two cores and much more RAM</div>Simonhttp://mageec.org/w/index.php?title=Meeting_01-07-2013&diff=80Meeting 01-07-20132013-07-01T13:08:03Z<p>Simon: Created page with "==== Present ==== SH, SC, JP, JB, Adam (Experience student) == Overview == SC presented a flow diagram of Compiler <--> MAGEEC <--> ML and what data is generated and exchan..."</p>
<hr />
<div>==== Present ====<br />
<br />
SH, SC, JP, JB, Adam (Experience student)<br />
<br />
== Overview ==<br />
<br />
SC presented a flow diagram of Compiler <--> MAGEEC <--> ML and what data is generated and exchanged where, and what the flow is. Will be on wiki.<br />
<br />
=== Definition of features ===<br />
<br />
* Feature vectors need defining external to any compiler IR.<br />
** Some compilers may or may not have some features available.<br />
** The ML may need to regenerate its data-sets for each option (to be confirmed).<br />
** Define 'essential' + 'optional' features?<br />
* The feature vector needs to be defined. <br />
** MILEPOST vector is a starting point, but some features may not be useful.<br />
** Check with MILEPOST results. <br />
** Discarding features could speed the flow. <br />
** Are there language-specific features that are not so useful/a problem for future expandability?<br />
<br />
=== Pass runs ===<br />
<br />
* Need to run sets of passes in one go, not just a single pass at a time.<br />
** Need to run lists of passes<br />
** How to bound the number sensibly<br />
** Some passes need to run multiple times (even after each other)<br />
* Can ML predict how many passes needed before starting? <br />
<br />
=== ML predictor requirements ===<br />
<br />
* Takes in the current feature vector from a program with current passes run.<br />
* Needs to return a list of passes to run next (with goodness metrics?)<br />
<br />
=== ML Training ===<br />
<br />
* Probably use a separate application to do the training.<br />
* Need to generate sets of flags (randomly?) based on the constraints of pass ordering (shared data structure from the prediction phase)<br />
* Ordering of passes is important; can make a big difference in some cases.<br />
<br />
== Actions ==<br />
<br />
* [[SC]] Begin initial write of compiler->MAGEEC interfaces. Dummy ML placeholder for now.<br />
* [[SC]] Add list of upcoming meetings to wiki<br />
* [[SH]] Ensure that next meeting includes all ML people to get feedback into that aspect.<br />
* [[SH]] Apply for BlueCrystal Accounts for all<br />
* [[SH]] Inform all about quarterly meetings<br />
* [[SH]] Mailing list activation<br />
* [[MG]] Begin on defining the feature vectors to be used, based on MILEPOST.<br />
* [[JB]] Flag up with Tom Harris that the ~30% Q1-Q4 UoB staff budget will be rolled forward<br />
* [[KIE]] To decide between two PCs with 32GB each, or one PC with two cores and much more RAM</div>Simonhttp://mageec.org/w/index.php?title=Project_Plan&diff=47Project Plan2013-04-25T11:18:12Z<p>Simon: </p>
<hr />
<div>__TOC__<br />
<br />
{{WorkPackage<br />
|n = 1<br />
|title = Iterative Design of Compiler Framework<br />
|start = 1 June 2013<br />
|end = 31 August 2013<br />
|totaldays = 32<br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Definition of compiler and hardware independent interface for machine learning compiler.<br />
* Selection of a set of software characteristics to be exploited during the optimisation selection process.<br />
* Identify target for first implementation, GCC or LLVM.<br />
|description = <br />
* Identify target for first implementation, GCC or LLVM.<br />
* Determine degree of integration with specific compilers.<br />
* Identify machine learning interface.<br />
* Identify feature selection methodology.<br />
* Iterative refinement on 2-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Decision of GCC/LLVM for first implementation | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design doc for compiler integration | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 3 | title = Design doc for machine learning interface | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 4 | title = Design doc for feature selection | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 5 | title = Iterate 2-5 throughout project | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies = <br />
| dependents = <br />
* [[#WP2|Work Package 2]]<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 2<br />
|title = Iterative Implementation of Compiler Framework<br />
|start = 1 July 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Implementation of prototype framework with one compiler, identified in [[#WP1|Work Package 1]]<br />
* Implementation of prototype framework with other compiler.<br />
|description =<br />
* Write code for use with first compiler.<br />
* Write documentation for use with first compiler.<br />
* Implement regression tests for use with first compiler.<br />
* Extend support for use with second compiler (code, documentation, regression).<br />
* Iterative refinement on 1-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = First iteration of implementation, testing, documentation with first compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Second iteration of development using second compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Refinement of implementation with both compilers | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies =<br />
* [[#WP1|Work Package 1]]<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 3<br />
|title = Design and Build of Hardware Measurement Platform<br />
|start = 1 June 2013<br />
|end = 31 July 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol<br />
|objectives = <br />
* Implementation of Hardware Measurement<br />
|description =<br />
* Design board, reusing existing expertise<br />
* Board Implementation<br />
* Board testing<br />
|equipment = <br />
* PCB Manufacturing (outsourced)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Board Design Documentation | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Working Hardware | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
* [[#WP4|Work Package 4]] (to instrument boards)<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 4<br />
|title = Training Set, Test Program, Test Hardware and Case Study Development<br />
|start = 1 July 2013<br />
|end = 30 September 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Creation of a set of applications for training.<br />
* Creation of a set of applications for initial testing of trained systems.<br />
* Development of larger case studies for testing purposes.<br />
* Selection of target embedded systems for testing.<br />
|description =<br />
* Select suitable test and training applications from existing benchmark suites.<br />
* Choice of case studies from wider community.<br />
* Selection of embedded systems representative of industrial/commercial applications in consultation with community.<br />
* Integration of embedded systems with hardware test platform ([[#WP3|Work Package 3]])<br />
|equipment = <br />
* Selection of embedded systems<br />
* Community engagement platforms<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Training set source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Case study source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Embedded systems set up for testing | external = I | responsibility = UoB | due = 30 Sep 2013 | comments = Physical setup internal, documentation external }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 5<br />
|title = Theory of Analysis of Machine Learning Techniques<br />
|start = 1 July 2013<br />
|end = 31 July 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Understand current machine learning techniques and decide if relevant<br />
* Select approach(es) for incorporating into framework<br />
* Refinement in the light of ongoing project development and experience<br />
|description =<br />
* Review existing uses including MILEPOST, directed learning, abductive learning ([[Literature]])<br />
* Whole team working days to bring together theory with implementers to select approach and specify API<br />
* Decision on choice of training approach, e.g. FFD, random, etc.<br />
* Iterative review during second year of programme, inc. potential for reordering<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Literature Review | external = E | responsibility = UoB | due = 30 Sep 2013 (draft) ; 31 Dec 2013 (final) | comments = May be appropriate for publication }}<br />
{{WPDeliverable | ref = 2 | title = Selection of core learning algorithm(s) | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 3 | title = Training approach | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 4 | title = API for implementers | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 5 | title = Review of initial approach | external = | responsibility = UoB | due = 31 July 2014 | comments = May be appropriate for publication }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
<br />
{{WorkPackage<br />
|n = 7<br />
|title = Training and Testing Prototype Infrastructure<br />
|start = 1 March 2014<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Set up of training infrastructure and demonstration with tests using existing optimisations<br />
* Refine/repeat for use with new optimisations<br />
|description =<br />
* Set up infrastructure for existing optimisations<br />
* Train infrastructure with small set with existing optimisations<br />
* Test with small set with existing optimisations<br />
* Repeat above with new optimisations from [[#WP8|Work Package 8]]<br />
|equipment = <br />
* High performance workstation<br />
* Embedded systems with hardware energy measuring<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Working training infrastructure (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Results from proof of concept training and test (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = This is proof of concept, not the evaluation }}<br />
{{WPDeliverable | ref = 3 | title = Working training infrastructure (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Results from proof of concept training and test (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = This is proof of concept, not the evaluation }}<br />
| dependencies =<br />
* [[#WP2|Work Package 2]] (Compiler Infrastructure)<br />
* [[#WP4|Work Package 4]] (Test hardware, test applications)<br />
* [[#WP8|Work Package 8]] (New Optimisations)<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 8<br />
|title = Implement New Optimisation Passes<br />
|start = 1 September 2013<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm, UoB<br />
|objectives = <br />
* Design and implement optimisation passes in the GCC and LLVM compilers.<br />
|description =<br />
* Design new optimisation passes in light of theory from [[#WP6|Work Package 6]]<br />
* Implement new optimisation passes in GCC<br />
* Reimplement optimisation passes in LLVM<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Review of existing techniques for energy optimisation | external = E | responsibility = Emb, co-located at UoB for knowledge exchange. | due = 31 Jan 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design optimisation passes | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Implement optimisation passes in GCC | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Implement optimisation passes in LLVM | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 9<br />
|title = Evaluation of Infrastructure<br />
|start = 1 June 2014<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Full evaluation of infrastructure using full training sets, full test sets and large case studies for both with and without our new optimisations.<br />
|description =<br />
* Train infrastructure with existing optimisations<br />
* Evaluate with smaller tests with existing optimisations<br />
* Evaluate with case studies with existing optimisations<br />
* Repeat above with new optimisations<br />
* Write paper detailing findings<br />
* Review and refine paper<br />
|equipment = <br />
* Instrumented embedded systems<br />
* HPC Facilities<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Trained and tested complete system with full case studies | external = E | responsibility = UoB | due = 30 Sep 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Draft paper presenting results | external = E | responsibility = UoB | due = 30 Sep 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Final paper | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = This is the ultimate report and it is anticipated that it will take some time to develop. Additionally engineering on the project will continue whilst the paper is written, hence this will be a significantly large task. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 10<br />
|title = Dissemination and Exploitation<br />
|start = 1 June 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol, Embecosm<br />
|contributors = University of Bristol, Embecosm<br />
|objectives =<br />
* Business case development inc. market analysis<br />
* Engagement with relevant communities<br />
* Engagement with potential customers<br />
* Academic and business publications<br />
|description =<br />
* Develop business case by engagement of all stakeholders<br />
* Ongoing review of business case throughout project, leading to updated exploitation plan<br />
* Engagement with the technical community through participation in workshops (including EACO, NMI, etc.), conferences (including GNU Tools Cauldron, LLVM Developer Conference, etc.), presentations, training events, new media using the skills of AB Open<br />
* Engagement with potential customers<br />
* Publication of papers as described in various Work Packages.<br />
|equipment = <br />
* General computing infrastructure, including website and social media (http://mageec.org)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Exploitation plan | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Updated every quarter }}<br />
{{WPDeliverable | ref = 2 | title = Participation in workshops and training events | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = Dates to be confirmed }}<br />
{{WPDeliverable | ref = 3 | title = Website/wiki/new media | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Due date is for initial set-up; maintained throughout project }}<br />
{{WPDeliverable | ref = 4 | title = Papers | external = E | responsibility = n/a | due = n/a | comments = Detailed throughout project plan, for dates and details, refer to associated work packages. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
== Gantt Chart ==<br />
The following Gantt Chart details the interactions between work packages. As one work package does not necessarily depend on the entire completion of another, a traditional finish-start relationship does not perfectly represent this information. The Microsoft Project file used to generate this chart can be found at [[File:MAGEEC_Gantt.mpp]].<br />
<br />
[[File:MAGEEC_Gantt.png|800px]]<br />
<br />
[[Category:Planning]]</div>Simonhttp://mageec.org/w/index.php?title=Project_Plan&diff=46Project Plan2013-04-25T11:16:06Z<p>Simon: </p>
<hr />
<div>__TOC__<br />
<br />
{{WorkPackage<br />
|n = 1<br />
|title = Iterative Design of Compiler Framework<br />
|start = 1 June 2013<br />
|end = 31 August 2013<br />
|totaldays = 32<br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Definition of compiler and hardware independent interface for machine learning compiler.<br />
* Selection of a set of software characteristics to be exploited during the optimisation selection process.<br />
* Identify target for first implementation, GCC or LLVM.<br />
|description = <br />
* Identify target for first implementation, GCC or LLVM.<br />
* Determine degree of integration with specific compilers.<br />
* Identify machine learning interface.<br />
* Identify feature selection methodology.<br />
* Iterative refinement on 2-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Decision of GCC/LLVM for first implementation | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design doc for compiler integration | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 3 | title = Design doc for machine learning interface | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 4 | title = Design doc for feature selection | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 5 | title = Iterate 2-5 throughout project | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies = <br />
| dependents = <br />
* [[#WP2|Work Package 2]]<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 2<br />
|title = Iterative Implementation of Compiler Framework<br />
|start = 1 July 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Implementation of prototype framework with one compiler, identified in [[#WP1|Work Package 1]]<br />
* Implementation of prototype framework with other compiler.<br />
|description =<br />
* Write code for use with first compiler.<br />
* Write documentation for use with first compiler.<br />
* Implement regression tests for use with first compiler.<br />
* Extend support for use with second compiler (code, documentation, regression).<br />
* Iterative refinement on 1-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = First iteration of implementation, testing, documentation with first compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Second iteration of development using second compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Refinement of implementation with both compilers | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies =<br />
* [[#WP1|Work Package 1]]<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 3<br />
|title = Design and Build of Hardware Measurement Platform<br />
|start = 1 June 2013<br />
|end = 31 July 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol<br />
|objectives = <br />
* Implementation of Hardware Measurement<br />
|description =<br />
* Design board, reusing existing expertise<br />
* Board Implementation<br />
* Board testing<br />
|equipment = <br />
* PCB Manufacturing (outsourced)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Board Design Documentation | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Working Hardware | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
* [[#WP4|Work Package 4]] (to instrument boards)<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 4<br />
|title = Training Set, Test Program, Test Hardware and Case Study Development<br />
|start = 1 July 2013<br />
|end = 30 September 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Creation of a set of applications for training.<br />
* Creation of a set of applications for initial testing of trained systems.<br />
* Development of larger case studies for testing purposes.<br />
* Selection of target embedded systems for testing.<br />
|description =<br />
* Select suitable test and training applications from existing benchmark suites.<br />
* Choice of case studies from wider community.<br />
* Selection of embedded systems representative of industrial/commercial applications in consultation with community.<br />
* Integration of embedded systems with hardware test platform ([[#WP3|Work Package 3]])<br />
|equipment = <br />
* Selection of embedded systems<br />
* Community engagement platforms<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Training set source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Case study source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Embedded systems set up for testing | external = I | responsibility = UoB | due = 30 Sep 2013 | comments = Physical setup internal, documentation external }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 5<br />
|title = Theory of Analysis of Machine Learning Techniques<br />
|start = 1 July 2013<br />
|end = 31 July 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Understand current machine learning techniques and decide if relevant<br />
* Select approach(es) for incorporating into framework<br />
* Refinement in the light of ongoing project development and experience<br />
|description =<br />
* Review existing uses including MILEPOST, directed learning, abductive learning ([[Literature]])<br />
* Whole team working days to bring together theory with implementers to select approach and specify API<br />
* Decision on choice of training approach, e.g. FFD, random, etc.<br />
* Iterative review during second year of programme, inc. potential for reordering<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Literature Review | external = E | responsibility = UoB | due = 30 Sep 2013 (draft) ; 31 Dec 2013 (final) | comments = May be appropriate for publication }}<br />
{{WPDeliverable | ref = 2 | title = Selection of core learning algorithm(s) | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 3 | title = Training approach | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 4 | title = API for implementers | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 5 | title = Review of initial approach | external = | responsibility = UoB | due = 31 July 2014 | comments = May be appropriate for publication }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 6<br />
|title = Theory of New Optimisation Passes<br />
|start = 1 July 2013<br />
|end = 31 August 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Theoretical analysis of why energy optimisations work.<br />
|description =<br />
* Look at all existing work with energy measurement of relevant systems to look for data to guide implementers of optimisation passes to implement those specific for energy minimisation.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Paper identifying characteristics suitable for compiler optimisation passes | external = E | responsibility = UoB | due = 31 Aug 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 7<br />
|title = Training and Testing Prototype Infrastructure<br />
|start = 1 March 2014<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Set up of training infrastructure and demonstration with tests using existing optimisations<br />
* Refine/repeat for use with new optimisations<br />
|description =<br />
* Set up infrastructure for existing optimisations<br />
* Train infrastructure with small set with existing optimisations<br />
* Test with small set with existing optimisations<br />
* Repeat above with new optimisations from [[#WP8|Work Package 8]]<br />
|equipment = <br />
* High performance workstation<br />
* Embedded systems with hardware energy measuring<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Working training infrastructure (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Results from proof of concept training and test (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = This is proof of concept, not the evaluation }}<br />
{{WPDeliverable | ref = 3 | title = Working training infrastructure (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Results from proof of concept training and test (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = This is proof of concept, not the evaluation }}<br />
| dependencies =<br />
* [[#WP2|Work Package 2]] (Compiler Infrastructure)<br />
* [[#WP4|Work Package 4]] (Test hardware, test applications)<br />
* [[#WP8|Work Package 8]] (New Optimisations)<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 8<br />
|title = Implement New Optimisation Passes<br />
|start = 1 September 2013<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm, UoB<br />
|objectives = <br />
* Design and implement optimisation passes in the GCC and LLVM compilers.<br />
|description =<br />
* Design new optimisation passes in light of theory from [[#WP6|Work Package 6]]<br />
* Implement new optimisation passes in GCC<br />
* Reimplement optimisation passes in LLVM<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Review of existing techniques for energy optimisation | external = E | responsibility = Emb., with UoB knowledge exchange | due = 31 Jan 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design optimisation passes | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Implement optimisation passes in GCC | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Implement optimisation passes in LLVM | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 9<br />
|title = Evaluation of Infrastructure<br />
|start = 1 June 2014<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Full evaluation of the infrastructure using full training sets, full test sets and large case studies, both with and without our new optimisations.<br />
|description =<br />
* Train infrastructure with existing optimisations<br />
* Evaluate with smaller tests with existing optimisations<br />
* Evaluate with case studies with existing optimisations<br />
* Repeat above with new optimisations<br />
* Write paper detailing findings<br />
* Review and refine paper<br />
|equipment = <br />
* Instrumented embedded systems<br />
* HPC Facilities<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Trained and tested complete system with full case studies | external = E | responsibility = UoB | due = 30 Sep 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Draft paper presenting results | external = E | responsibility = UoB | due = 30 Sep 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Final paper | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = This is the final report and is anticipated to take some time to develop. Engineering on the project will continue whilst the paper is written, so this is a substantial task. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 10<br />
|title = Dissemination and Exploitation<br />
|start = 1 June 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol, Embecosm<br />
|contributors = University of Bristol, Embecosm<br />
|objectives =<br />
* Business case development inc. market analysis<br />
* Engagement with relevant communities<br />
* Engagement with potential customers<br />
* Academic and business publications<br />
|description =<br />
* Develop business case by engagement of all stakeholders<br />
* Ongoing review of business case throughout project, leading to updated exploitation plan<br />
* Engagement with the technical community through participation in workshops (including EACO, NMI, etc.), conferences (including GNU Tools Cauldron, LLVM Developer Conference, etc.), presentations, training events and new media, using the skills of AB Open<br />
* Engagement with potential customers<br />
* Publication of papers as described in various Work Packages.<br />
|equipment = <br />
* General computing infrastructure, including website and social media (http://mageec.org)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Exploitation plan | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Updated every quarter }}<br />
{{WPDeliverable | ref = 2 | title = Participation in workshops and training events | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = Dates to be confirmed }}<br />
{{WPDeliverable | ref = 3 | title = Website/wiki/new media | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Due date is for initial set-up; maintained throughout the project }}<br />
{{WPDeliverable | ref = 4 | title = Papers | external = E | responsibility = n/a | due = n/a | comments = Detailed throughout the project plan; for dates and details, refer to the associated work packages. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
== Gantt Chart ==<br />
The following Gantt chart details the dependencies between work packages. Because one work package does not necessarily depend on the complete finish of another, a traditional finish-start relationship does not represent these dependencies perfectly. The Microsoft Project file used to generate this chart can be found at [[File:MAGEEC_Gantt.mpp]].<br />
<br />
[[File:MAGEEC_Gantt.png|800px]]<br />
<br />
[[Category:Planning]]</div>Simonhttp://mageec.org/w/index.php?title=Project_Plan&diff=44Project Plan2013-04-25T10:56:02Z<p>Simon: </p>
<hr />
<div>__TOC__<br />
<br />
{{WorkPackage<br />
|n = 1<br />
|title = Iterative Design of Compiler Framework<br />
|start = 1 June 2013<br />
|end = 31 August 2013<br />
|totaldays = 32<br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Definition of compiler and hardware independent interface for machine learning compiler.<br />
* Selection of a set of software characteristics to be exploited during the optimisation selection process.<br />
* Identify target for first implementation, GCC or LLVM.<br />
|description = <br />
* Identify target for first implementation, GCC or LLVM.<br />
* Determine degree of integration with specific compilers.<br />
* Identify machine learning interface.<br />
* Identify feature selection methodology.<br />
* Iterative refinement on 2-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Decision of GCC/LLVM for first implementation | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design doc for compiler integration | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 3 | title = Design doc for machine learning interface | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 4 | title = Design doc for feature selection | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 5 | title = Iterate 2-5 throughout project | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies = <br />
| dependents = <br />
* [[#WP2|Work Package 2]]<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 2<br />
|title = Iterative Implementation of Compiler Framework<br />
|start = 1 July 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Implementation of prototype framework with one compiler, identified in [[#WP1|Work Package 1]]<br />
* Implementation of prototype framework with other compiler.<br />
|description =<br />
* Write code for use with first compiler.<br />
* Write documentation for use with first compiler.<br />
* Implement regression tests for use with first compiler.<br />
* Extend support for use with second compiler (code, documentation, regression).<br />
* Iterative refinement on 1-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = First iteration of implementation, testing, documentation with first compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Second iteration of development using second compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Refinement of implementation with both compilers | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies =<br />
* [[#WP1|Work Package 1]]<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 3<br />
|title = Design and Build of Hardware Measurement Platform<br />
|start = 1 June 2013<br />
|end = 31 July 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol<br />
|objectives = <br />
* Implementation of Hardware Measurement<br />
|description =<br />
* Design board, reusing existing expertise<br />
* Board Implementation<br />
* Board testing<br />
|equipment = <br />
* PCB Manufacturing (outsourced)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Board Design Documentation | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Working Hardware | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
* [[#WP4|Work Package 4]] (to instrument boards)<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 4<br />
|title = Training Set, Test Program, Test Hardware and Case Study Development<br />
|start = 1 July 2013<br />
|end = 30 September 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Creation of a set of applications for training.<br />
* Creation of a set of applications for initial testing of trained systems.<br />
* Development of larger case studies for testing purposes.<br />
* Selection of target embedded systems for testing.<br />
|description =<br />
* Select suitable test and training applications from existing benchmark suites.<br />
* Choice of case studies from wider community.<br />
* Selection of embedded systems representative of industrial/commercial applications in consultation with community.<br />
* Integration of embedded systems with hardware test platform ([[#WP3|Work Package 3]])<br />
|equipment = <br />
* Selection of embedded systems<br />
* Community engagement platforms<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Training set source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Case study source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Embedded systems set up for testing | external = I | responsibility = UoB | due = 30 Sep 2013 | comments = Physical setup internal, documentation external }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 5<br />
|title = Theory of Analysis of Machine Learning Techniques<br />
|start = 1 July 2013<br />
|end = 31 August 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Understand current machine learning techniques and decide if relevant<br />
* Select approach(es) for incorporating into framework<br />
* Refinement in the light of ongoing project development and experience<br />
|description =<br />
* Review existing uses including MILEPOST, directed learning, abductive learning ([[Literature]])<br />
* Whole team working days to bring together theory with implementers to select approach and specify API<br />
* Decision on choice of training approach, e.g. FFD, random, etc.<br />
* Iterative review during second year of programme, inc. potential for reordering<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Literature Review | external = E | responsibility = UoB | due = 30 Sep 2013 (draft) ; 31 Dec 2013 (final) | comments = May be appropriate for publication }}<br />
{{WPDeliverable | ref = 2 | title = Selection of core learning algorithm(s) | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 3 | title = Training approach | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 4 | title = API for implementers | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 5 | title = Review of initial approach | external = | responsibility = UoB | due = 31 July 2014 | comments = May be appropriate for publication }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 6<br />
|title = Theory of New Optimisation Passes<br />
|start = 1 July 2013<br />
|end = 31 August 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Theoretical analysis of why energy optimisations work.<br />
|description =<br />
* Review all existing work on energy measurement of relevant systems for data that can guide implementers of optimisation passes towards those specific to energy minimisation.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Paper identifying characteristics suitable for compiler optimisation passes | external = E | responsibility = UoB | due = 31 Aug 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 7<br />
|title = Training and Testing Prototype Infrastructure<br />
|start = 1 March 2014<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Set up of training infrastructure and demonstration with tests using existing optimisations<br />
* Refine/repeat for use with new optimisations<br />
|description =<br />
* Set up infrastructure for existing optimisations<br />
* Train infrastructure with small set with existing optimisations<br />
* Test with small set with existing optimisations<br />
* Repeat above with new optimisations from [[#WP8|Work Package 8]]<br />
|equipment = <br />
* High performance workstation<br />
* Embedded systems with hardware energy measuring<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Working training infrastructure (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Results from proof of concept training and test (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = This is proof of concept, not the evaluation }}<br />
{{WPDeliverable | ref = 3 | title = Working training infrastructure (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Results from proof of concept training and test (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = This is proof of concept, not the evaluation }}<br />
| dependencies =<br />
* [[#WP2|Work Package 2]] (Compiler Infrastructure)<br />
* [[#WP4|Work Package 4]] (Test hardware, test applications)<br />
* [[#WP8|Work Package 8]] (New Optimisations)<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 8<br />
|title = Implement New Optimisation Passes<br />
|start = 1 September 2013<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Design and implement optimisation passes in the GCC and LLVM compilers.<br />
|description =<br />
* Design new optimisation passes in light of theory from [[#WP6|Work Package 6]]<br />
* Implement new optimisation passes in GCC<br />
* Reimplement optimisation passes in LLVM<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Design optimisation passes | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Implement optimisation passes in GCC | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Implement optimisation passes in LLVM | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 9<br />
|title = Evaluation of Infrastructure<br />
|start = 1 June 2014<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Full evaluation of the infrastructure using full training sets, full test sets and large case studies, both with and without our new optimisations.<br />
|description =<br />
* Train infrastructure with existing optimisations<br />
* Evaluate with smaller tests with existing optimisations<br />
* Evaluate with case studies with existing optimisations<br />
* Repeat above with new optimisations<br />
* Write paper detailing findings<br />
* Review and refine paper<br />
|equipment = <br />
* Instrumented embedded systems<br />
* HPC Facilities<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Trained and tested complete system with full case studies | external = E | responsibility = UoB | due = 30 Sep 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Draft paper presenting results | external = E | responsibility = UoB | due = 30 Sep 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Final paper | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = This is the final report and is anticipated to take some time to develop. Engineering on the project will continue whilst the paper is written, so this will be a substantial task. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 10<br />
|title = Dissemination and Exploitation<br />
|start = 1 June 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol, Embecosm<br />
|contributors = University of Bristol, Embecosm<br />
|objectives =<br />
* Business case development inc. market analysis<br />
* Engagement with relevant communities<br />
* Engagement with potential customers<br />
* Academic and business publications<br />
|description =<br />
* Develop business case through engagement with all stakeholders<br />
* Ongoing review of business case throughout project, leading to updated exploitation plan<br />
* Engagement with the technical community through participation in workshops (including EACO, NMI, etc.), conferences (including GNU Tools Cauldron, LLVM Developer Conference, etc.), presentations, training events, new media using the skills of AB Open<br />
* Engagement with potential customers<br />
* Publication of papers as described in various Work Packages.<br />
|equipment = <br />
* General computing infrastructure, including website and social media (http://mageec.org)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Exploitation plan | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Updated every quarter }}<br />
{{WPDeliverable | ref = 2 | title = Participation in workshops and training events | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = Dates to be confirmed }}<br />
{{WPDeliverable | ref = 3 | title = Website/wiki/new media | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Due date is for initial set-up; maintained throughout the project }}<br />
{{WPDeliverable | ref = 4 | title = Papers | external = E | responsibility = n/a | due = n/a | comments = Detailed throughout project plan, for dates and details, refer to associated work packages. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
== Gantt Chart ==<br />
The following Gantt Chart details the interactions between work packages. Because one work package does not necessarily depend on the complete finish of another, a traditional finish-start relationship does not perfectly represent these interactions. The Microsoft Project file used to generate this chart can be found at [[File:MAGEEC_Gantt.mpp]].<br />
<br />
[[File:MAGEEC_Gantt.png|800px]]<br />
<br />
[[Category:Planning]]</div>Simon

http://mageec.org/w/index.php?title=Project_Plan&diff=43 Project Plan, 2013-04-25T10:52:59Z
<p>Simon: </p>
<hr />
<div>__TOC__<br />
<br />
{{WorkPackage<br />
|n = 1<br />
|title = Iterative Design of Compiler Framework<br />
|start = 1 June 2013<br />
|end = 31 August 2013<br />
|totaldays = 32<br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Definition of compiler and hardware independent interface for machine learning compiler.<br />
* Selection of a set of software characteristics to be exploited during the optimisation selection process.<br />
* Identify target for first implementation, GCC or LLVM.<br />
|description = <br />
* Identify target for first implementation, GCC or LLVM.<br />
* Determine degree of integration with specific compilers.<br />
* Identify machine learning interface.<br />
* Identify feature selection methodology.<br />
* Iterative refinement on 2-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Decision of GCC/LLVM for first implementation | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design doc for compiler integration | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 3 | title = Design doc for machine learning interface | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 4 | title = Design doc for feature selection | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 5 | title = Iterate 2-5 throughout project | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies = <br />
| dependents = <br />
* [[#WP2|Work Package 2]]<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 2<br />
|title = Iterative Implementation of Compiler Framework<br />
|start = 1 July 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Implementation of prototype framework with one compiler, identified in [[#WP1|Work Package 1]]<br />
* Implementation of prototype framework with other compiler.<br />
|description =<br />
* Write code for use with first compiler.<br />
* Write documentation for use with first compiler.<br />
* Implement regression tests for use with first compiler.<br />
* Extend support for use with second compiler (code, documentation, regression).<br />
* Iterative refinement on 1-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = First iteration of implementation, testing, documentation with first compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Second iteration of development using second compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Refinement of implementation with both compilers | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies =<br />
* [[#WP1|Work Package 1]]<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 3<br />
|title = Design and Build of Hardware Measurement Platform<br />
|start = 1 June 2013<br />
|end = 31 July 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol<br />
|objectives = <br />
* Implementation of Hardware Measurement<br />
|description =<br />
* Design board, reusing existing expertise<br />
* Board Implementation<br />
* Board testing<br />
|equipment = <br />
* PCB Manufacturing (outsourced)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Board Design Documentation | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Working Hardware | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
* [[#WP4|Work Package 4]] (to instrument boards)<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 4<br />
|title = Training Set, Test Program, Test Hardware and Case Study Development<br />
|start = 1 July 2013<br />
|end = 30 September 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Creation of a set of applications for training.<br />
* Creation of a set of applications for initial testing of trained systems.<br />
* Development of larger case studies for testing purposes.<br />
* Selection of target embedded systems for testing.<br />
|description =<br />
* Select suitable test and training applications from existing benchmark suites.<br />
* Choice of case studies from wider community.<br />
* Selection of embedded systems representative of industrial/commercial applications in consultation with community.<br />
* Integration of embedded systems with hardware test platform ([[#WP3|Work Package 3]])<br />
|equipment = <br />
* Selection of embedded systems<br />
* Community engagement platforms<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Training set source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Case study source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Embedded systems set up for testing | external = I | responsibility = UoB | due = 30 Sep 2013 | comments = Physical setup internal, documentation external }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 5<br />
|title = Theory of Analysis of Machine Learning Techniques<br />
|start = 1 July 2013<br />
|end = 31 August 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Understand current machine learning techniques and decide if relevant<br />
* Select approach(es) for incorporating into framework<br />
* Refinement in the light of ongoing project development and experience<br />
|description =<br />
* Review existing uses including MILEPOST, directed learning, abductive learning ([[Literature]])<br />
* Whole-team working days to bring theorists together with implementers to select an approach and specify the API<br />
* Decision on choice of training approach, e.g. FFD, random, etc.<br />
* Iterative review during second year of programme, inc. potential for reordering<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Literature Review | external = E | responsibility = UoB | due = 30 Sep 2013 (draft) ; 31 Dec 2013 (final) | comments = May be appropriate for publication }}<br />
{{WPDeliverable | ref = 2 | title = Selection of core learning algorithm(s) | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 3 | title = Training approach | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 4 | title = API for implementers | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 5 | title = Review of initial approach | external = | responsibility = UoB | due = 31 July 2014 | comments = May be appropriate for publication }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 6<br />
|title = Theory of New Optimisation Passes<br />
|start = 1 July 2013<br />
|end = 31 August 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Theoretical analysis of why energy optimisations work.<br />
|description =<br />
* Survey all existing work on energy measurement of relevant systems for data that can guide implementers of optimisation passes towards those specific to energy minimisation.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Paper identifying characteristics suitable for compiler optimisation passes | external = E | responsibility = UoB | due = 31 Aug 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 7<br />
|title = Training and Testing Prototype Infrastructure<br />
|start = 1 March 2014<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Set up of training infrastructure and demonstration with tests using existing optimisations<br />
* Refine/repeat for use with new optimisations<br />
|description =<br />
* Set up infrastructure for existing optimisations<br />
* Train infrastructure with small set with existing optimisations<br />
* Test with small set with existing optimisations<br />
* Repeat above with new optimisations from [[#WP8|Work Package 8]]<br />
|equipment = <br />
* High performance workstation<br />
* Embedded systems with hardware energy measuring<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Working training infrastructure (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Results from proof of concept training and test (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = This is proof of concept, not the evaluation }}<br />
{{WPDeliverable | ref = 3 | title = Working training infrastructure (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Results from proof of concept training and test (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = This is proof of concept, not the evaluation }}<br />
| dependencies =<br />
* [[#WP2|Work Package 2]] (Compiler Infrastructure)<br />
* [[#WP4|Work Package 4]] (Test hardware, test applications)<br />
* [[#WP8|Work Package 8]] (New Optimisations)<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 8<br />
|title = Implement New Optimisation Passes<br />
|start = 1 September 2013<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Design and implement optimisation passes in the GCC and LLVM compilers.<br />
|description =<br />
* Design new optimisation passes in light of theory from [[#WP6|Work Package 6]]<br />
* Implement new optimisation passes in GCC<br />
* Reimplement optimisation passes in LLVM<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Design optimisation passes | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Implement optimisation passes in GCC | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Implement optimisation passes in LLVM | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 9<br />
|title = Evaluation of Infrastructure<br />
|start = 1 June 2014<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Full evaluation of the infrastructure using full training sets, full test sets and large case studies, both with and without our new optimisations.<br />
|description =<br />
* Train infrastructure with existing optimisations<br />
* Evaluate with smaller tests with existing optimisations<br />
* Evaluate with case studies with existing optimisations<br />
* Repeat above with new optimisations<br />
* Write paper detailing findings<br />
* Review and refine paper<br />
|equipment = <br />
* Instrumented embedded systems<br />
* HPC Facilities<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Trained and tested complete system with full case studies | external = E | responsibility = UoB | due = 31 Aug 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Draft paper presenting results | external = E | responsibility = UoB | due = 31 Aug 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Final paper | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = This is the final report and is anticipated to take some time to develop. Engineering on the project will continue whilst the paper is written, so this will be a substantial task. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 10<br />
|title = Dissemination and Exploitation<br />
|start = 1 June 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol, Embecosm<br />
|contributors = University of Bristol, Embecosm<br />
|objectives =<br />
* Business case development inc. market analysis<br />
* Engagement with relevant communities<br />
* Engagement with potential customers<br />
* Academic and business publications<br />
|description =<br />
* Develop business case through engagement with all stakeholders<br />
* Ongoing review of business case throughout project, leading to updated exploitation plan<br />
* Engagement with the technical community through participation in workshops (including EACO, NMI, etc.), conferences (including GNU Tools Cauldron, LLVM Developer Conference, etc.), presentations, training events, new media using the skills of AB Open<br />
* Engagement with potential customers<br />
* Publication of papers as described in various Work Packages.<br />
|equipment = <br />
* General computing infrastructure, including website and social media (http://mageec.org)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Exploitation plan | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Updated every quarter }}<br />
{{WPDeliverable | ref = 2 | title = Participation in workshops and training events | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = Dates to be confirmed }}<br />
{{WPDeliverable | ref = 3 | title = Website/wiki/new media | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Due date is for initial set-up; maintained throughout the project }}<br />
{{WPDeliverable | ref = 4 | title = Papers | external = E | responsibility = n/a | due = n/a | comments = Detailed throughout project plan, for dates and details, refer to associated work packages. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
== Gantt Chart ==<br />
The following Gantt Chart details the interactions between work packages. Because one work package does not necessarily depend on the complete finish of another, a traditional finish-start relationship does not perfectly represent these interactions. The Microsoft Project file used to generate this chart can be found at [[File:MAGEEC_Gantt.mpp]].<br />
<br />
[[File:MAGEEC_Gantt.png|800px]]<br />
<br />
[[Category:Planning]]</div>Simon

http://mageec.org/w/index.php?title=Project_Plan&diff=42 Project Plan, 2013-04-25T10:48:03Z
<p>Simon: </p>
<hr />
<div>__TOC__<br />
<br />
{{WorkPackage<br />
|n = 1<br />
|title = Iterative Design of Compiler Framework<br />
|start = 1 June 2013<br />
|end = 31 August 2013<br />
|totaldays = 32<br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Definition of compiler and hardware independent interface for machine learning compiler.<br />
* Selection of a set of software characteristics to be exploited during the optimisation selection process.<br />
* Identify target for first implementation, GCC or LLVM.<br />
|description = <br />
* Identify target for first implementation, GCC or LLVM.<br />
* Determine degree of integration with specific compilers.<br />
* Identify machine learning interface.<br />
* Identify feature selection methodology.<br />
* Iterative refinement on 2-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Decision of GCC/LLVM for first implementation | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Design doc for compiler integration | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 3 | title = Design doc for machine learning interface | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 4 | title = Design doc for feature selection | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Iterative design, live document}}<br />
{{WPDeliverable | ref = 5 | title = Iterate 2-5 throughout project | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies = <br />
| dependents = <br />
* [[#WP2|Work Package 2]]<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 2<br />
|title = Iterative Implementation of Compiler Framework<br />
|start = 1 July 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Implementation of prototype framework with one compiler, identified in [[#WP1|Work Package 1]]<br />
* Implementation of prototype framework with other compiler.<br />
|description =<br />
* Write code for use with first compiler.<br />
* Write documentation for use with first compiler.<br />
* Implement regression tests for use with first compiler.<br />
* Extend support for use with second compiler (code, documentation, regression).<br />
* Iterative refinement on 1-4.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = First iteration of implementation, testing, documentation with first compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Second iteration of development using second compiler | external = E | responsibility = Emb. | due = 30 Nov 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Refinement of implementation with both compilers | external = E | responsibility = Emb. | due = End of each Q. | comments = }}<br />
| dependencies =<br />
* [[#WP1|Work Package 1]]<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 3<br />
|title = Design and Build of Hardware Measurement Platform<br />
|start = 1 June 2013<br />
|end = 31 July 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol<br />
|objectives = <br />
* Implementation of Hardware Measurement<br />
|description =<br />
* Design board, reusing existing expertise<br />
* Board Implementation<br />
* Board testing<br />
|equipment = <br />
* PCB Manufacturing (outsourced)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Board Design Documentation | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Working Hardware | external = E | responsibility = UoB | due = 31 Jul 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
* [[#WP4|Work Package 4]] (to instrument boards)<br />
}}<br />
<br />
{{WorkPackage<br />
|n = 4<br />
|title = Training Set, Test Program, Test Hardware and Case Study Development<br />
|start = 1 July 2013<br />
|end = 30 September 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Creation of a set of applications for training.<br />
* Creation of a set of applications for initial testing of trained systems.<br />
* Development of larger case studies for testing purposes.<br />
* Selection of target embedded systems for testing.<br />
|description =<br />
* Select suitable test and training applications from existing benchmark suites.<br />
* Choice of case studies from wider community.<br />
* Selection of embedded systems representative of industrial/commercial applications in consultation with community.<br />
* Integration of embedded systems with hardware test platform ([[#WP3|Work Package 3]])<br />
|equipment = <br />
* Selection of embedded systems<br />
* Community engagement platforms<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Training set source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Case study source | external = E | responsibility = UoB | due = 30 Sep 2013 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Embedded systems set up for testing | external = I | responsibility = UoB | due = 30 Sep 2013 | comments = Physical setup internal, documentation external }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 5<br />
|title = Theory of Analysis of Machine Learning Techniques<br />
|start = 1 July 2013<br />
|end = 31 August 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Understand current machine learning techniques and decide if relevant<br />
* Select approach(es) for incorporating into framework<br />
* Refinement in the light of ongoing project development and experience<br />
|description =<br />
* Review existing uses including MILEPOST, directed learning, abductive learning ([[Literature]])<br />
* Whole-team working days to bring theorists together with implementers to select an approach and specify the API<br />
* Decision on choice of training approach, e.g. FFD, random, etc.<br />
* Iterative review during second year of programme, inc. potential for reordering<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Literature Review | external = E | responsibility = UoB | due = 30 Sep 2013 (draft) ; 31 Dec 2013 (final) | comments = May be appropriate for publication }}<br />
{{WPDeliverable | ref = 2 | title = Selection of core learning algorithm(s) | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 3 | title = Training approach | external = | responsibility = UoB | due = 31 Aug 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 4 | title = API for implementers | external = | responsibility = UoB | due = 30 Sep 2013 | comments = Output of working days }}<br />
{{WPDeliverable | ref = 5 | title = Review of approach | external = | responsibility = UoB | due = 31 Dec 2013 | comments = May be appropriate for publication }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 6<br />
|title = Theory of New Optimisation Passes<br />
|start = 1 July 2013<br />
|end = 31 August 2013<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Theoretical analysis of why energy optimisations work.<br />
|description =<br />
* Survey all existing work on energy measurement of relevant systems for data that can guide implementers of optimisation passes towards those specific to energy minimisation.<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Paper identifying characteristics suitable for compiler optimisation passes | external = E | responsibility = UoB | due = 31 Aug 2013 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 7<br />
|title = Training and Testing Prototype Infrastructure<br />
|start = 1 March 2014<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Set up of training infrastructure and demonstration with tests using existing optimisations<br />
* Refine/repeat for use with new optimisations<br />
|description =<br />
* Set up infrastructure for existing optimisations<br />
* Train infrastructure with small set with existing optimisations<br />
* Test with small set with existing optimisations<br />
* Repeat above with new optimisations from [[#WP8|Work Package 8]]<br />
|equipment = <br />
* High performance workstation<br />
* Embedded systems with hardware energy measuring<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Working training infrastructure (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Results from proof of concept training and test (existing optimisations) | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = This is proof of concept, not the evaluation }}<br />
{{WPDeliverable | ref = 3 | title = Working training infrastructure (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
{{WPDeliverable | ref = 4 | title = Results from proof of concept training and test (new optimisations) | external = E | responsibility = Emb. | due = 31 May 2014 | comments = This is proof of concept, not the evaluation }}<br />
| dependencies =<br />
* [[#WP2|Work Package 2]] (Compiler Infrastructure)<br />
* [[#WP4|Work Package 4]] (Test hardware, test applications)<br />
* [[#WP8|Work Package 8]] (New Optimisations)<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 8<br />
|title = Implement New Optimisation Passes<br />
|start = 1 September 2013<br />
|end = 31 May 2014<br />
|totaldays = <br />
|leader = Embecosm<br />
|contributors = Embecosm<br />
|objectives = <br />
* Design and implement optimisation passes in the GCC and LLVM compilers.<br />
|description =<br />
* Design new optimisation passes in light of theory from [[#WP6|Work Package 6]]<br />
* Implement new optimisation passes in GCC<br />
* Reimplement optimisation passes in LLVM<br />
|equipment = <br />
* General computing infrastructure<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Design optimisation passes | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Implement optimisation passes in GCC | external = E | responsibility = Emb. | due = 28 Feb 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Implement optimisation passes in LLVM | external = E | responsibility = Emb. | due = 31 May 2014 | comments = }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 9<br />
|title = Evaluation of Infrastructure<br />
|start = 1 June 2014<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol<br />
|contributors = University of Bristol, Embecosm<br />
|objectives = <br />
* Full evaluation of the infrastructure using full training sets, full test sets and large case studies, both with and without our new optimisations.<br />
|description =<br />
* Train infrastructure with existing optimisations<br />
* Evaluate with smaller tests with existing optimisations<br />
* Evaluate with case studies with existing optimisations<br />
* Repeat above with new optimisations<br />
* Write paper detailing findings<br />
* Review and refine paper<br />
|equipment = <br />
* Instrumented embedded systems<br />
* HPC Facilities<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Trained and tested complete system with full case studies | external = E | responsibility = UoB | due = 31 Aug 2014 | comments = }}<br />
{{WPDeliverable | ref = 2 | title = Draft paper presenting results | external = E | responsibility = UoB | due = 31 Aug 2014 | comments = }}<br />
{{WPDeliverable | ref = 3 | title = Final paper | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = This is the ultimate report and it is anticipated that it will take some time to develop. Additionally, engineering on the project will continue whilst the paper is written, so this will be a substantial task. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
{{WorkPackage<br />
|n = 10<br />
|title = Dissemination and Exploitation<br />
|start = 1 June 2013<br />
|end = 30 Nov 2014<br />
|totaldays = <br />
|leader = University of Bristol, Embecosm<br />
|contributors = University of Bristol, Embecosm<br />
|objectives =<br />
* Business case development, including market analysis<br />
* Engagement with relevant communities<br />
* Engagement with potential customers<br />
* Academic and business publications<br />
|description =<br />
* Develop business case by engagement of all stakeholders<br />
* Ongoing review of business case throughout project, leading to updated exploitation plan<br />
* Engagement with the technical community through participation in workshops (including EACO, NMI, etc.), conferences (including GNU Tools Cauldron, LLVM Developer Conference, etc.), presentations, training events and new media, drawing on the skills of AB Open<br />
* Engagement with potential customers<br />
* Publication of papers as described in various Work Packages.<br />
|equipment = <br />
* General computing infrastructure, including website and social media (http://mageec.org)<br />
|deliverables = <br />
{{WPDeliverable | ref = 1 | title = Exploitation plan | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Updated every quarter }}<br />
{{WPDeliverable | ref = 2 | title = Participation in workshops and training events | external = E | responsibility = Emb. | due = 30 Nov 2014 | comments = Dates to be confirmed }}<br />
{{WPDeliverable | ref = 3 | title = Website/wiki/new media | external = E | responsibility = Emb. | due = 31 Aug 2013 | comments = Due date is for initial set-up; maintained throughout the project }}<br />
{{WPDeliverable | ref = 4 | title = Papers | external = E | responsibility = n/a | due = n/a | comments = Detailed throughout the project plan; for dates and details, refer to the associated work packages. }}<br />
| dependencies =<br />
| dependents = <br />
}}<br />
<br />
== Gantt Chart ==<br />
The following Gantt chart details the interactions between work packages. Because one work package does not necessarily depend on the complete finish of another, a traditional finish-start relationship does not perfectly represent these dependencies. The Microsoft Project file used to generate this chart can be found at [[File:MAGEEC_Gantt.mpp]].<br />
<br />
[[File:MAGEEC_Gantt.png|800px]]<br />
<br />
[[Category:Planning]]</div>Simon