Meeting UoB 22 July 2013


Present: JB, SC, MG, AW, JP, SH, KE, OR


Hardware Energy Monitoring Report (Ashley)

  • Slow progress, due to the effort needed to get the hardware working
    • Software installation / OS issues
  • Moved to the previous version (V2) energy-monitor boards, since they are more suitable
  • Problems using V3 to measure external devices' energy consumption
    • V3 is capable, but the additional soldering required, the lack of a voltage divider and the need for external resistors make it harder to use than V2.
  • Benchmarks are working; verification of their correctness is in progress.
    • TODO: Now a priority to add internal verification code to the tests.
      • e.g. compare outputs against pre-computed correct values and return the result (see the sketch after this list).
  • When running the benchmarks, we should apply techniques such as extern or volatile variables to store e.g. the final result, so the compiler cannot optimise the computation away.
    • The Dijkstra program is not working; the others are.
  • TODO: Get last year's benchmark work (BEEBS / BBS) out there with more oomph (website, publicity, workshops...)
  • James will give a talk at OSHCAMP (13th/14th September) on the energy monitoring board design.
  • TODO: Run code that straddles flash banks and runs on flash to see differences in energy → feeds well into 'discovery' phase of ML.
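
For reference, a minimal sketch of the self-verification idea discussed above, assuming a hypothetical kernel and a hand-computed expected value (neither is an actual BEEBS benchmark):

  #include <stdint.h>

  /* volatile so the compiler cannot optimise the computation away */
  volatile uint32_t result;

  /* Hypothetical kernel standing in for a benchmark body. */
  static uint32_t benchmark_body(void)
  {
      uint32_t acc = 0;
      for (uint32_t i = 0; i < 1000; i++)
          acc += i * i;
      return acc;
  }

  /* Pre-computed correct value: the sum of i*i for i = 0..999. */
  #define EXPECTED 332833500u

  int main(void)
  {
      result = benchmark_body();
      /* Internal verification: compare the output against the known-good
         value and return the outcome, so the harness can spot miscompiles. */
      return (result == EXPECTED) ? 0 : 1;
  }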

MAGEEC Blog posts

  • Weekly(-ish) blog posts, from rotating members of the team.
  • TODO: 22/07/2013: Jeremy this week for an intro
  • TODO: 29/07/2013: Ashley next week for intro to energy monitoring hardware.
    • Ensure the draft is saved.
    • Andrew will perform the final publishing
  • TODO: 05/08/2013: Moon, an intro to his work.

Compiler Framework

  • No new update on implementation, due to attending the GCC meeting etc.
  • Feedback from the GCC meeting is that the HPC community (inc. LLL) is very interested in learning more about low power.
  • Looked at the research questions posted from the UoB discussion on the wiki.
    • Profile-directed optimisation is very powerful, perhaps the most powerful.
    • Joern to be quizzed by James about the kinds of profile data that come out, since it can impact the machine learning.
      • Can some examples of profile data be generated so that they can be represented for ML learning? (See the sketch below.)
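
As a strawman for that representation, a minimal sketch assuming a hypothetical per-function profile record (the fields and feature choices are illustrative, not a real GCC or MAGEEC data format):

  #include <stdio.h>

  /* Hypothetical per-function profile record, loosely in the spirit
     of gcov-style execution counts. */
  struct profile_record {
      const char   *function;
      unsigned long calls;       /* times the function was entered */
      unsigned long loop_iters;  /* total loop iterations observed */
      unsigned long branches;    /* conditional branches executed  */
      unsigned long taken;       /* ... of which were taken        */
  };

  /* Flatten a record into a fixed-length feature vector for ML,
     using ratios rather than raw counts so that programs of
     different sizes remain comparable. */
  static void to_features(const struct profile_record *p, double out[3])
  {
      out[0] = p->calls    ? (double)p->loop_iters / p->calls    : 0.0;  /* iterations per call */
      out[1] = p->branches ? (double)p->taken      / p->branches : 0.0;  /* branch taken rate   */
      out[2] = p->calls    ? (double)p->branches   / p->calls    : 0.0;  /* branchiness         */
  }

  int main(void)
  {
      const struct profile_record r = { "dijkstra", 120, 48000, 9600, 7200 };
      double f[3];
      to_features(&r, f);
      printf("%s: %.2f %.2f %.2f\n", r.function, f[0], f[1], f[2]);
      return 0;
  }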

MILEPOST Approach

  • Normalised the feature vector into the range [0, 1], using the number of instructions as the divisor (see the first sketch after this list).
  • Ran 1000 random on/off flag combinations, then kept the top 5% of the previously trained data.
  • Question on whether or not the flags are orthogonal.
  • MSc student is addressing:
    • Taking James' flags of significance, isolating these and testing them exhaustively.
    • Data set available by end of week.
    • Performance as the metric.
  • TODO: Paper on James' previous work? Of 130 flags in GCC only 13 make a difference [in our scenarios]. We want to understand why.
  • Consider systematic (FFD, fractional factorial design) vs exhaustive vs random selection and their effectiveness.
    • Part of the MSc work will address this.
  • Moon is investigating which of the MILEPOST vectors are and are not useful. He needs a large data set of 150+ applications to check them.
    • Work on WCET to extract additional programs to help with this.
    • Look (longer term) at the HPC space to augment these.
  • MILEPOST Approach Summary: Run with 1000 random flag sets. Having discarded the invalid flag sets, take the top 100 good ones and run again, accumulating statistics that show which flags appear in the best sets between iterations (see the second sketch after this list).
  • There are techniques for looking at flag dependencies, but they need further investigation.
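
A minimal sketch of the MILEPOST-style normalisation, assuming a hypothetical array of static feature counts (the feature names are illustrative):

  #include <stdio.h>

  /* Hypothetical static feature counts for one function, in the
     MILEPOST spirit (basic blocks, branches, loads, stores...). */
  enum { N_FEATURES = 4 };
  static const char *names[N_FEATURES] = { "bblocks", "branches", "loads", "stores" };

  int main(void)
  {
      unsigned counts[N_FEATURES] = { 12, 30, 95, 40 };
      unsigned n_insns = 250;   /* total instruction count: the divisor */
      double vec[N_FEATURES];

      /* Each feature becomes count / n_insns, so every entry lies
         in [0, 1] and differently sized programs are comparable. */
      for (int i = 0; i < N_FEATURES; i++) {
          vec[i] = (double)counts[i] / n_insns;
          printf("%-8s %.3f\n", names[i], vec[i]);
      }
      return 0;
  }

And a sketch of the random flag search summarised above, with a stand-in scoring function in place of a real build-and-measure cycle (the flag count, run count and scores are all illustrative):

  #include <stdio.h>
  #include <stdlib.h>

  enum { N_FLAGS = 10, N_RUNS = 1000, KEEP = N_RUNS / 20 };  /* keep top 5% */

  struct run { unsigned flags; double score; };  /* one on/off bit per flag */

  /* Stand-in for building and measuring a benchmark with the given
     flag set; here just a random score, purely for illustration. */
  static double measure(unsigned flags) { (void)flags; return (double)rand() / RAND_MAX; }

  /* Sort runs by score, best first. */
  static int by_score(const void *a, const void *b)
  {
      const struct run *x = a, *y = b;
      return (x->score < y->score) - (x->score > y->score);
  }

  int main(void)
  {
      static struct run runs[N_RUNS];
      for (int i = 0; i < N_RUNS; i++) {
          runs[i].flags = (unsigned)rand() & ((1u << N_FLAGS) - 1);  /* random on/off flags */
          runs[i].score = measure(runs[i].flags);
      }
      qsort(runs, N_RUNS, sizeof runs[0], by_score);

      /* Accumulate statistics over the best sets: how often does
         each flag appear in the top 5%? */
      int appears[N_FLAGS] = { 0 };
      for (int i = 0; i < KEEP; i++)
          for (int f = 0; f < N_FLAGS; f++)
              if (runs[i].flags & (1u << f))
                  appears[f]++;
      for (int f = 0; f < N_FLAGS; f++)
          printf("flag %d: in %d of the top %d sets\n", f, appears[f], KEEP);
      return 0;
  }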

Framework representation for enabling later advanced ML work

  • Reference SC's previous Overall Design slide.
  • Oliver outlined the “credit assignment” problem: how the ML can infer what causes an improvement.
  • gen_features() just gives a list of passes that will be run.
  • A mapping of passes to flags is needed (see the sketch after this list).
  • A lot of debate about whether to use a feature vector, the IR or the source code for selecting the relevant features for ML.
  • Main takeaway – we need to be able to cope with changes in which features are relevant, as our knowledge of this evolves.
    • Feature info passing should be generic (enough to be able to send IR, if necessary).
    • Should plan for a plugin re-write next year, if necessary, to aid this.
  • How to support backtracking prediction.
    • Unlikely in GCC due to global state.
  • TODO: SC to produce a draft spec by 31/07/2013
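
gen_features() is only mentioned by name in the minutes; purely as a sketch of the kind of interface being debated (all of the types, pass names and the pass-to-flag table below are assumptions, not the actual MAGEEC plugin API):

  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical shape for gen_features(): per the minutes it
     currently just returns the list of passes that will be run. */
  static const char **gen_features(size_t *n)
  {
      static const char *passes[] = { "ivopts", "unroll_loops", "vectorize" };
      *n = sizeof passes / sizeof passes[0];
      return passes;
  }

  /* The missing piece noted above: a table mapping pass names to the
     command-line flags that control them. The GCC flags shown are
     real; the pass names and pairings are only approximate. */
  struct pass_flag { const char *pass; const char *flag; };
  static const struct pass_flag pass_to_flag[] = {
      { "ivopts",       "-fivopts"         },
      { "unroll_loops", "-funroll-loops"   },
      { "vectorize",    "-ftree-vectorize" },
  };

  static const char *flag_for(const char *pass)
  {
      for (size_t i = 0; i < sizeof pass_to_flag / sizeof pass_to_flag[0]; i++)
          if (strcmp(pass_to_flag[i].pass, pass) == 0)
              return pass_to_flag[i].flag;
      return NULL;
  }

  int main(void)
  {
      size_t n;
      const char **passes = gen_features(&n);
      for (size_t i = 0; i < n; i++)
          printf("%-14s -> %s\n", passes[i], flag_for(passes[i]));
      return 0;
  }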

Planning

  • Hardware design and build completed on time.
  • No new streams of work coming online.
  • All else proceeding OK.
  • TODO: By 29/07/2013: Action the kit buying

Actions carried forward from previous meeting:

  • SH: Apply for BlueCrystal accounts for all.
  • KIE: To decide whether to go for two PCs with 32GB each, or one PC with two cores and much more RAM.