1 of 5

OpenDaylight Software Quality Improvements

Implementation plans based on

presentation at ODL Fluorine DDF

Los Angeles, March 2018

Brady Johnson bjohnson@inocybe.com
Ryan Goulding rgoulding@inocybe.com
Tom Pantelis tpantelis@inocybe.com

2 of 5

ODL DDF presentation results

As a result of presenting the original presentation at the ODL DDF in March 2018, the outcomes were:

  • It can be difficult to understand what a SW quality metric score means
  • Individual project SW quality scores could lead to undesired comparisons
    • This was not the original intent, but could definitely occur.
  • Instead of measuring SW quality metrics, let's just start applying SW quality improvement measures, as explained next.

3 of 5

Applying ODL SW quality improvement measures

Start by creating non-voting Jenkins jobs, then make them voting in the next release.

  • Starting in Fluorine, create a non-voting Jenkins job with a SW quality improvement item enabled.
    • We should probably start by enabling CheckStyle.
    • This job will probably fail for some projects, but since it's a non-voting job, the project can still merge patches.
  • In the next release
    • Move this SW quality improvement item to a voting job
    • Add another (or a few more) SW quality improvement items to the non-voting job
  • Take the SW quality improvement items to be applied from a prioritized list.
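As a sketch of what the CheckStyle item could look like at the project level (the plugin settings below are illustrative assumptions, not an agreed ODL configuration), a project's pom.xml could mirror the non-voting/voting split: failOnViolation=false matches the non-voting phase, and flipping it to true makes the check gate the build.

```xml
<!-- Hypothetical sketch: enable CheckStyle in a project's pom.xml.
     failOnViolation=false ~ non-voting phase (report only);
     flip it to true when the Jenkins job becomes voting. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <executions>
    <execution>
      <id>check-style</id>
      <phase>verify</phase>
      <goals><goal>check</goal></goals>
      <configuration>
        <failOnViolation>false</failOnViolation> <!-- set true in the next release -->
        <consoleOutput>true</consoleOutput>
      </configuration>
    </execution>
  </executions>
</plugin>
```

A project enabling this per submodule would simply add the plugin only in the submodules that are ready for it.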

4 of 5

Initial SW Quality Improvement items

  1. CheckStyle - static code analysis
    • Each ODL project can partially (per submodule) or completely (project-wide) enable it
    • When enabled, the build will fail if issues are encountered
  2. FindBugs - static code analysis
    • Each ODL project can partially (per submodule) or completely (project-wide) enable
    • When enabled, the build will fail if issues are encountered
  3. Javadocs
  4. Jacoco/Sonar - Unit Test coverage
    • Measure percentage of code covered by UT (0-100%)
    • Consider taking into account the number of skipped tests (dead UT code)
  5. CSIT coverage
    • Percentage of tests currently passing
    • Ideally we should also measure test stability: the percentage of times the tests have failed recently
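For the Jacoco item, a coverage gate can be expressed directly in a project's pom.xml. The following is a hedged sketch only: the 0.50 line-coverage threshold is an illustrative value, not an agreed ODL minimum.

```xml
<!-- Hypothetical sketch: Jacoco UT coverage gate in a project's pom.xml.
     The 0.50 minimum is an example threshold, not an ODL policy. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>prepare-agent</id>
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <execution>
      <id>coverage-check</id>
      <goals><goal>check</goal></goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.50</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

As with CheckStyle, the gate could first run in a non-voting job, then move to a voting job once projects meet the threshold.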

5 of 5

Looking forward

More Static Quality improvements:

  • Compare FindBugs with SpotBugs
  • Average patch size
  • Using deprecated code
  • Existing TOX job
  • etc.

Runtime Quality improvements:

  • Memory usage (leaks)
  • Application start-time
  • Very chatty logging
  • etc.