EPSRC CDT in Next Generation Computational Modelling

International Conference on Software Engineering

ICSE 2016 Summary

Presentation by Hans Fangohr, summarising the presentations made at ICSE 2016

Hans Fangohr presented a summary of the talks given at the International Conference on Software Engineering (ICSE) 2016 in Austin, Texas, a conference focused on what software engineering can do for science. He concentrated on the talks he found most interesting and relevant to the NGCM cohorts.

Continuous Deployment

Continuous deployment can be seen as an extension of continuous integration. Continuous integration is the practice of automatically integrating newly developed code into the main code line: automatic building, running of tests, and so on. Continuous deployment extends this by automatically releasing the code to customers and users. Facebook uses this methodology to release software updates: changes are generally small and isolated, are released to the user immediately, and the decision to release lies with the developer. The advantages of continuous deployment are that it gives the developer end-to-end responsibility and, because each update is small, it reduces the risk attached to any single release. The main disadvantage is the significant cost of setting it up.
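The pipeline described above can be sketched as follows. This is a minimal illustration, not any organisation's actual tooling; all function and field names are hypothetical stand-ins for the real build, test, and release machinery.

```python
def run_tests(change):
    """Stand-in for the automated test suite run by continuous integration."""
    return all(check(change) for check in change["checks"])


def deploy(change, released):
    """Stand-in for the automatic release step added by continuous deployment."""
    released.append(change["id"])


def pipeline(changes):
    """Integrate each small, isolated change and release it immediately if green."""
    released = []
    for change in changes:
        if run_tests(change):         # continuous integration: auto-build and test
            deploy(change, released)  # continuous deployment: auto-release to users
    return released


# A passing change is released; a failing one is held back automatically.
released = pipeline([
    {"id": "fix-1", "checks": [lambda c: True]},
    {"id": "bad-2", "checks": [lambda c: False]},
])
```

The point of the sketch is that there is no manual release step between a green test run and the user: the developer's change either fails its checks or ships.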

Hans Fangohr went on to present results from organisations that have put continuous deployment into practice. The number of critical issues remained the same even as company size grew; no separate testing team was required, since release decisions are made by the developers; and developers released daily rather than weekly, giving a faster response to issues. Employing continuous deployment successfully does, however, require a particular management style. Because developers are now in control, it calls for leadership that influences rather than commands (indirect leadership), and adopting this style was reported to increase the number of issues resolved.

Chaos Engineering

Chaos Engineering is an additional step in the testing timeline. The idea is to test how the system behaves when things go wrong: for example, pulling a cable out of a data centre and observing how the system handles it. The guiding principle is that if a test seems scary to perform, it should be performed so often that it no longer seems scary. Further examples include fault injection, network overload, latency injection and disconnecting machines.
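Two of the techniques listed above, fault injection and latency injection, can be sketched as a small wrapper around any service call. This is an illustrative toy, not a real chaos-engineering tool; the names and the `ConnectionError` used to simulate a disconnected machine are assumptions.

```python
import random
import time


def chaos(failure_rate=0.0, max_latency=0.0, rng=None):
    """Wrap a callable so that each invocation may be delayed or fail.

    failure_rate: probability that a call raises a simulated fault.
    max_latency:  upper bound (seconds) on randomly injected delay.
    rng:          optional seeded random.Random, for repeatable experiments.
    """
    rng = rng or random.Random()

    def wrap(fn):
        def inner(*args, **kwargs):
            time.sleep(rng.uniform(0.0, max_latency))   # inject latency
            if rng.random() < failure_rate:             # inject a fault
                raise ConnectionError("chaos: simulated machine disconnect")
            return fn(*args, **kwargs)
        return inner

    return wrap
```

A team might wrap a service's network calls with such a decorator in a staging environment and run the experiment repeatedly, checking that retries, timeouts and fallbacks keep the overall system healthy.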

Evolution of C Programming Practice

This was an interesting look at how C programming has evolved since the 1970s and how programming practice has changed, highlighting above all the importance of code sustainability. Code from the 1970s is extremely difficult to read, with almost no comments and heavy use of GOTOs. This underlines the importance of sticking to a consistent programming practice so that code remains sustainable for much longer. The paper can be found here.

Overall, the conference brought together a small, open community with a broad mixture of research. This research sits very close to 'real world' practice, and it gives research developers insight into practices they can incorporate into their own development to mitigate issues that industry has already tackled.
