Sunday, October 5, 2014

Advice on going to your first conference

I was asked for advice by someone going to his first conference … in this case CodeMesh in London. Here are my thoughts … any comments or other advice?

Strategy … what are you aiming to find out from the conference: some very general impressions of what is going on in functional innovation, something focussed on a particular language or languages, or on particular technologies? Everything that follows depends on this …

Plan … CodeMesh has 4 parallel tracks (a real hassle) but it's well worth making sure that you have a plan for what you most want to hear, reflecting your strategy. You should be able to move between individual talks, but that can sometimes be tricky because of the lack of synchronisation between different rooms, as well as the distances between them. If moving mid-session proves difficult, you can always change at session breaks.

Another way of choosing is to go to the talk that you know least about. If you go to the one you know most about, that talk will probably spend 80% or more of its time telling you things that you already know. Of course, it needs to be one you're interested in …

Networking … a lot of what happens during the conference is outside the sessions, so do make sure that you make the most of the coffee / tea / meal breaks, and the evening sessions. Speakers are always happy to chat, so you can engage with them outside sessions, or follow up on any questions you ask in a session (scary, but a way of getting noticed). If you don't feel comfortable asking a question after a talk, you can always email the speaker, introduce yourself, and ask it that way.

Find an introducer, and have them introduce you to some people. If you are going with your PhD supervisor or another colleague or friend who has been there before, they should do the introductions.

Presenting your ideas … you're going to meet people who'll ask what you're working on and you want to interest them and move the conversation along, not stumble over how to explain what you're doing. So, have a 30 second elevator pitch ready.

Talk … OK, you may not feel ready for this, but if you have something specific and interesting to say you can often give a 5 minute "lightning talk" at meetings like CodeMesh. That gets you and your work noticed, and people are usually very generous in listening to and supporting less experienced speakers.

Learn … if you can get to the tutorials then you can learn a whole lot from them. There are some excellent tutorialists at CodeMesh this year.

Social Media … increasingly there is a whole virtual side to conferences, so track the Twitter feed and other online activity, and contribute yourself too.

Branch out … if you're there with your buddies, don't just stick with them, but aim to meet new people too, particularly at any of the conference social events. Even a casual conversation repeated over a couple of conferences lays a foundation for a deeper professional relationship, especially with peers.

Refreshments … there's always free (well, paid for in the registration price) coffee/tea etc. at the breaks and lunch in the middle of the day, but at quite a lot of conferences there's breakfast too, so it's worth getting along in time for that.

Afterwards you can follow up with people you have met by email or social media. You can also catch up on presentations which you missed by watching the video if the conference talks are recorded. Personally, I find that despite my best intentions, it's very unusual for me to find the time to do this.

Particularly for big conferences, it's best to pick and choose which talks to attend rather than sitting in on everything, otherwise you'll get burned out in the first couple of days. “I'd also advise listening to the speakers rather than sitting on your laptop/phone - personally I'd ban such devices from talks!” Also, keep away from alpha-male superstars and their groupies … focus on the people who give presentations that fire you with enthusiasm, and those who take your presentation seriously.

Finally, have fun soaking up all the new ideas and meeting all the interesting people behind them.

Updates … thanks to Scott Fritchie, Andy Gill, Kathy Gray, Graham Hutton, Stefan Kahrs, Greg Michaelson, Neil Mitchell and Gem Stapleton for their comments and suggestions.

HTH

Simon

Monday, August 25, 2014

Reading Robert Macfarlane by the internet

There's an old-fashioned pleasure to reading on a wet August afternoon. Robert Macfarlane's The Old Ways takes you – in imagination – out into the wilder (or indeed not so wild) parts of Britain. What increases the pleasure and depth of the experience is reading with the internet by your side. With it, we get so much further …

First, the maps – Ordnance Survey if you have paid for them, or Google Maps if not. Walking along the Broomway in (or rather off) Essex takes you to that footpath along the sands, right next to the "DANGER AREA" signs. In the Hebrides, we can find the islands on Google Maps – and satellite view – unnamed, but unmistakable from his descriptions. And then to Wikipedia to see what gannets look like and read about the peculiar anatomy that sustains their deep dives into the ocean, making the story of the gannet that pierced the hull of the boat but kept it plugged entirely believable.

And then on to Harris itself, trying to trace the walk he makes. Again we have topography and named lakes, but no hill names; the hills, though, cast shadows on the satellite picture, and photographs culled from somewhere (even Street View made it here) show the view from the ground. Then we can look at Steve Dilworth's art and read what Iain Sinclair says about him.

Awe-inspiring.

Cloud Refactoring

A draft review of “Cloud refactoring: automated transitioning to cloud-based services” by Young-Woo Kwon and Eli Tilevich, from Automated Software Engineering, (2014) 21:345–372, DOI 10.1007/s10515-013-0136-9; to appear in Computing Reviews.

Refactoring is the process of changing how a program is written without changing what it does; even before the term was coined, it was practised by programmers as "program restructuring" from the early 1950s. Refactoring itself came to prominence with work by Opdyke, Griswold and Johnson in the early 1990s and was popularised by Fowler's 1999 book. Refactoring is done to improve code for a range of reasons: to make it conform to coding standards, to make it easier to read, or to prepare it for modification or enhancement.
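To make the idea concrete, here is a minimal before/after sketch in Haskell (my illustration, not an example from the paper): an "extract definition" refactoring that leaves the program's behaviour unchanged while naming its intent.

```haskell
-- Before: a magic number and a repeated subexpression, inline.
totalBefore :: [Double] -> Double
totalBefore xs = sum xs + sum xs * 0.2

-- After an "extract definition" refactoring: the behaviour is
-- identical, but the intent is named and the code is easier to
-- read and to modify.
vatRate :: Double
vatRate = 0.2

totalAfter :: [Double] -> Double
totalAfter xs = net + net * vatRate
  where net = sum xs
```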

Whatever the case, refactorings of large code bases can be infeasible without automated – or semi-automated – tools, and many IDEs incorporate refactorings for a variety of languages, with support perhaps most developed in the IDEs for Java, including IntelliJ and Eclipse. Refactoring "in the small" is the preserve of the developer, and may be done as part of his or her day-to-day development process; larger-scale refactoring is often anticipated, but perhaps harder to justify as part of an aggressive release cycle, unless, of course, there is some concrete gain to be made. What better example could there be of this than migrating an existing system to the cloud?

Taking a system and moving it to the cloud must be the right thing to do: it provides scalability, resilience, and also fits the zeitgeist. However, as the authors make very clear, it is not without difficulties. It is important to preserve functionality – the system should not change what it does – but also it should maintain non-functional properties like efficiency (e.g. latency and throughput) and questions of resilience are more acute in a distributed/cloud setting. In common with many other refactoring tool builders, the authors propose a two-pronged approach to the problem: first, they give an analysis to provide recommendations of how systems might be refactored and then they develop a tool to implement the refactorings identified. Their first phase combines static clustering analysis with runtime profiling to identify potential services which can be migrated, and their second implements refactorings that make this change, through introducing the appropriate interfaces and proxies and at the same time adding fault-handling facilities to deal with the additional risks introduced by moving to a more distributed application platform.
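The general shape of such a refactoring is easy to sketch. The Haskell fragment below is my own sketch of the idea, not the authors' Java/Eclipse tooling, and the names (PriceService, quote and so on) are invented: a direct local call is hidden behind a service interface, so that a cloud-backed proxy with explicit fault handling can be substituted without touching client code.

```haskell
import Control.Exception (SomeException, try)

-- The extracted service interface: clients depend only on this.
newtype PriceService = PriceService
  { quote :: String -> IO (Either String Double) }

-- The original, local implementation, wrapped behind the interface.
localService :: PriceService
localService = PriceService (\item -> pure (Right (priceOf item)))
  where priceOf _ = 42.0   -- stand-in for the original local logic

-- A proxy for the migrated, remote service: same interface, but it
-- catches the failures a distributed deployment introduces and
-- turns them into explicit errors.
remoteProxy :: (String -> IO Double) -> PriceService
remoteProxy call = PriceService $ \item -> do
  r <- try (call item) :: IO (Either SomeException Double)
  pure (either (Left . show) Right r)

-- Client code is unchanged whichever implementation it is given.
report :: PriceService -> String -> IO ()
report svc item = quote svc item >>= print
```

In the paper's setting the analogous transformation introduces Java interfaces and proxies mechanically, with the engineer choosing which classes become services.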

These refactorings are included in Eclipse and available through its standard refactoring workflow, and the work is demonstrated through two small examples and a larger case study performed for GE Research. One of the key questions faced by the designers of a cloud refactoring tool and its users alike is how much of the refactoring workflow should be automated. In discussing a number of examples the authors say that "we selected appropriate classes for the reason of the performance, call-by-reference, and meaning of features", thus making clear the role of the domain-aware engineer in the process. It would have been interesting to hear more of the developers' own views on the results of the analysis: were the services identified meaningful to them? Would they have structured things differently from the tool's analyses?

In summary, this paper makes a clear and well-described contribution to a fast-moving field: Google Scholar, for example, gives over 1000 references matching the keywords "cloud" and "refactor", and this number can only grow as migrating to the cloud proves to be more of a challenge than its advocates suggest.

Saturday, July 19, 2014

In honour of Robert Wyatt

Picture the scene … you're having an evening in the pub, with the usual music in the background: guitars, 4-4 bass beat, when suddenly it changes: it's a song with more complicated rhythms and dissonant notes. What is more, it's sung in a lugubrious style by someone who sounds like a real person, rather than an identikit American drawl. The singer is Robert Wyatt, our honorary graduand today, and picking music to catch your attention like this has come to be called "Wyatting" in his honour.

Canterbury is known historically for the cathedral, and Chaucer’s Canterbury Tales, but for those of us who grew up in the 60s and 70s, it was also the home of the Canterbury scene – musicians whose careers started out in the city in the late 60s. The scene may have been no more than a group of boys whose families had eclectic record collections … record collections that included 20C classical – Hindemith but “nothing German … not long after the war” – and jazz – Max Roach and John Coltrane, perhaps – but those boys went on – jointly and separately – to be a key part of the creative explosion that was the 1960s, and Robert was there at its centre.

Right from the start, Robert was one of a very select group – others are Justin Bieber, Dave Clark, Karen Carpenter, Levon Helm and Ringo Starr – of singing drummers. As he tells it, drumming was something you could do while listening to a Max Roach record, and what's more you could get started by drumming with rolled-up newspapers. You could sing along too, and teach yourself to be a musician. All of this meant that at school – the Simon Langton Boys School, here in Canterbury – Robert was rather left behind, and instead he became one of the Wilde Flowers, who played in Canterbury and further afield for a couple of years. That is “Wilde” with an “e”, in honour of Oscar – an example of the wordplay that “lets words collide” running through all Robert’s work.

In 1966 Robert helped to form Soft Machine, with David Allen (from Herne Bay, and introduced to Robert as “someone else who has grown their hair long”), Kevin Ayers and Mike Ratledge. Soft Machine never managed to break through to the rock or pop mainstream, but were hugely influential, first for their gentle English surrealism, and later for their cool, stripped down, fusion jazz rock. They cut their teeth – and sharpened up their act – playing support to Jimi Hendrix in a long 1968 tour of the USA. Robert observes: when you're in front of 5,000 young Texans waiting to hear Hendrix, you don't mess about; … it also helps if you don't have a guitarist in the band! This tour brought them American success, which was mirrored on the continent, where their jazz style found a sympathetic ear. In the UK, they became the first rock group to play at the Proms.

Soft Machine broke up, reformed to make their landmark “Third” album – a double LP with four side-long pieces – and finally Robert left them for good. He formed a larger group – Matching Mole, a bilingual pun on “machine molle” / soft machine (groan!) – but not long after that, in 1973, an accidental fall from a window left him with injuries that mean he now uses a wheelchair. This took him – in his words – into just another way to be, and it meant that he had time to slow down and think about how to sing. His first record after that – Rock Bottom – sets the pattern for his work since: slow songs, striking tunes and rhythms, usually based on keyboard melodies. A reworking of the Neil Diamond song I'm a Believer – first sung by the Monkees – gave him his first hit, but that is only one side of his work: he's a great musical collaborator, and played with a huge range of people through the 1970s and 80s, from free jazzers like Carla Bley to avant-gardists and more traditional rock stars, often with a political message. His most remembered song from that time is his version of Elvis Costello's Shipbuilding: an elegiac meditation on the Falklands War and its effect on the parts of the country that had been hit hardest by the Thatcher government's policies.

Looking back over Robert's musical career – which started at around the same time as the University of Kent, and shares its 50th anniversary – it is hard to think of more than a handful of musicians who have been able to keep their music as vital and original as it was when they began. His recent work has seen collaborations with the Brodsky Quartet – also honorary Kent graduates – and Brian Eno, and a re-recording of a set of songs by the electronica group Hot Chip.

Recognition for his work has come in many forms: he has been a guest editor of Radio 4's Today programme, he is an honorary doctor of the University of Liège, and he is a petit fils ubu of the Collège de Pataphysique. Locally, he is celebrated in a life-size stencil by the street artist Stewey in Canterbury's Dover Street, on the site of the Beehive club where he played at the start of his career.

In its turn, the University of Kent would like to record its appreciation of Robert's work. For his musical achievements and influence over the last fifty years, most honourable deputy vice-chancellor, to you and to the whole university I present Robert Wyatt to be admitted to the Degree of Doctor of Music, honoris causa.

[Text of the oration for Robert Wyatt's honorary degree award, Canterbury, 18-7-14].

Thursday, July 3, 2014

Fifteen facts about EPSRC

Thanks very much to Alex Hulkes for his visit and presentation about EPSRC in general and their ICT programmes in particular. Here are fifteen things that I learned.
  1. EPSRC has about £2.5bn worth of live projects at any time.
  2. “We have to do applied research …” because it is part of EPSRC's Royal Charter.
  3. Terminology: EPSRC has two kinds of theme: “capabilities” correspond to specific research areas, while “challenges” are cross-cutting themes like energy or global uncertainty.
  4. Terminology (2): for EPSRC “interdisciplinary” means belonging to more than one of their research areas.
  5. 75% of PGR funding is DTP plus CASE, and so not subject to particular shaping or direction.
  6. Pathways to impact: either say how you will achieve impact, or say that it doesn’t make sense for your research to have impact (at this point). 
  7. It’s good if you can say how your research project fits in with what EPSRC are currently funding, as long as it’s not a straight duplication of work that’s already funded.
  8. Developing leaders (fellowships): while it’s important to have a good research project, that’s a necessary rather than a sufficient condition: you need to be able to convince the panel that you are a research leader.
  9. These schemes are less popular than earlier fellowships schemes, perhaps because of the difficulty of getting evidence of leadership potential down on paper.
  10. In ICT, EPSRC wants to keep developing and encouraging new research areas. It also wants greater collaboration with other areas.
  11. It’s also keen to get ambitious proposals: of the funding for responsive – rather than directed – research, some 50% of the cash is in longer/larger grants.
  12. Proposals will get at least 3 reviews before going to panel.
  13. There's no correlation between the number of reviewers and the success/failure of the proposal.
  14. The PI response is important: the factual response has an effect when (and only when!) you can provide evidence that shows that an objection doesn't hold.
  15. Success rates (at least for ICT) are constant across areas set to grow, stay constant, or shrink. On the other hand, the “grow” area has received more applications.

Tuesday, June 17, 2014

Review of "Certified Programming with Dependent Types"


Chlipala's text Certified Programming with Dependent Types is an outstanding introduction to how programs can be guaranteed to be correct, by means of the Coq theorem prover – programs that are, in his terminology, “certified”. While machine-assisted proof has been possible for more than a quarter of a century, it is only in the last five years that a substantial body of fully formal proofs has been delivered. These include mathematical results – such as Gonthier's proof of the four colour theorem – and also those in the domain of theoretical computer science. Indeed, the “POPLmark challenge” (http://www.seas.upenn.edu/~plclub/poplmark/) has set a benchmark for proof mechanisation in language metatheory. So this text is timely in providing an accessible introduction to the area; but what is it that makes it stand out?

Firstly, Chlipala gives an excellent introduction to the area, explaining the background to the different approaches to theorem proving in systems such as ACL2 and PVS, as well as the Coq system that is the subject of the book. Complementing this is a discussion of the logics implemented by the various systems. He argues cogently for a theory that supports dependent types, under which the result types of functions can depend upon the values of their inputs. Dependently typed systems support a dual interpretation: objects can be seen as values belonging to types, or equivalently as proofs of propositions – the so-called ‘Curry-Howard’ isomorphism. Allowing the interpretations to co-exist gives a type system that can express program pre-conditions, or alternatively a logic in which a functional programming language can be used to build proofs. Coq implements a powerful dependently typed theory with a functional programming foundation, proof checking in a secure kernel (according to the ‘de Bruijn principle’), advanced proof automation through the tactic language Ltac, and a principle of reflection.
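Coq itself is the natural vehicle for this, but even Haskell's type-system extensions give a rough flavour of the central idea that types can mention values. A sketch of my own (well short of full dependent types): length-indexed vectors, where the type of a function records a fact about its argument.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Type-level natural numbers, so that a vector's length can
-- appear in its type.
data Nat = Zero | Succ Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Zero a
  VCons :: a -> Vec n a -> Vec ('Succ n) a

-- The type is a precondition: callers must supply a vector whose
-- type shows it is non-empty, so no run-time check is needed.
vhead :: Vec ('Succ n) a -> a
vhead (VCons x _) = x

-- Appending tracks lengths in the type, mirroring addition on Nat.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Zero     n = n
  Add ('Succ m) n = 'Succ (Add m n)

vappend :: Vec m a -> Vec n a -> Vec (Add m n) a
vappend VNil         ys = ys
vappend (VCons x xs) ys = VCons x (vappend xs ys)
```

In Coq proper, the type of vappend is as much a proposition about addition as a typing judgement – exactly the dual reading described above.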

Secondly, Chlipala opens the exposition in Chapter 2 with a worked set of examples that concentrate on language evaluators and compilers. Rather than covering all the necessary material first, he plunges straight in, giving a survey of what is possible and saying that “it is expected that most readers will not understand what exactly is going on here”. Does this approach work? Anyone tackling the chapter clearly needs to understand a language like Haskell or ML, but with that knowledge it is possible to gain a good sense of how the system is used in practice, and so I would personally endorse it. After all, it's possible to skip on to Chapter 3 and follow a sequential approach if this proves too forbidding.

Thirdly, in contrast to some introductions, the book promises a “pragmatic” approach to proof construction or engineering. This is welcome, since Chlipala acknowledges that Coq is a large system that has grown in complexity over the last twenty years. Does he deliver on his promise? The book is divided into four main sections: the first two cover the fundamental technical material, namely “basic programming and proving” and “programming with dependent types”, in some 250 pages, while the remaining two sections (130pp in total) cover “proof engineering” and “the big picture”, and it is in these that he is able to address the pragmatics of proof. He covers not only common approaches like logic programming in proof search but also the problems of larger-scale proof, such as the evolvability and robustness of proofs. This is forcefully communicated through a set of “anti-patterns” that militate against well-structured and evolvable proofs, together with strategies for avoiding them.

The book doesn’t contain exercises, but these can be found on the book's website, contributed by readers. The website also provides access to the full text of the book, the Coq code for all the chapters, and supporting libraries of code and tactics. The book is very clearly written, and the author has a direct, approachable style. If you want to find out more about using the Coq system for building realistic, large-scale proofs, particularly for certifying programs, then I recommend this highly.

Saturday, May 10, 2014

PROWESS mid-term workshop

I'm just back from a trip to Borås in Sweden, where we held a one-day workshop for the PROWESS project to showcase what it has achieved, just over half way through the project. The aim of PROWESS is to bring the benefits of property-based testing to the testing of web services. We're a consortium of universities from the UK (Sheffield and Kent), Sweden (Chalmers) and Spain (A Coruña, UPM); the Swedish research organisation SP; and SMEs from those three countries too: Erlang Solutions (UK), Quviq (Sweden) and Interoud (Spain).
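For anyone who hasn't met it, property-based testing replaces hand-picked unit tests with general properties checked against many randomly generated (and automatically shrunk) inputs. A minimal sketch in the original Haskell QuickCheck – the project itself uses Quviq's Erlang QuickCheck, and urlEncode/urlDecode here are toy stand-ins, not project code:

```haskell
import Test.QuickCheck

-- One property stands in for unboundedly many unit tests:
-- decoding must undo encoding for every string.
prop_roundTrip :: String -> Bool
prop_roundTrip s = urlDecode (urlEncode s) == s

-- Toy encoder/decoder, for illustration only.
urlEncode :: String -> String
urlEncode = concatMap enc
  where enc ' ' = "%20"
        enc '%' = "%25"
        enc c   = [c]

urlDecode :: String -> String
urlDecode ('%':'2':'0':rest) = ' ' : urlDecode rest
urlDecode ('%':'2':'5':rest) = '%' : urlDecode rest
urlDecode (c:rest)           = c   : urlDecode rest
urlDecode []                 = []

-- Run 100 random tests; any failing input would be shrunk to a
-- minimal counterexample before being reported.
main :: IO ()
main = quickCheck prop_roundTrip
```

PROWESS applies the same idea, via models of stateful behaviour, to interactions with web services.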

As well as attendees from the project we had more than twenty others from companies in Sweden, Denmark, the UK and the Netherlands and universities in Denmark, Sweden, the UK and Brazil. The day went very well, with project members like myself being pleased to see how the separate strands of the work are coming together, and that the interactions we're having informally are turning into practical tools.

More importantly, we got positive reports back from the external attendees too, who were able to give us really helpful suggestions about how we could extend and build on what we had done. The prize for the coolest talk has to go to Benjamin Vedder, whose demo of fault injection for a quadcopter using a QuickCheck model stole the show. [OK, the fault injection only happened in simulation – to the relief of the front row – but it was still fun to see QuickCheck in Erlang used to inject faults into C++ embedded systems!]
[Photo: Benjamin's quadcopter]

The morning programme gave an overview of the project and of property-based testing in QuickCheck, as well as introducing the VoDKATV platform from Interoud, which provided a case study running through the more detailed presentations in the afternoon.
The afternoon gave participants a chance to talk in more detail about specific technical innovations in the project:
  • Inference of state machines from QuickCheck traces – Kirill Bogdanov, University of Sheffield.
  • Automating Property-based Testing of Evolving Web Services – Huiqing Li, University of Kent, and Laura Castro, University of A Coruña.
  • Fault injection – Benjamin Vedder, SP.
  • More-bugs – how not to find the same bug over and over again – Ulf Norell, Quviq.
  • A Property-based Load Testing Framework – Diana Corbacho, Erlang Solutions Ltd, and Clara Benac Earle, UPM, Madrid.
  • Smother: extended code coverage metrics for Erlang – Ramsay Taylor, University of Sheffield.
  • Automatic complexity analysis – Nick Smallbone, Chalmers University.
The full programme for the day, including talk abstracts, is here; and the slides for the presentations are here.

I'd recommend this kind of open day event to anyone planning a complex research project, as it gives you all a chance to get some invaluable external insights into the work that you're doing. It's also a real fillip to see the tangible progress that has been made, and it enthuses project members to do even better in the second half of the project.