This labs post was first published on Perl.org’s blog.
Perl QA Hackathon 2016
This year I had the chance to attend the Perl QA Hackathon as “the intern”. Needless to say, it was an honour and a great pleasure to take part in the event and spend four very stimulating days coding and discussing with people I have huge respect for, and whose work has been a reference for me since I started my journey in the Perl world.
I have to admit I went into the event feeling quite a bit of pressure: I wanted to be helpful and contribute work while, at the same time, not being too disruptive.
On that front, I was honestly very pleasantly surprised by how welcome the rest of the attendees made me feel from the very beginning. I do believe this is quite unusual in tech communities and, therefore, it deserves to be valued.
Without any doubt, this was key to making me feel comfortable and being much more productive during the hackathon (thanks!).
… Now, let’s talk about “work”!
I “landed” at the hackathon with a clear idea of which projects I could help with. One of them involved the CPAN Testers team.
It was really easy to start working with Barbie and Doug Bell and, in the end, I spent most of the hackathon working with their project.
The clearest memory I have from the last YAPC::EU is Ovid and his “Perl is battle-tested” slide.
Question: how do you get something to be “battle-tested”?
(hint: it contains the word “test”).
The idea I wanted to work on was a proof of concept (POC) for smoke-testing CPAN distributions and reporting the results to CPAN Testers, relying on fleets of AWS spot instances.
Although AWS EC2 computing time is not exactly cheap, “spot instances” come at a much lower price and with a rather particular approach: AWS necessarily runs a lot of excess capacity, and what they’ve come up with is a marketplace specifically for that spare computing capacity.
The price fluctuates with supply and demand; you place a bid for a certain number of instances of a given type and, as soon as the spot price drops to or below your bid, your instances are started. Then, when the price rises above your bid, AWS shuts them down. That makes spot instances VERY suitable for batch processing (a cheap price at the cost of being interruptible).
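To make the bidding mechanics concrete, here is a minimal sketch (not the script from the hackathon) of building an EC2 spot request with boto3, the AWS SDK for Python. The AMI ID, instance type, bid price and instance count are illustrative placeholders, not values used at the event.

```python
# Sketch: build the parameters for an EC2 spot instance request.
# The AMI ID, instance type and bid are placeholders for illustration.

def build_spot_request(ami_id, instance_type, count, max_bid):
    """Return the parameter dict for an EC2 spot instance request.

    max_bid is the most you are willing to pay per instance-hour;
    AWS runs your instances while the spot price stays at or below it.
    """
    return {
        "SpotPrice": str(max_bid),      # bid, in USD per hour
        "InstanceCount": count,
        "Type": "one-time",             # batch work: no need to persist
        "LaunchSpecification": {
            "ImageId": ami_id,          # the pre-baked smoker AMI
            "InstanceType": instance_type,
        },
    }

if __name__ == "__main__":
    params = build_spot_request("ami-0123456789abcdef0", "t1.micro", 4, 0.004)
    # With AWS credentials configured, the request would be placed with:
    #   import boto3
    #   boto3.client("ec2").request_spot_instances(**params)
    print(params["SpotPrice"], params["InstanceCount"])
```

With a “one-time” request, interrupted instances are simply terminated rather than restarted later, which matches the batch-processing use case described above.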
To get this POC working I needed two pieces:
- Create an image (AMI) of what would become a smoker instance (ideally built automatically), containing a plenv-managed Perl and everything required to run tests and report them.
- A way to issue a spot request to AWS in an automated manner, spinning up any number of spot instances from the smoker image.
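For the first piece, one way to provision such an image is to hand EC2 a user-data script that runs on first boot. The sketch below generates such a script; the plenv commands are the tool’s standard ones, but the Perl version and module choices are my own illustrative guesses, not the exact recipe used at the hackathon.

```python
# Sketch: generate a first-boot user-data script that installs plenv,
# builds a Perl, and installs the test-reporting toolchain.
# The Perl version and module list are illustrative assumptions.

USER_DATA_TEMPLATE = """#!/bin/bash
set -e
git clone https://github.com/tokuhirom/plenv.git ~/.plenv
git clone https://github.com/tokuhirom/Perl-Build.git ~/.plenv/plugins/perl-build
export PATH="$HOME/.plenv/bin:$PATH"
eval "$(plenv init -)"
plenv install {perl_version}
plenv global {perl_version}
plenv install-cpanm
cpanm App::cpanminus::reporter   # reports test results to CPAN Testers
"""

def render_user_data(perl_version="5.22.1"):
    """Fill in the Perl version and return the user-data script text."""
    return USER_DATA_TEMPLATE.format(perl_version=perl_version)

if __name__ == "__main__":
    print(render_user_data())
```

The rendered script would be passed as the `UserData` of the instance (or baked once into the AMI so smokers boot ready to run).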
On the very last afternoon of the hackathon, I got both parts done, and I could run a script from my terminal that would successfully result in a fleet of spot instances turning on, running tests, and reporting the results to CPAN Testers, at a price of $0.002/h per instance.
Areas that need improvement:
It is very clear that some test reports add a LOT of value while others provide almost none. After a short talk with Andreas Koenig, in which he briefed me on how much effort he puts into deciding what needs to be tested, it became quite obvious that the smokers I was spinning up chose what to test with criteria I would call… naive (just testing recently uploaded CPAN modules).
Given all the testing-report metadata contained in the CPAN Testers metabase, it would be very interesting to build a metric around “which modules need to be tested on which Perl version and on which OS”.
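One naive way such a metric could work (a hypothetical sketch, not anything built at the hackathon, and with made-up data) is to count existing reports per (module, Perl version, OS) cell and send smokers to the cells with the fewest reports first:

```python
# Hypothetical sketch: rank (module, perl, os) combinations by how few
# test reports they already have, so smokers fill the gaps first.

from collections import Counter

def rank_gaps(reports, modules, perls, oses):
    """reports: iterable of (module, perl, os) tuples already reported.
    Returns every (module, perl, os) cell, least-covered first."""
    seen = Counter(reports)
    cells = [(m, p, o) for m in modules for p in perls for o in oses]
    return sorted(cells, key=lambda cell: seen[cell])

# Made-up report data: Foo-Bar has Linux reports but no Windows ones.
reports = [
    ("Foo-Bar", "5.22", "linux"),
    ("Foo-Bar", "5.22", "linux"),
    ("Foo-Bar", "5.20", "linux"),
]
ranked = rank_gaps(reports, ["Foo-Bar"], ["5.22", "5.20"], ["linux", "mswin32"])
# The uncovered Windows cells sort to the front of the queue.
```

A real version would of course weight cells by things like a module’s downstream dependents (its place in the CPAN “river”) rather than treating every gap equally.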
Also, given that Windows spot instances are available, I think this could be an interesting way to get more Windows reports.
On the very first day, I briefly discussed with Garu the possibility of reporting modules installed through Carton to CPAN Testers.
A couple of days into the hackathon, he gave me some pointers and put me to work; after some debugging, he submitted a patch to cpanminus that makes App::cpanminus::reporter aware of the options cpanm was run with.
I also attended the Pakket presentation by Sawyer and the “river of CPAN” talk, in which I learned a LOT of details about quality assurance on CPAN, some of which I wasn’t even aware of!
First of all, thanks to all the sponsors, without whom these 4 very productive and stimulating days wouldn’t have been possible:
FastMail, ActiveState, ZipRecruiter, Strato, SureVoIP, CV-Library, OpusVL, thinkproject!, MongoDB, Infinity, Dreamhost, Campus Explorer, Perl 6, Perl Careers, Evozon, Booking, Eligo, Oetiker+Partner, CAPSiDE, Perl Services, Procura, Constructor.io, Robbie Bow, Ron Savage, Charlie Gonzalez, Justin Cook.
I also want to thank the organisers of the event, Neil, Barbie and JJ, for all the effort it takes to put on an event like this!
Additionally, I want to thank my employer, CAPSiDE, which, besides sponsoring the event, made it possible for me to attend.
Lastly… I didn’t want to end without thanking Wendy for making this Perl QAH the hackathon at which I’ve eaten the healthiest, by far!
PS: I didn’t want to close without mentioning Chris, Andreas & Slaven for making me struggle HARD to find my own reports in the CPAN Testers log.