Questioning EuroSTAR conference

LMAX Exchange

Conference finished, presentation done, time to reflect.

Monday’s keynote – “Skeptical self-defense for the serious tester or, how to call a $37 billion bluff” by Laurent Bossavit – was about being bullied with false claims and dubious metrics. Don’t rely on hearsay: apply science to your ways, measure, and back your claims with relevant data. At one point the speaker claimed that he wasn’t speaking to an audience of Agile testers. Now how did he know that? Did he apply any scientific method to substantiate that claim?

Track session – “One more question…” by Tony Bruce – Tony’s talk fell into the informative category. He talked about different types of questions that can be used to explore. The one I heard mentioned most after his talk was “quiet”, or intentional dead air: staying silent long enough that the other party becomes uncomfortable and breaks the silence. He tried hard not to make the presentation specific to software testing, turning it instead into a life lesson that can be applied to any context. I really enjoyed his personal touch, with examples from his own life. Of the talks I attended, this seemed like the only one where the audience was really engaged and having a laugh from time to time.

Then came my session, “Questioning Acceptance Tests”, which you can have a look at here in case Prezi is down. Probably the main conclusion of my talk was that with property-based testing, even before considering what tools to use, you first have to ask yourself whether it is the right strategy for you. It can get pretty complex and time-consuming to try to model a big system when the foundation hasn’t been laid through simpler models from when the SUT first started emerging. One nice finding was that Kristian Karl from Spotify, in his talk on “Experiences of test automation at Spotify” (which unfortunately I missed), had used some of the tools I’ve used to extend my research into property-based testing, mainly the model-based testing approach of creating chains of states and transitions with GraphWalker and yEd. But that’s a post for another day.
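To give a flavour of what I mean by chains of states and transitions, here’s a deliberately tiny sketch of the idea: describe the application as states and allowed transitions, walk the graph at random, and check the real application against the model at every step. The login example and every name in it are invented for illustration – this is not GraphWalker’s API, which works at a much higher level with models you can draw in yEd.

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical, minimal illustration of model-based testing:
// states, allowed transitions, and a random walk over the graph.
public class LoginModelWalk {

    enum State { LOGGED_OUT, LOGGED_IN }

    // The "edges" you would otherwise draw in yEd.
    static final Map<State, List<State>> TRANSITIONS = Map.of(
            State.LOGGED_OUT, List.of(State.LOGGED_IN),   // login
            State.LOGGED_IN,  List.of(State.LOGGED_OUT)); // logout

    public static void main(String[] args) {
        Random random = new Random();
        State current = State.LOGGED_OUT;
        for (int step = 0; step < 100; step++) {
            List<State> next = TRANSITIONS.get(current);
            current = next.get(random.nextInt(next.size()));
            // In a real model-based test this is where you would drive the SUT
            // and assert that it agrees with the model's expected state.
            System.out.println("step " + step + " -> " + current);
        }
    }
}
```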

After talking it over with James Lyndsay, I got the impression that some of the attendees felt I was trying to present the new and only way of testing. This couldn’t have been further from the truth. Yes, in my simple case, plain old boundary analysis and equivalence partitioning would’ve covered the testing of the story, but what about the next story that comes down the line and wants to use the same type of objects as input? With QuickCheck you get reusability, and increased coverage because the generated input objects cover the combinations that can be found in production. Spock also made for a handy way of abstracting away almost half of the initial 50 integration tests I was trying to replace.
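For anyone who hasn’t run into the technique before, here is a minimal hand-rolled sketch of the property-based idea, with no particular library involved: generate lots of inputs and assert an invariant that should hold for every one of them. The clamp function and its bounds are made up for illustration; real QuickCheck-style tools add richer generators and shrink any failing input down to a minimal counterexample.

```java
import java.util.Random;

// Hand-rolled sketch of a property-based check: the function under test
// and the bounds are invented purely for this example.
public class ClampPropertyCheck {

    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 1_000; i++) {
            int value = random.nextInt();           // generated input
            int clamped = clamp(value, -100, 100);  // exercise the SUT
            // The property: whatever the generator produced,
            // the result always stays within the bounds.
            if (clamped < -100 || clamped > 100) {
                throw new AssertionError("property violated for input " + value);
            }
        }
        System.out.println("property held for 1000 generated inputs");
    }
}
```

The point is that the generator and the property are reusable for the next story that takes the same kind of input, which is where the coverage argument above comes from.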

The second keynote of the day replaced the initially advertised one with “Using sociology to examine testing expertise” by Robert Evans. Robert is a doctor of social sciences at the Cardiff School of Social Sciences and talked about polimorphic and mimeomorphic actions: how humans distinguish themselves from machines through interactions in a social context shaped by tacit and cultural knowledge. With the current level of artificial intelligence this is not attainable just yet, although he did say that if somebody had a positronic brain lying around he would like to hear about it. Some books he recommended: “The Shape of Actions: What Humans and Machines Can Do” by Harry Collins and Martin Kusch, and “Tacit and Explicit Knowledge” by Harry Collins. I’ll be sure to check them out.

The second day’s morning keynote was entitled “Creating dissonance: Overcoming organizational bias toward software testing” by Keith Klain. This one was a personal war story of how Keith fought his way to the top, becoming a manager of 800 testers at Barclays, and how he overcame biases against testing. His talk didn’t resonate much with my current context, so all I can wish is that the stories he presented become less and less prevalent.

I then went to attend “Specification by example with GUI tests – How could that work” by Emily Bache and Geoff Bache. They covered using TextTest to automate the testing of desktop applications. The tool lets you define domain objects while automating, which avoids the pain of later refactorings. I really liked the way the tool allowed you to interact with the application under test, making a lot of the defining of domain objects quite easy. The ASCII-art output was amazing too, and you can see there’s been a lot of effort involved in creating the tool. Once a “screenshot” of the app was captured as ASCII art, you could diff it against a later version of the app. It had the option of defining filters on what to output, so you don’t end up diffing everything under the sun. For instance, maybe you don’t care that the font on a button label changed, so you could filter that out.

My initial impression was “not another play/record tool”, but that quickly got dispelled. Another thought I had was “cement”: when they initially showed the ASCII output, I was thinking that’s way too much information being captured, which would act as cement against any future software changes. Imagine you had a lot of unit tests trying to test everything, you refactored a few classes, and suddenly 200 unit tests fail. But they introduced filters, which can be defined as regular expressions (there’s a small sketch of the filtering idea below). Although filters are a good idea, I’m not so sure about regular expressions. So if you have a Java- or Python-based desktop GUI to test, you might want to give it a shot.

Emily’s workshop on Thursday, “Readable, executable requirements: hands-on”, on using Cucumber feature files was also a treat. We wrote Cucumber tests for the Gilded Rose using Emily’s cyber-dojo, which she had set up here. I only wish more workshops were integrated within the conference itself rather than offered as separate paid-for events.
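Coming back to TextTest’s filters, here is a rough sketch of that filtering idea: strip volatile details out of a captured text “screenshot” before diffing it against a saved baseline, so the diff only flags changes you care about. The code below is purely illustrative – the class, the patterns and the capture format are mine, not TextTest’s.

```java
import java.util.List;
import java.util.regex.Pattern;

// Illustrative only: filter volatile details out of captured text output
// before comparing it against a baseline, so diffs stay meaningful.
public class CapturedOutputFilter {

    static final List<Pattern> FILTERS = List.of(
            Pattern.compile("\\d{2}:\\d{2}:\\d{2}"), // times of day
            Pattern.compile("font=\\S+"));           // font details we don't care about

    static String filter(String capturedOutput) {
        String result = capturedOutput;
        for (Pattern filter : FILTERS) {
            result = filter.matcher(result).replaceAll("<filtered>");
        }
        return result;
    }

    public static void main(String[] args) {
        String captured = "Save button [font=Arial-12] last clicked at 14:03:27";
        String baseline = "Save button [font=Helvetica-11] last clicked at 09:45:01";
        // After filtering, the two captures compare equal, so the test only
        // fails when something we actually care about changes.
        System.out.println(filter(captured).equals(filter(baseline)));
    }
}
```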

Another good talk was from Jouri Dufour on “How about security testing”. His talk was full of really useful tips on common-sense security testing. This is so vital to our trade as testers that you should just go now and have a look at the examples in his presentation. You’ll probably come out more knowledgeable about security testing, and especially about how easy it is to think like a hacker.

I also went to Anna Baik’s presentation on “The unboxed tester”. I had a chance to chat with her about what drove her towards this subject, and it was quite a personal one: having returned to work after a long time away, she was confronted with different mentalities and prejudices from her new peers.

After all this I went to check out the Test Lab, as last year’s was the main attraction for me. This time they had more stuff to do and try out. What I did notice is that when testers take on an app in the Test Lab, they’ll eventually revert to doing security testing, some of them more prepared than others. While it’s nice to see testers being concerned with security testing, I can’t help but think about the other stuff they’re missing out on: pairing with other testers when exploring the app, using different tools to record their findings, mind mapping, modelling their understanding of how the app should behave, brainstorming ideas. Some of the apps in the Test Lab didn’t lend themselves to security testing – James’s puzzles, for instance, one of which I managed to solve, earning the official Lab Rat badge and a kudos tweet from James. I only managed to solve it after modelling the states of the system on a piece of paper. I also had a chance to pair with Richard on one of the puzzles, but we didn’t get too far with that: I joined midway through and, instead of taking notes on the behaviours of the app, I was interfering with Richard’s train of thought and it ended up being a “too many cooks” kind of story. I later found out he solved the puzzle. Congrats!

The last keynote of the conference came from Martin Pol, “Questioning the evolution of testing: What’s next”. The presentation traced the history of software testing all the way from the 70s, when anybody doing testing was a pioneer. By working together in close collaboration and being flexible in meeting their goals, much like Agile teams, they managed to find the issues before they reached production. He associated this with the pioneering stage in the evolution of software testing. Later on, managers demanded more reproducibility of the testers’ ways and more process, so that new people didn’t have to reinvent the wheel all the time; you might even compare this to a waterfall approach. According to Martin this was the maturation phase of software testing. Then came the optimising stage, through the Agile way of building software, when all team members work in close collaboration to deliver software, building on the great techniques and tools developed over the decades.

The Android/iOS app for the conference was an absolute delight to use, although for some reason the sessions didn’t have the room number on them. I don’t think it was promoted to its full potential. It could’ve proved an excellent tool for people to interact with each other and give feedback on presentations, events, Test Lab sessions, you name it. Unfortunately not a lot of people used it for that purpose. One explanation might be the lack of anonymity when offering feedback or engaging in discussions, so maybe next time the app could have an option for submitting feedback anonymously or through an avatar. Hopefully more and more conferences will start using such apps to engage with the audience. I could even see such an app being used for submitting questions from the audience after a session.

The Q&A sessions at the end of each talk seemed to help with organising the questions and thus felt a bit more efficient than at other conferences I’ve been to. Sadly, not all speakers were keen on keeping to the format and found it awkward to work with. If only they had relied more on the facilitators, who really did an excellent job of managing the stream of questions.

Overall I found some of the sessions useful, and when there wasn’t anything I wanted to attend, I had discussions with fellow testers and, of course, the Test Lab.

And here’s a tweet cloud of all esconfs tweets from Monday 4th of November to Sunday 17th of November.


Tweet cloud created at TagCrowd.com.

Adrian Rapan Video From EuroSTAR conference – part 1

Adrian Rapan Video From EuroSTAR conference – part 2
