Saturday, May 20, 2017

The A-Z of XP


After I blathered on and on about how much I'd enjoyed Ron Jeffries' Extreme Programming Adventures in C#, the Dev Manager offered to lend me his copy of Extreme Programming Explained by Kent Beck.

Some background from Wikipedia:
Extreme programming was created by Kent Beck during his work on the Chrysler Comprehensive Compensation System (C3) payroll project. Beck became the C3 project leader in March 1996 and began to refine the development methodology used in the project and wrote a book on the methodology (in October 1999, Extreme Programming Explained was published).
So I took the book (it's the first edition) and I enjoyed it too, but differently. I might say that if Adventures is a road trip, Explained is a road atlas.

One of the things I liked about Explained (that it shares with Adventures) is the suggestion that only you can really decide whether XP can work in your context, and how. Also that Beck is prepared to offer you suggestions about when it might not.

But the world probably doesn't need any more reviews of this book so instead I'll note that I was a little surprised at the degree of upfront formality (which isn't to say that I don't think formality can license freedom to express yourself); sufficiently surprised that I mapped it to help navigate the rest. (And, yes, that's a map from an atlas.)


Image: Amazon

Monday, May 15, 2017

The Daily Grind


Houghton Mill is an 18th-century water mill, full of impressive machinery and, last weekend, actually grinding flour by the power of the river Great Ouse. Although I am not knowledgeable about these kinds of buildings or this technology I found myself spellbound by a small, but crucial, component in the milling process, the slipper.

The slipper is a kind of hopper that feeds grain into the millstones for grinding. Here's a short film I took of it in operation when I was there with some friends and our kids:


It has a system of its own, and also it is intimately connected to other systems.

It has inputs: a gravity feed brings grain into the top of the slipper; energy is supplied by the vertical axle which is in turn driven indirectly from the water wheel.

It has outputs: grain is dropped into the centre of the millstones immediately below it.

It is self-regulating: as the flow of the river varies, the speed of the wheel varies, the rotation of the axle varies, and the extent to which the slipper is agitated varies. Slower water means less grain supplied to the millstones, which is what is required, as they are also grinding more slowly. A second form of self-regulation occurs with the flow of grain into the slipper.

It has balance: there is a cam mechanism on the axle which pushes the slipper to the left, and a taut string which pulls it back to the right, providing the motion that encourages grain to move.

It can be tuned: the strings that you can see at the front provide ways to alter the angle of the slipper, and the tension of the string to the right can be adjusted to change the balance.

Tuning is important. If properties of the grain change (such as size, or stickiness, or texture, ...) then the action of the slipper may need to change in step. If the properties of the millstones change (e.g. they are adjusted to grind more coarsely or finely, or they are replaced for cleaning, or the surface roughness alters as they age, ...) then the rate of delivery of grain will need to adjust too.

Although the system is self-regulating, these are examples of variables that it does not self-control for. It has no inherent feedback mechanism for them, and so requires external action to change its behaviour.
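The feedback loop can be sketched as a toy simulation. This is entirely hypothetical: the functions, parameters and numbers are made up for illustration, and the point is only that grain supply and grinding rate are driven by the same wheel speed, so they stay matched without external control, while a variable like stickiness sits outside the loop:

```python
# Toy model of the slipper's self-regulation. All numbers are invented;
# the shape of the relationships is what matters.

def grain_fed(wheel_speed, tilt=1.0, stickiness=1.0):
    """Grain shaken into the stones per minute: more agitation, more grain."""
    return wheel_speed * tilt / stickiness

def grinding_capacity(wheel_speed):
    """Grain the stones can grind per minute: faster stones, more capacity."""
    return wheel_speed * 1.0

for speed in (2.0, 5.0, 10.0):  # the river slows and quickens
    ratio = grain_fed(speed) / grinding_capacity(speed)
    print(f"speed {speed}: feed/capacity ratio {ratio}")  # constant: self-regulating

# But change a variable the system has no feedback for - say the grain
# gets stickier - and the ratio drifts until the miller re-tunes the tilt:
print(grain_fed(5.0, stickiness=2.0) / grinding_capacity(5.0))
```

In the model, restoring balance after a change in stickiness means an external adjustment to `tilt`, which is exactly the miller's tuning job.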

Further, beyond the skilled eye and ear (and fingers, which are used to judge the quality of the flour) of the miller, I could see no means of alerting that a change might even be required. In a mill running at full tilt, with multiple sets of stones grinding in parallel, with the noise, and dust, and cramped conditions, this must have been a challenge.

Another challenge would be in setting the system up at optimum balance for the conditions that existed at that point. I found no evidence of gauges, or markers, or scales that might indicate standard settings. I noted that the tuning is analogue: there are infinitely fine variations that can be made, and the ways in which the system can be tuned no doubt interact.

The simplicity of the self-regulation is beautiful to me. But I wondered why not regulate the power coming from the water wheel instead and so potentially simplify all other systems that rely on it. There are technologies designed to implement this kind of behaviour, such as governors and flywheels.

I wondered also about the operational range of the self-regulation. At what speeds would it become untenable: too slow to shake any grain, or too fast to be stable? There didn't seem to be scope for an automatic cut-out.

So that was an enjoyable ten minutes - while the kids were playing with a model mill - to practise observation, and thought experiments, and reasoning in a world unfamiliar to me.

I doubt you'll find it difficult to make an analogy to systems that you operate within and with, so I won't labour any points here. But I will mention the continual delight I find in those trains of thought in one domain that provoke trains of thought in another.
Image: https://flic.kr/p/6eRBPi

Friday, May 5, 2017

Fix Up, Look Sharp


"I'll start, however, by assuming it's going to work. No need to borrow trouble from the future."  That's Ron Jeffries, in Extreme Programming Adventures in C#.

I'll start, however, by assuming it's going to work. No need for me to know much about eXtreme Programming (XP) and less about C#. And that's me. Interestingly, in some very real sense, this book didn't help me to overcome either of those lacks. And yet I still strongly recommend it.

Why? For one thing, because of quotes like the one at the top. It's essentially the thesis of the book, repeated in various forms from just about the first words in the Introduction ...
I start with a very simple design idea and implement features in a customer-driven order, using refactoring to keep the design just barely sufficient to the moment.
... to the last words in the final chapter, Project Retrospective:
Does incremental development work? This, of course, is the big question. It's clear to me, and I hope it's clear to you, that it certainly worked for me on this project. Will it work for you on your project? That's for you to determine ... The skills are valuable in their own right, and they may enable you to find a more flexible way to develop your software.
Here's a couple more quotes that spoke to me, but the book is full of them:
The opposite of simple is "wrong." (p.330)
The main thing is to remain sensitive to what you're doing, and to adjust your practices as you notice problems. (p.346)
Frankly, I am surprised, given that I planned this book and think of it as a book about programming, at how much value an independent customer could have provided. (p.479)
In fact, it's so heavily laden with snappy chunks of wisdom that it has a Sound Bites appendix where some of the one-liners are elaborated on, and some that couldn't be fitted into the main text are given as a bonus. Here's one:
Isolate the unknown. Often when we're working on something, we know how to do most of it and there's just this one part we don't know how to handle. Isolate that part, via one of two ways: We can begin by focusing on that part [or] we can pretend that we know the solution [and often find that] doing what we know has made the unknown more clear.
Why else did I love this book? Because I love listening to people who know and care about what they're doing talking about what they know and care about. The style of this book is so conversational that it's almost as if Jeffries is sitting alongside you, recounting the work that he did. And it's clear that he knows what he's doing, and it's very clear that he cares about what he's doing.

And what he's doing is describing the paths he took - some false - and the compromises he made - knowingly and not - in the creation of a small application. He describes a set of principles that motivate XP loosely at the beginning of the book, and all of the work takes place in the context of them. But he's never ruled by them: he uses his instinct and pragmatism and the resources available to him at any given time to make decisions about what to do, and how, and when, and, importantly, why.

The book is littered with sidebars labelled Lessons, micro-retrospectives for and on himself. He went away from some principle he holds dear, but got away with it this time. He ignored a code smell and got punished with several hours of debugging an approach that was never going to work, before backing it out. He chose to write or perform a particular kind of test this time, with these trade-offs.

Testing, yes. The book talks about programmer unit tests, customer acceptance tests, and manual testing. Jeffries, despite developing an application with a Windows-based user interface, wants to keep himself away from manually exercising the software as far as possible. The motivation, to gloss it, runs something like this: to support his incremental development he wants a suite of tests that can be run quickly, so that after any small change he can find out whether he has left the functionality unaltered, altered it in an expected way, or altered it in an unexpected way and needs to do work to find out where reality and expectation diverge.
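The rhythm he describes might look like this in miniature. The book's code is C#, and this tiny payroll-flavoured function and its tests are my own hypothetical example, just to show the shape of the loop: make a small change, re-run the fast suite, and learn which of the three cases you're in:

```python
import unittest

def total_pay(hours, rate):
    """A deliberately tiny function under incremental development."""
    return hours * rate

class TotalPayTests(unittest.TestCase):
    # Before a change, these pass. After the next small change there are
    # three possible outcomes: all still pass (behaviour preserved), a
    # test fails as expected (the test is updated alongside the code), or
    # a test fails unexpectedly (stop and investigate the divergence).
    def test_simple_case(self):
        self.assertEqual(total_pay(40, 10), 400)

    def test_zero_hours(self):
        self.assertEqual(total_pay(0, 10), 0)

if __name__ == "__main__":
    unittest.main(argv=["total_pay_tests"], exit=False, verbosity=2)
```

The value of the suite is in how fast it runs: cheap enough to execute after every tiny change, so that an unexpected failure points at the change you just made.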

The focus of this book is not the philosophy of test coverage ... but, with my testing hat on, I found that I was feeling tense about the extent to which the tests we're shown do more than explore happy paths. That's my bias I guess, and one that could easily see me writing more tests - for edges and corners - than perhaps there is immediate value for.

I was struck by how familiar Jeffries' exploration of a problem feels. He probes behaviour, he frames hypotheses, and he tests them, frequently in code. He holds opinions loosely and is prepared to be guided by the evidence in front of him, and the advice of others with experience. I learned a new term - spike, for a short experiment - that I find appealing because of my visualisation of it as a tall but narrow peak, on a chart showing effort on the y-axis and time on the x.

But I also found the focus on the low level unfamiliar in some ways. Strategy, the big picture, high-level choices are the responsibility of the customer. Which isn't to say that Jeffries ignores them, or has no thoughts on them, or nothing to say about them. But they are not the focus of the developers on this project and, by extension, of developers on XP projects. Again, with my testing head on, I find myself uncomfortable here. I want freedom to be at both ends of that spectrum and at places that I consider to be valuable in between.

So, yes, I'm a tester. And this is a book about programming. But I write code that helps me to test software. Sometimes I use code to directly exercise software, sometimes to prepare data to exercise software with, and sometimes to analyse the results of exercising software - and sometimes, on days when I'm feeling particularly meta - I write code that generates code that I use for those exercises. Coding is a helpful and important tool in my working life, but it by no means takes up the majority of my time.

In recent years I have found myself edging closer and closer to developing the code that I do write in really tiny increments, programming by intention (although I didn't know it was called that until I read it here), and letting a design emerge by refactoring. I have also become happier about throwing away work, and keener to pursue a more direct route to the value I seek.

But I find that, to a large extent, that's also true of my non-programming work, and this book is a welcome and useful and enjoyable way to reflect on that.

Wednesday, April 26, 2017

Cambridge Lean Coffee


This month's Lean Coffee was hosted by DisplayLink. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

How to spread knowledge between testers in different teams, and how often should people rotate between teams?

  • How to know what is the right length of time for someone to spend in a team?
  • When is someone ready to move on?
  • How do you trade off e.g. good team spirit against overspecialisation?
  • When should you push someone out of their comfort zone, show them how much they don't know?
  • Fortnightly test team meetings playing videos of conference talks.
  • Secondment to other teams.
  • Lean Coffee for the test team.
  • Daily team standup, pairing, weekly presentations, ad hoc sharing sessions after standup.
  • Is there a desire to share?
  • Yes. Well, they all want to know more about what the others do.
  • People don't want to be doing the same thing all the time.
  • Could you rotate the work in the team rather than rotate people out of the team?
  • It might be harder to do in scenarios where each team is very different, e.g. in terms of technologies being tested.
  • There are side-effects on the team too.
  • There can't be a particular standard period of time after which a switch is made - the team, person, project etc must be taken into account too.
  • Can you rotate junior testers around teams to gain breadth of experience?

What piece of testing wisdom would you give to a new tester?

  • Be aware of communities of practice. Lots of people have been doing this for years.
  • ... for over 50 years, in fact, and a lot of what the early testers were doing is still relevant today.
  • There is value in not knowing - because you can ask questions no-one else is asking.
  • Always trust your instinct and gut when you're trying to explore a new feature or an area.
  • Learn to deal with complexity, uncertainty and ambiguity. You need to be able to operate in spite of them.
  • Learn about people. You will be working with them.
  • ... and don't forget that you are a person too.
  • Use the knowledge of the experienced testers around you. Ask questions. Ask again.
  • Make a list of what could be tested, and how much each item matters to relevant stakeholders.
  • Pick skills and practise them.

Where you look from changes what you see.

  • I was testing a server (using an unfamiliar technology) from a client machine and got a result I wasn't sure was reasonable.
  • ... after a while I switched to another client and got a different result.
  • Would a deeper technical understanding have helped?
  • Probably. In analogous cases where I have expertise I can more easily think about what factors are likely to be important and what kinds of scenarios I might consider.
  • Try to question everything that you see: am I sure? How could I disprove this?
  • Ask what assumptions are being made.
  • What you look at changes what you see: we had an issue which wasn't repeatable with what looked like a relevant export from the database, only with the whole database.
  • Part of the skill of testing is finding comparison points.
  • Can you take an expert's perspective, e.g. by co-opting an expert?

Using mindmaps well for large groups of test cases.

  • With such a large mindmap I can't see the whole thing at once.
  • Do you want to see the whole thing at once?
  • I want to organise mindmaps so that I can expand sub-trees independently because they aren't overly related.
  • Is wanting to see everything a smell? Perhaps that the structure isn't right?
  • Perhaps it's revealing an unwarranted degree of complexity in the product.
  • Or in your thinking.
  • A mindmap is your mindmap. It should exist to support you.
  • What are you trying to visualise?
  • Could you make it bigger?
  • Who is the audience?
  • I don't like to use a mindmap to keep track of project progress (e.g. with status).
  • I like a mindmap to get thoughts down.
  • I use a mindmap to keep track of software dependencies.

Sunday, April 23, 2017

Bad Meaning Good


Good Products Bad Products by James L. Adams seeks, according to its cover, to describe "essential elements to achieving superior quality." Sounds good! As I said in my first (and failed) attempt to blog about this book, I'm interested in quality. But in the introduction (p. 2) Adams is cautious about what he means by it:
Quality is a slippery, complex, and sometimes abstract concept ... Philosophers have spent a great deal of time dealing with the concept of quality. This is not a book on semantics or philosophy, so for our purposes we will simply assume that quality means "good." But, of course, that leaves us with "good for whom?" "good for what?" "good when?" "good where?" and if you really like to pick nits, "what do you mean by good?" I won't go there, either.
My bias is towards being interested in the semantics and so I'd have liked not to have seen a concept apparently so fundamental to the book being essentially dismissed in the space of a paragraph on the second page of the introduction. Which isn't to say that quality is not referred to frequently in the book itself, nor that Adams has nothing to say about quality. He does, for example when thinking about why improving the quality of a manufacturing process is often considered a more tractable problem than improving the quality of the product being manufactured (p. 25):
characteristics of good products, such as elegance, and the emotions involved with outstanding products, namely love, are not easily described by [words, maths, experiment and quantification] - you can't put a number on elegance or love.
Further, he's clearly thought long and hard about the topic, and I'd be surprised if he hasn't wrestled at length with definitions of quality - having spent no little time exploring my own definition of testing, I have sympathy for anyone trying to define anything they know and care about - before deciding to pursue this line. What's reassuring to see is that Adams is clear that whatever quality or goodness of a product is, it's relative to people, task, time and place.

He references David Garvin's Competing on the Eight Dimensions of Quality, which I don't recall coming across before, and which includes two dimensions that I found particularly interesting: serviceability (the extent to which you can fix a product when it breaks, and the timeliness with which that takes place) and perceived quality (which is to do with branding, reputation, context and so on).

I was reading recently about how experiments in the experience of eating show that, amongst many other factors, heavier cutlery - which we might naturally perceive to be better quality - enhances the perception of the taste of the food:
... we hypothesized that cutlery of better quality could have an influence on the perceived quality of the food consumed with it. Understanding the factors that determine the influence of the cutlery could be of great interest to designers, chefs, and the general public alike.
Adams also provides a set of human factors that he deems important in relation to quality: physical fit, sensory fit, cognitive fit, safety and health, and complexity. He correctly, in my opinion, notes that complexity is a factor that influences the others, and deems it worthy of separation.

A particularly novel aspect for me is that he talks of it in part as a consideration that has influence across products. For example, while any given car might be sufficiently uncomplex to operate, the differences in details between cars can make using an unfamiliar one a disconcerting experience (p.91): "I ... am tired of starting the windshield wipers instead of the turn signal."  He admits a tension between desiring standardisation in products and wanting designers to be free to be creative. (And this is the nub of Don Norman's book, The Design of Everyday Things, that I wrote about recently.)

It's not a surprise to me that factors external to the product itself - such as familiarity and branding - govern its perceived quality, but it's interesting to see those extrinsic factors considered as a dimension of intrinsic quality. I wondered whether Weinberg's classic definition of quality has something to say about this. According to Weinberg (see for example Agile and the Definition of Quality):

  Quality is value to some person.

And value is a measure of the amount that the person would pay for the product. Suppose I'm eating a meal at a restaurant: if my enjoyment of the food is enhanced by heavier cutlery, but the cost to me remains the same as with lighter cutlery, then in some real sense the value of the food to me is higher and so I can consider the food to be of higher quality. The context can affect the product.

Alternatively, perhaps in that experiment, what I'm buying is the whole dining experience, and not the plate of food. In which case, the experiential factors are not contextual at all but fundamental parts of the product. (And, in fact, note that I can consider quality of aspects of that whole differently.)

Weinberg's definition exists in a space where, as he puts it,
the definition of "quality" is always political and emotional, because it always involves a series of decisions about whose opinions count, and how much they count relative to one another. Of course, much of the time these political/emotional decisions – like all important political/emotional decisions – are hidden from public view. 
Political, yes, and also personal. Adams writes (p. 43)
Thanks to computers and research it seems to me that we have gotten better at purely technical problem solving but not necessarily at how to make products that increase the quality of people's lives - a situation that has attracted more and more of my interest.
And so there's another dimension to consider: even a low quality item (by some measure, such as how well it is built) can improve a person's quality of life. I buy some things from the pound shop, knowing that they won't last, knowing that there are better quality versions of those items, because the trade-off for me, for now, between cost and benefit is the right one.

Bad product: good product, I might say.
Image: Amazon

Saturday, April 22, 2017

Walking the Lines



I recently became interested in turning bad ideas into good ones after listening to Reflection as a Service. At around that time I was flicking through the references in Weinberg on Writing - I forget what for - when I spotted a note about Conceptual Blockbusting by James L. Adams:
A classic work on problem solving that identifies some of the major blocks - intellectual, emotional, social, and cultural - that interfere with ideation and design.
I went looking for that book and found Adams' web site and a blog post where he was talking about another of his books, Good Products Bad Products:
For many (60?) years I have been interested in what makes some products of industry "good", and others "bad". I have been involved in designing them, making them, selling them, buying them, and using them.  I guess I wanted to say some things about product quality that I think do not receive as much attention as they should by people who make them and buy them
I hadn't long finished The Design of Everyday Things by Don Norman but didn't recall much discussion of quality in it. I checked my notes, and the posts (1, 2) I wrote about the book, and found that none of them mention quality either.

I'm interested in quality, generally. And my company builds products. And Adams is saying that he has a perspective that is underappreciated. And he comes recommended by an author I respect. And so I ordered the book.

Shortly after I'd started reading it I was asked to review a book by Rich Rogers. Some of the material in Good Products Bad Products was relevant to it: some overlapping concepts, some agreement, and some differences. I don't think it played a major part in the *ahem* quality of my review, but I can say that I was able to offer different, I hope useful, feedback because of what I'd read elsewhere, but only been exposed to by a series of coincidences and choices.

I continue to be fascinated by chains of connections like these. But I'm also fascinated by the idea that there are many more connections that I could have made but never did, and also that by chasing the connections that I chose to, I never got some information that would allow me to make yet further connections. As I write this sentence, other ideas are spilling out. In fact, I stopped writing that sentence in order to note them down at the bottom of the document I'm working in.

In Weinberg on Writing there's a lot of talk about the collection and curation of fieldstones, Weinberg's term for the ideas that seed pieces of writing. Sometimes, for me, that process is like crawling blind through a swamp - the paucity of solid rock and the difficulty of finding it and holding on to it seems insurmountable. But sometimes it's more like a brick factory running at full tilt on little more than air. A wisp of raw materials is fed in and pallets full of blocks pour out of the other end.

Here's a couple of the thoughts I noted down a minute ago, expanded:

Making connections repeatedly reinforces those connections. And there's a risk of thinking becoming insular because of that. How can I give myself a sporting chance of making new connections to unfamiliar material? Deliberately, consciously seeking out and choosing unfamiliar material is one way. This week I went to a talk, Why Easter is good news for scientists, at the invitation of one of my colleagues. I am an atheist, but I enjoy listening to people who know their stuff and who have a passion for it, having my views challenged and being exposed to an alternative perspective.

It's also a chance to practise my critical thinking. To give one example: the speaker made an argument that involved background knowledge that I don't have and can't contest: that there are Roman records of a man called Jesus, alive at the right kind of time, and crucified by Pontius Pilate. But, interestingly, I can form a view of the strength of his case by the fact that he didn't cite Roman records of the resurrection itself. Michael Shermer makes a similar point in How Might a Scientist Think about the Resurrection?

Without this talk, at this time, I would not have had these thoughts, not have searched online and come across Shermer (who I was unfamiliar with but now interested in), and not have thought about the idea that absence of cited evidence can be evidence of absence of evidence to cite (to complicate a common refrain).

I am interested in the opportunity cost of pursuing one line of interest vs another. In the hour that I spent at the talk (my dinner hour, as it happens) I could have been doing something else (I'd usually be walking round the Science Park listening to a podcast) and would perhaps have found other interesting connections from that.

Another concept often associated with cost is benefit. Any connections I make now might have immediate benefit, later benefit or no benefit. Similarly, any information I consume now might facilitate immediate connections, later connections or no connections ever.

Which connects all of this back to the beauty and the pain of our line of work. In a quest to provide evidence about the "goodness" or "badness" of some product (whatever that means, and with apologies to James Adams it'll have to be another blog post now) we follow certain lines of enquiry and so naturally don't follow others.

It's my instinct and experience that exposing myself to challenge, reading widely, and not standing still helps me when choosing lines of enquiry and when choosing to quit lines of enquiry. But I may just not have sufficient evidence to the contrary. Yet ...
Image: Wikipedia

Edit: I wrote a review of Good Products Bad Products later.

Tuesday, April 11, 2017

Search Party


Last month's Cambridge Tester meetup was puzzling. And one of the puzzles was an empty wordsearch that I'd made for my youngest daughter's "Crafternoon" fundraiser. At Crafternoon, Emma set up eight different activities at our house and invited some of her school friends to come and do them, with the entrance fee being a donation to charity.

The idea of the wordsearch activity is simple: take the blank wordsearch grid and make a puzzle from it using the list of words provided. Then give it to someone as a present.

If you fancy a go, download it here: Animal Alphabet Wordsearch (PDF)

(You're free to use it for your own workshops, meetups, team exercises or whatever. We hope you have fun and, if you do, please let us know about it and consider donating to an animal charity. Emma supports Wood Green.)

After Crafternoon, I offered the puzzle to Karo for the Cambridge Tester meetup and she wrote about it in Testing Puzzles: Questions, Assumptions, Strategy. It's fun to read about how the testers addressed the task. It's also fun to compare it to what the children did. Broadly, I think that the kids were less concerned by a sense of expectation about the outcome - and that's not a remotely original observation, I appreciate.

Everyone who took part had some "knowledge in the head" about the task (conventions from their own experiences) and there is some "knowledge in the world" about it too, such as whatever instructions have been given and the guidelines for the person who is gifted the completed wordsearch.

Some of the testers gently played with convention by, for example:
  • filling in all blank cells with the letter A
  • using symbols outside of the Roman alphabet
  • mixing upper and lower case in the grid
  • ...

But the kids in general went further by:
  • writing more than one letter in a cell
  • writing letters outside of cells
  • writing words around corners
  • leaving some cells blank
  • crossing out the words from the list if they couldn't fit them in the grid
  • spelling something wrong to make it fit
  • ...

In our jobs we're often thinking about how a product could be used in ways that it wasn't intended. It's an education watching children trample all over a task like this, deriving their own enjoyment from it, unselfconsciously making it into whatever works for them at that moment, constrained much more by the practical restrictions (pen, paper, the location of Crafternoon, ...) than any theoretical ideas or social norms.

While I was thinking about this - washing up last night, as it happens - I was listening to Russell Brand on The Comedian's Comedian podcast. He's a thoughtful chap, worth hearing, and he came out with this beautiful quote:
Only things that there are words for are being said. A challenge ... is to make up different words if you want to say different and unusual things.
And that's fitting in a blog post about finding words, but it generalises: the children were willing and able to invent a lexicon of actions that was permitted by the context they found themselves in. As a tester, are you?
Image: Disney