PeepSo Development and Quality Assurance

The PeepSo plugin family is about to grow… and it’s going to grow big! The 1.7.0 release is massive: 132 developer tasks covering a pile of new features, improvements, bug fixes, and a whole host of cool stuff. Most of them are gathered under New [GROUPS] GroupSo Plugin in our changelog.

The first commit to the Groups plugin repository was made on May 18th, 2016, which means that we started work on the Groups plugin while we were still building PeepSo 1.6.0, 1.6.1, 1.6.2 and 1.6.3.

Once lead developer Matt Jaworski had the architecture for Groups in place, though, we focused solely on the plugin itself. We wanted to get it out quickly without compromising on code quality. We made sure that nothing breaks, that upgrades work, and that everything does exactly what it’s supposed to.

The Development Process and Quality Assurance 2.0

Some of you might have read one of my early blog posts about our development process. I wrote that post almost exactly a year ago. Here’s an update that explains exactly how we make sure PeepSo works.

PeepSo Development Board

The Beginning

The planning stage. We look at the backlog and user requests, and ask ourselves what should go into the next version of PeepSo. Usually Merav gives us a general direction like: ‘Groups’, then lets me run wild(!) with it. So far it’s worked pretty great 🙂 Once the decisions are made we go to the next stage…

The Execution

Matt is responsible for the grand design. He makes all the architectural decisions and he’s done an outstanding job. When the architecture is in place, tasks are given to the team members. If they’re too big to be handled in one ticket, Matt breaks them down into smaller tasks and delegates them. Once a task is done it goes to the next stage…

Code Quality Control

Completed tasks are passed to a Peer Review column on our board. A developer who didn’t work on the task reviews the code to make sure it meets our standards, and tests it. If the code passes, the task goes to Lead Code Review where Matt takes another look and either gives his stamp of approval or sends the task back to the original developer for correction. If all is well, the task finally reaches my desk.

The PM Quality Control

I can be very picky, especially when it comes to the design. If something’s out of alignment by as much as one pixel my CDO (it’s like OCD but in the correct alphabetical order) steps in. I can’t sleep until it’s fixed. Nothing leaves my desk until it’s perfect. Only then does it reach our beloved Webdriver team.

The Webdriver Quality Control

Webdriver is automated testing: a series of scripts that automatically click around a site and check whether features do what they’re supposed to. In my blog post a year ago, I said that we ran 400 tests for PeepSo and its plugins. A lot has happened in a year. As I write this we run 1,254 test cases for PeepSo and its plugins, and there will be more when the Groups plugin is out. We have a dedicated server running those tests, and the whole suite takes about 18 hours to execute.

Here’s an example:

We have an automated test that checks whether users can create groups when the setting is enabled in the backend. We also have an automated test that checks whether users can’t create groups if that setting is disabled.

We also have a test to check whether admins can create groups if the setting is enabled… and a test to check whether admins can create groups if the setting is disabled.
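Together, those four checks form a simple permission matrix: user or admin, setting on or off. Here’s a minimal sketch of that matrix in Python — the function name is invented, and the assumption that admins can create groups even when the setting is disabled is mine, not a statement of PeepSo’s actual behavior:

```python
# Hypothetical stand-in for the permission check the tests exercise;
# this is NOT PeepSo's real API, just an illustration of the matrix.
def can_create_group(is_admin: bool, setting_enabled: bool) -> bool:
    """Regular users may create groups only when the backend setting is
    enabled; admins are assumed (for this sketch) to bypass the setting."""
    return is_admin or setting_enabled

# One case per cell of the user/admin x enabled/disabled matrix:
cases = [
    # (is_admin, setting_enabled, expected)
    (False, True,  True),   # user,  setting on  -> may create
    (False, False, False),  # user,  setting off -> may not create
    (True,  True,  True),   # admin, setting on  -> may create
    (True,  False, True),   # admin, setting off -> assumed bypass
]

for is_admin, enabled, expected in cases:
    assert can_create_group(is_admin, enabled) == expected
```

Each automated test in the suite corresponds to one row of that table, which is why four scenarios are needed to cover one on/off setting for two user roles.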

Those tests cover every possible scenario.

Bugs Do Happen

The result of that borderline insane quality control is that we rarely receive bug reports, and only around 5 percent of the reports we do receive are valid PeepSo bugs caused by a problem in our code. The rest are usually a server misconfiguration, a third-party plugin issue, or a theme that overwrote the PeepSo styles, making some parts of our plugin look a bit off. We once went three weeks with no bug reports at all from users. I mean none. Version 1.7.0 will contain just five bug fixes reported by users. That’s five reported bugs since the release of 1.6.3 on September 5th.

If a user does find a bug, it’s put through our regular task procedure. Bugs end up as test cases to ensure they never happen again.

The Final Word

I’ve heard some people say:

It’s just a WordPress plugin. You’re putting way too much effort into the quality. Who cares?!

We do.

We all do. I will never put my name to something that’s half done. Thanks to our nutty high standards we can focus on development, not on fixing bugs. That saves us all time. If a user finds a bug, they get frustrated, send a bug report, and we have to patch it up. Some bugs are too complex for a simple patch, so we have to build a workaround because the current architecture doesn’t support a clean fix. It’s a mess.

We’d rather do things properly the first time. That’s our approach and we’ll stick to it.

Oh, right… I almost forgot… Groups should be out around the end of October 2016 🙂

How We Develop PeepSo

To maintain the highest possible standard, minimize bugs and keep PeepSo improving steadily and quickly, development of the plugin follows a strict process.

Step 1: Planning

The first stage is planning. We discuss what the communities need, what additions would bring the most benefits and which features we should add to PeepSo next.

We then create a roadmap and produce mock-ups. These are interactive wireframes that let us play with the new features to see what works and what still needs work. When the mock-ups are ready, we write the documentation and list the requirements for the development team. Each task is broken down into a series of smaller tasks which can be prioritized and handed to our developers.

Step 2: Development & QA

As soon as we move into the development phase, we start the quality assurance. The QA work follows every step of the construction so that when a developer finishes a task, it passes straight to “peer review” where another developer looks at the code and tests it. If the task doesn’t yet meet our standards or doesn’t work exactly the way it should, it’s sent back to the developer for corrections. Only when the task meets our standards and works as it should will it move to the ‘testing’ phase.

PeepSo Tasks Board

We perform extensive testing, both automated and by hand. First, the project manager checks and tests the task manually. If anything fails, the task goes back to the original developer. If everything looks good, the task is passed on to the automation team.

The automation team also tests the task manually, then creates a WebDriver script. WebDriver is a tool for automating the testing of web applications. We currently have about 400 automated tests that cover most of PeepSo’s functions.
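To give a feel for what such a script does, here’s a small sketch written against the shape of Selenium WebDriver’s Python API (`driver.get`, `driver.find_element`). The URL, the CSS selector, and the stub driver are all invented for illustration — they are not PeepSo’s real markup or our actual test code:

```python
# Sketch of a WebDriver-style check: open a (hypothetical) groups page
# and report whether the "Create Group" button is present on it.
def create_group_button_present(driver, site_url):
    driver.get(site_url + "/groups/")
    try:
        # Selenium accepts ("css selector", <selector>) as a locator;
        # ".group-create-button" is an invented selector.
        driver.find_element("css selector", ".group-create-button")
        return True
    except Exception:  # Selenium raises NoSuchElementException here
        return False

# Stub driver so the sketch runs without a browser; a real run would
# pass e.g. selenium.webdriver.Firefox() instead.
class StubDriver:
    def __init__(self, page_has_button):
        self.page_has_button = page_has_button

    def get(self, url):
        self.visited = url  # pretend to navigate

    def find_element(self, by, selector):
        if not self.page_has_button:
            raise Exception("no such element: " + selector)
        return object()  # pretend element handle

print(create_group_button_present(StubDriver(True), "https://example.com"))   # True
print(create_group_button_present(StubDriver(False), "https://example.com"))  # False
```

A real suite strings together hundreds of checks like this — log in, click, verify — which is what lets the tests run unattended against every release candidate.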

We run the automated tests both during the development process and against the stable package created for a release. If WebDriver catches any errors, we fix them. Only when all the elements of a new version have passed through this process, including its several levels of testing, will that version be released.

PeepSo Webdriver Automated Testing Results

That’s a very brief description of how we manage development, testing and quality assurance. In practice, it’s detailed and demanding but each stage is essential. PeepSo is here to stay but that’s only going to happen if the plugin works exactly the way it should. Code quality, standards and testing are all vital and they make future development much easier. Developing in a clean and well-maintained environment makes growth faster for us and results in a better PeepSo for you!

Comments? Questions?

Please leave them below.