Who is a Quality Engineer, and what do they actually do? The instinctive answer is probably something like “testing applications”. If you wanted to sound a bit more like a pro, you might say “quality assurance through testing applications”. Are those answers correct? Absolutely.

But what do we really accomplish through testing?

Knowledge that a developer made a mistake somewhere?

But what is a bug?

What does it mean that everything works? What is that state, precisely?


I’ll try to lead you to these answers by explaining not only the QA process, but the entire development cycle used at Apptension. We believe there is a right approach to broadly understood quality in a software house. Looking at results from a narrow perspective (such as QA alone) can give us a distorted view of project performance. With such a narrow field of vision, we are more likely to miss the early signs that something is off, and we lose the chance to act before the issue escalates.

Where testing belongs

Back to the main questions - what does testing really give us? Is testing enough to get quality right? And finally, at which stage of the project should testing begin? 


If we focus solely on one project or app, then testing alone already provides enough information to determine the product’s usability for its end users. That knowledge is enough to make a decision about the product’s further viability.


But wait: testing is usually one of the last activities in the development process. What about design, technical documentation, business assumptions, and user and device matrices? No, seriously. What about them?


Everything listed above is produced in earlier phases, across the entire project duration: starting with client workshops and continuing through design, development, and testing itself. Shouldn’t those stages be tested as well? After all, there is broad agreement about their importance to the overall quality landscape.


You may notice in our services catalogue that a Quality Engineer participates in many stages of a potential collaboration: from product consulting, through development and maintenance, up to software development itself. This allows for comprehensive quality management and a nuanced approach to it.

Error analysis

Thanks to this broader involvement, we get an answer not only to the single question of whether the app works as it should, but also to whether the process around it is healthy and free of errors. Errors such as:


  • The developer’s implementation does not work
  • The app does not work according to the requirements
  • The device requirements are not defined precisely enough
  • The app is too slow
  • The issue reporter does not know the business side of the process
  • The client reports an error because of misplaced expectations about the product’s functions (the communication was imprecise)
  • The QA Engineer made a human mistake and failed to check something correctly


With adequate data gathering and interpretation, you can analyze these and many other errors with ease. This is a common thread in my work at Apptension, and we know the value of well-processed, well-understood data. With a couple of simple moves, the ever-popular JIRA becomes a powerful backend database for countless (hopefully relevant) data points about the performance of any number of your current endeavours. I don’t mean only the QA teams here, but once again the whole length of the process: from product consulting through implementation and pushing the app to production, ending in monitoring and maintenance.
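To make that concrete, here is a minimal sketch in Python (not Apptension’s actual code) of how bug data can be pulled out of JIRA through its standard REST API and turned into plain data points for analysis. The site URL, the credentials, and the “APP” project key are placeholders.

import requests

JIRA_URL = "https://your-company.atlassian.net"  # placeholder JIRA Cloud site
AUTH = ("you@example.com", "your-api-token")     # e-mail + API token (placeholders)

def fetch_bugs(project_key, max_results=100):
    """Fetch bug issues for one project, with the fields we want to analyze."""
    response = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={
            "jql": f"project = {project_key} AND issuetype = Bug ORDER BY created DESC",
            "fields": "priority,status,created,resolutiondate,labels",
            "maxResults": max_results,
        },
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["issues"]

if __name__ == "__main__":
    for issue in fetch_bugs("APP"):
        fields = issue["fields"]
        priority = (fields.get("priority") or {}).get("name", "unset")
        print(issue["key"], priority, fields["created"])

Once the issues are plain dictionaries like this, any BI tool, a spreadsheet, or a few lines of analysis code can take it from there.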


Our own tool, “Apptension Quality Metrics”, uses JIRA as a knowledge hub and, being a web app, can be run against any JIRA instance through an add-on. When connecting the two, you only need your regular JIRA credentials and SSO; no new accounts are necessary.


Then comes setting up the projects: the instruction fields are filled in automatically from JIRA. That’s all!
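For the curious, the “automatically from JIRA” part can be as simple as reading project metadata from the same REST API. The snippet below is a sketch under that assumption, not the add-on’s real implementation; it reuses JIRA_URL and AUTH from the earlier example.

def list_projects():
    """Return the key and name of every project visible to the authenticated user."""
    response = requests.get(f"{JIRA_URL}/rest/api/2/project", auth=AUTH, timeout=30)
    response.raise_for_status()
    return [{"key": p["key"], "name": p["name"]} for p in response.json()]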


As a result, for every selected project we are presented with a visual overview of several quality metrics (a sketch of one such calculation follows this list), covering:


  • the relationship between bugs and their priorities, and how both trend over time (this helps us assess the quality of the development and testing cycles)
  • the number of project errors and their trend, plus each priority’s percentage share (to evaluate the quality and precision of the documentation)
  • the progress of development work versus maintenance work (this checks the accuracy of sprint planning)
  • bugs reported by specific user groups and their proportions (to check the quality of work at every stage)
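To make the list above more tangible, here is a sketch of the second metric: bug counts per priority and each priority’s percentage share, computed from the issues fetched in the first snippet. Again, this is an illustrative calculation, not the tool’s actual code.

from collections import Counter

def priority_breakdown(issues):
    """Map each priority name to (bug count, share of all bugs in %)."""
    counts = Counter(
        (issue["fields"].get("priority") or {}).get("name", "unset")
        for issue in issues
    )
    total = sum(counts.values()) or 1
    return {
        name: (count, round(100 * count / total, 1))
        for name, count in counts.items()
    }

# e.g. {"High": (12, 26.7), "Medium": (25, 55.6), "Low": (8, 17.8)}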

Early warning system

The example of our QA metrics shows the clear, visible impact that good data and its interpretation can have on a project. As someone frequently responsible for both quality and smooth processes, I am keenly aware of this and use it to my advantage whenever I am tasked with assessing product quality. This approach works both for me as an individual and for us as a team.


Additionally, when we look at the costs and the time invested, we know that a mistake made early only gets more expensive to correct as time passes. It also grows our tech debt, which may then have to be paid down over multiple future sprints.


That is why it is so vital to use tools and metrics that act as an early warning system and spur us into corrective action at just the right moment. If you would like to have a chat about this, or you have a different opinion, hit me up on LinkedIn!