Session Name: A Log Story: Improving Testing Quality through Proper Analysis
At a company that runs 4,000+ automated integration tests against many external systems (Selenium-driven browsers, networking equipment, packet generators), we, like many companies, used pass/fail as the sole indicator for our testing quality gate. External factors such as connectivity and the stability of third-party systems steadily dragged the pass rate down, until 70% became the average. We knew that was not much better than a coin toss, and we decided to dig deeper and solve this technical debt at the source, once and for all. By investigating the logs with log analysis tools, we detected patterns in both failing and passing tests, and built dashboards to track the trends. Based on those findings, we built a smarter framework for exception classification, which let us create the right tools to raise our quality gate to 98%, with a much more detailed view of every test run and easier debugging of external issues. This talk walks through the use case from the initial problem to the final implementation, and offers a way to think about solving similar problems in your own systems.
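To make the exception-classification idea concrete, here is a minimal Python sketch, not the speaker's actual framework: failure logs are matched against a curated set of patterns so that external/environment failures can be tracked and retried separately from genuine product failures. All rule names and patterns below are hypothetical placeholders.

    # Minimal sketch of log-pattern-based failure classification.
    # Assumption: each failed test leaves a text log we can scan.
    import re
    from dataclasses import dataclass
    from enum import Enum

    class FailureKind(Enum):
        PRODUCT = "product"          # real regression: counts against the quality gate
        ENVIRONMENT = "environment"  # external/infra issue: tracked and retried, not gated
        UNKNOWN = "unknown"          # unclassified: surfaced for manual triage

    @dataclass
    class Rule:
        name: str
        pattern: re.Pattern
        kind: FailureKind

    # Hypothetical rules; a real rule set would grow out of the log analysis step.
    RULES = [
        Rule("selenium-timeout", re.compile(r"TimeoutException|WebDriverException"), FailureKind.ENVIRONMENT),
        Rule("network-unreachable", re.compile(r"Connection refused|No route to host"), FailureKind.ENVIRONMENT),
        Rule("assertion-failure", re.compile(r"AssertionError"), FailureKind.PRODUCT),
    ]

    def classify(log_text: str) -> tuple[str, FailureKind]:
        """Return the first matching rule's name and kind, or UNKNOWN."""
        for rule in RULES:
            if rule.pattern.search(log_text):
                return rule.name, rule.kind
        return "unclassified", FailureKind.UNKNOWN

    if __name__ == "__main__":
        name, kind = classify("selenium.common.exceptions.TimeoutException: page load")
        print(name, kind.value)  # selenium-timeout environment

Feeding each classification into a dashboard is what turns a flat pass rate into a trend line per failure category, which is the view the talk describes.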
Speaker Bio:
Gabriel is a senior full-stack developer with a favorite kid named Frontend. For over ten years, he has enjoyed writing clean code, simplifying complex problems, leading feature development, and influencing innovation every day. When he's not busy with code, you'll find him talking about application performance, building confidence in codebases, product architecture, organizational culture, and other nerdy dev stuff. Besides all that, he's a father of two, a hobbyist photographer, a restless traveler, and a food creator.