There has been immense growth in AI-powered tools for software development, and many tech and business leaders are left confused by the choices available in the market. This talk will provide insights to help decision-makers sift through the noise and select the best possible solution for their situation. The talk will also propose a framework in which AI and "humans in the loop" work hand-in-hand to make requirements engineering tighter and software builds faster.
In the context of software dev shops, the talk will be useful for understanding how AI helps achieve better internal efficiency across the SDLC so that teams can increase capacity and realize business growth.
In this talk, we will examine the causes and consequences of flaky tests, particularly within Continuous Integration/Continuous Deployment (CI/CD) pipelines. We will explore how flaky tests erode trust in test results, delay releases, consume resources, and mask genuine bugs.
We will delve into the common pitfalls of test automation that lead to increased maintenance overhead and decreased reliability, and emphasize the importance of establishing testing guidelines, leveraging reporting tools, using artificial intelligence (AI), and documenting flaky tests to improve overall test suite health.
I will also share practical and effective mitigation strategies, emphasizing the importance of code quality, design patterns, advanced tools, comprehensive logging, team collaboration, and regular reviews.
Finally, we will discuss actionable steps for creating meaningful automated tests, focusing on principles drawn from real-world situations in my own experience. By adopting these strategies, engineers can build a more reliable and trustworthy test automation suite, leading to faster and more stable software releases.
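One mitigation pattern the talk touches on, documenting and logging flaky behaviour instead of silently retrying it, can be illustrated with a minimal Python sketch. This is not code from the talk; the decorator and test names are purely illustrative, and in practice teams often reach for existing plugins such as pytest-rerunfailures.

```python
import functools
import logging
import random

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("flaky-watch")

def retry_and_log(attempts=3):
    """Retry a test up to `attempts` times and log intermittent failures."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            failures = []
            for attempt in range(1, attempts + 1):
                try:
                    result = test_func(*args, **kwargs)
                    if failures:
                        # The test passed only after failing: record it as flaky
                        # so it can be documented and investigated, not ignored.
                        logger.warning("%s is flaky: passed on attempt %d after %s",
                                       test_func.__name__, attempt, failures)
                    return result
                except AssertionError as exc:
                    failures.append(f"attempt {attempt}: {exc}")
            # Failed on every attempt: surface it as a genuine failure.
            raise AssertionError(
                f"{test_func.__name__} failed all {attempts} attempts: {failures}")
        return wrapper
    return decorator

@retry_and_log(attempts=3)
def test_sometimes_fails():
    # Stand-in for a timing- or order-dependent check.
    assert random.random() > 0.3, "simulated intermittent failure"

if __name__ == "__main__":
    test_sometimes_fails()
```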
Key Takeaways:
1. Evaluate how flaky tests can erode trust and slow down development.
2. Analyse the context and purpose of tests to avoid common pitfalls.
3. Learn concrete strategies for identifying and mitigating flaky tests.
4. Explore the role of AI and reporting in creating reliable tests.
This dream scenario can easily turn into a nightmare. If the pipeline breaks frequently, runs slowly, or does not cover the whole scope, we should start worrying about it. If the pipeline never fails at all, that is not good news either: it may be a sign that we need to check the quality of the tests.
Key takeaways from this session include:
- A checklist for choosing automation tools and making the best use of them
- Tips for getting rid of flakiness in deployment pipelines
- A guideline for building a quality mindset and tips for a strong quality gate
- A set of sample quality metrics to track product and process maturity (a small sketch of one such metric follows this list)
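As a purely illustrative example of the kind of quality metric the session refers to, the sketch below (with a made-up data shape, not the speaker's actual tooling) computes a per-test flakiness rate from a window of CI run results: the share of runs in which a test's outcome differs from its usual outcome.

```python
from collections import defaultdict

# Hypothetical CI history: one (test_name, passed) record per execution.
runs = [
    ("test_login", True), ("test_login", False), ("test_login", True),
    ("test_checkout", True), ("test_checkout", True),
]

def flakiness_rate(history):
    """Return, per test, the fraction of runs with the minority outcome."""
    outcomes = defaultdict(list)
    for name, passed in history:
        outcomes[name].append(passed)
    rates = {}
    for name, results in outcomes.items():
        # A test that both passed and failed over the window is flaky;
        # the rate is the share of runs with the less common outcome.
        minority = min(results.count(True), results.count(False))
        rates[name] = minority / len(results)
    return rates

print(flakiness_rate(runs))  # e.g. {'test_login': 0.33..., 'test_checkout': 0.0}
```

Tracked over time, a falling flakiness rate is one simple signal of pipeline and process maturity.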
The session will also offer a sneak peek into potential challenges associated with migrating to v17, such as modularity, deprecated or removed features, GC policies, and much more.
The session aims to help developers and organizations navigate the transition from Java v8 to v17, ensuring they can leverage the new capabilities effectively while managing the migration smoothly.
Whether you’re a backend developer seeking reliability, a frontend developer interested in real-time apps, or simply curious about new technologies, this session will inspire you to give Elixir a try.
We'll share practical insights gained from assembling the ideal cross-functional team, establishing clear and targeted evaluation criteria, and executing detailed comparisons of AI testing tools to assess compatibility, scalability, and integration ease.
While our experience centers on AI tool selection for software testing, attendees will find our home-grown approach adaptable and valuable as a starting point for qualifying AI solutions in other phases of their SDLC. You'll walk away with a proven, experience-based process that can be tailored to your team's unique development context.
In this talk, I will share how we revamped an OS course to take a hands-on approach rather than a purely conceptual one. The project's reference to Linux/Windows shells and commands like "nvidia-smi", "free", and "vmstat" highlights its connection to real-world operating systems. Students are exposed to the challenges and complexities of OS development, preparing them for potential careers in systems programming, embedded programming, OS development, or related fields.
My talk will cover the proposed mapping of OS theory (e.g., I/O, process scheduling, demand paging, etc.) to actual dev skills and the OS emulator project specifications (written in C++). Lastly, I will share our teaching experiences and general student feedback on this approach; more than 200 students have already taken the revamped OS course.
The flow of the workshop will be:
- 10 mins: Introduction to Serverless Technology
- 10 mins: How we structure Python applications when deployed in Serverless
- 10 mins: Hands-on Workshop Introduction
- 90 mins: Hands-on Workshop Proper
The rules are:
- Participants will be required to bring laptops
- Participants will be grouped into teams of 4 people
- The group with the most points at the end of the workshop wins. In case there is a tie, the team that is the fastest wins
- AWS Accounts will be provided
- An internet connection is required, as we will be deploying our applications to the cloud
The hands-on workshop's milestones are listed below (a minimal sketch of the milestone 1 handler follows the list):
1. Deploy a simple hello world Python application in AWS Lambda
2. Create a CRUD API that uses DynamoDB as the database
3. Use SQS and S3 to demonstrate event-driven architecture
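For orientation before the session, here is a minimal sketch of what the milestone 1 handler could look like. The function name, event shape, and deployment tooling here are only assumptions; the actual starter code and AWS setup will be provided during the workshop.

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes for each request."""
    # queryStringParameters may be absent or None when the function is not
    # called through an API Gateway query string.
    params = (event or {}).get("queryStringParameters") or {}
    who = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {who}!"}),
    }
```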
28-30 October 2025 | Online (Days 1-2) + Crowne Plaza Manila Galleria (Day 3)