Submission Index
- Go, Wasm, Play: Building Browser Games with Go and WebAssembly
- Generative Art in Go: Exploring Creativity Through Algorithms
- The Power of Cilium For eBPF
- Build performant and production-ready applications with GoFr
- Building a reactive website in Go with go-fir
- Lookup inside Go Objects using Object Graph Notation Language (OGNL)
- gRPC in Go: A View from the Trenches of IoT
- Maximizing Scalability with Go and Redis: A Telemetry Processing Journey
- Secure Coding with Golang: Preventing Cyber Attacks Through Robust Code
- Building a Blog Web Application with Golang.
- Ocean's 1011 - Think Like a Cyber Criminal to Protect Your Business in the Age of Digital Heists
- Surviving Panics, Fatal Errors, and Crashes: Lessons from the Trenches
- To Optimize or Not to Optimize: Navigating Performance in Go
- Make Go Apps faster with Profile-Guided Optimization
- Bugs Lightyear Away With Fuzz
- Broken Go: The Unexpected Behaviors
- Paradigms of Rust for the Go developer
- Evolving a Commercial Open-Source Go Project
- Go Plug and Play: Runtime Extensibility with Plugins
- Faster I/O operations using io_uring
- Go Test, Go Further: Evolving Your Testing Strategy
- Think Pragmatically: Elevating Your Go Development Skills
- Go See It All: Observability for the Rest of Us
- The Evolution of Go: What’s New in the Latest Releases and How It Impacts Your Code
- Memory Management: Go's Garbage Collection vs. Rust's Ownership Model
- Building Reliable Robotic Systems with Go
- Go for Gold: Best Practices for Go Development
- Go and AI: Integrating Machine Learning into Go-Based Applications
- Mastering Concurrency in Go: From Patterns to Production
- Building Graphical Go apps is Fyne :)
- Resizing Animated GIFs Without CGO or Third-Party Libraries
- Profiling WebAssembly with pprof and wzprof
- Generate RESTful services using gRPC-Gateway
- Empowering Go with WebAssembly System Interface (WASI) Unleashed
- Testing GenAI applications in Go
- Go Big on Benchmarks
- PGO - Should you use it and, if so, how?
- Starting and stopping things
- Go for the Edge: Building Ultra-Low Latency Applications with Go and WebAssembly
- GPT in Go-Land: Building an AI-powered Narrative Generation Engine Using Go, AWS, and GPT
- Goroutines as Cognitive Threads: Replicating Human Behavior in Go
- Delightful integration tests in Go applications
- Is Go a Good Language for Building a Compiler?
- Linters: Stop Go-ing Insane in Code Reviews
- Building and Maintaining Large Scale Time Series Database with Go
- Heimdall: Coban’s Go-to control-plane for platform automation
- Scaling Go Monorepo Workflows with Athens
- The Why of the iterator design
- Enhancing Application Performance with Profile-Guided Optimisation in Go
- Automatic efficient Go application by Profile-guided optimization
- 80% faster, 70% less memory: the Go tricks we've used to build a high-performance, low-cost Prometheus query engine
- From Bottlenecks to Breakthroughs: Elevating Performance with OpenTelemetry and Go
- Why we can't have nice things: Generic methods
- So, you want to add sum types to Go?
- Exploring the Robustness of Go: Balancing Strengths and Fragilities
- Mastering Error Handling in Go: Best Practices and Pitfalls
- Accelerating Cloud-Native development with Workspace: A Fast-Track to Consistency and Efficiency
- From the Top: Mastering the DevOps Machine Learning Pipeline for Unrivaled Innovation - A CEO's Perspective on Cool DevOps
- How Golang Changed My Life.
- Who broke the build? — Using Kuttl to improve E2E testing and release faster
- From fmt.Println("Hello, world!") to continuously deploying apps to production.
- Practical GenAI with Go
- Using Go to Scale Audit logging at Cloudflare
- Securing Golang Services with Relationship-Based Access Control (ReBAC) Authorization
- 🚀 The Power of Bloom Filters: Building a Cutting-Edge Go Search Engine to Explore the World's Source Code
- Processing 40 TB of code from ~10 million projects with a dedicated server and Go for $100
- Abusing Go, AWS Lambda and bloom filters to make a true Australian serverless search engine
- Channeling your Inner Tech Blogger
- Applied Psychology: Psychology-based UI improvements
- Continuous Improvements of The Code Review Process
- How Regex Works: The Secret Sauce Behind Pattern Matching
- Usage of default Golang templating in complex report generation.
- Making non-Go tools accessible to Go developers using WebAssembly
- Software Tomorrow
- The Rise of AI Agents
- 42! (A Developers Guide to the Future)
- Exploring Domain Driven Design in Go
- The internals of the context package
- Beyond the Basics: Elevate Your Go Testing Game
- Evil Tech: How Devs Became Villains
- The Art Of Scalable Intelligence: Distributed Machine Learning with Go
- Test like a ninja with Go
- Swiss knife for Go debugging with VSCode
1. Go, Wasm, Play: Building Browser Games with Go and WebAssembly
Abstract
Unlock the potential of Go for web-based game development using WebAssembly. Learn how to create engaging, high-performance browser games that combine Go’s simplicity with Wasm’s near-native speed.
Description
In this talk, we’ll embark on an exciting journey exploring the synergy between Go and WebAssembly for creating engaging browser-based games. We’ll uncover how Go’s simplicity and efficiency can be harnessed to build high-performance games that run directly in web browsers, opening up new horizons for both web and game development.
Attendees will gain insights into setting up a Go environment for WebAssembly, implementing game mechanics, and optimizing performance for the browser environment. We’ll explore techniques for integrating with web technologies, handling user interactions, and creating immersive gaming experiences.
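To make the setup tangible, here is a minimal, hypothetical sketch (not the talk’s demo code) of a Go program compiled with GOOS=js GOARCH=wasm that animates a square on a canvas element with id "game"; the hosting page also needs Go’s wasm_exec.js support file.
// Build with: GOOS=js GOARCH=wasm go build -o game.wasm
package main

import "syscall/js"

func main() {
	doc := js.Global().Get("document")
	canvas := doc.Call("getElementById", "game") // assumes <canvas id="game" width="640" height="480">
	ctx := canvas.Call("getContext", "2d")

	x := 0.0
	var frame js.Func
	frame = js.FuncOf(func(this js.Value, args []js.Value) any {
		ctx.Call("clearRect", 0, 0, 640, 480)
		ctx.Call("fillRect", x, 200, 40, 40) // the "player"
		x += 2
		js.Global().Call("requestAnimationFrame", frame)
		return nil
	})
	js.Global().Call("requestAnimationFrame", frame)

	// Block forever so the Go runtime stays alive and callbacks keep firing.
	<-make(chan struct{})
}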
Throughout the session, we’ll tackle common challenges in browser game development and demonstrate how Go’s unique features can provide elegant solutions. From basic 2D games to more complex interactive experiences, we’ll cover a range of possibilities that showcase the potential of this powerful combination.
By the end of the talk, attendees will have a comprehensive understanding of how to leverage Go and WebAssembly for game development, equipped with practical knowledge to start building their own browser-based games.
Notes
The speaker has presented at more than 30 Python and JS conferences, including GCI Summit, PyCon US, PyCascades, EuroPython, Drupal, PyCon APAC, and many more.
Upcoming conferences: Serverless Architecture Berlin, PyCon Korea, PyCon France
Some videos from his previous talks are here:
- https://youtu.be/yJz4QLh-fA0?si=ypd0IPqNzFDTzBrE
- https://www.youtube.com/watch?v=5ZQXWs9_11Y
^ back to index
2. Generative Art in Go: Exploring Creativity Through Algorithms
Abstract
Dive into the world of generative art using Go. Learn how to create stunning visual pieces by combining Go’s concurrency model with creative algorithms.
Description
This talk will take you on an exciting journey into the world of generative art using Go. We’ll begin with a brief introduction to generative art principles, setting the stage for our creative coding adventure. Next, we’ll dive into Go’s powerful image package, exploring how it can be harnessed for artistic expression. You’ll learn to implement fundamental generative algorithms like cellular automata and L-systems, bringing mathematical patterns to life through code.
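As a small, self-contained taste of that approach (the rule choice and image size are arbitrary), the sketch below renders an elementary cellular automaton with the standard library’s image and image/png packages.
// Renders Rule 110 row by row into a PNG using only the standard library.
package main

import (
	"image"
	"image/color"
	"image/png"
	"os"
)

func main() {
	const width, height = 400, 400
	img := image.NewRGBA(image.Rect(0, 0, width, height))

	row := make([]bool, width)
	row[width/2] = true // a single live cell seeds the pattern

	for y := 0; y < height; y++ {
		for x, alive := range row {
			if alive {
				img.Set(x, y, color.Black)
			} else {
				img.Set(x, y, color.White)
			}
		}
		next := make([]bool, width)
		for x := 1; x < width-1; x++ {
			l, c, r := row[x-1], row[x], row[x+1]
			// Rule 110: a cell stays dead only for the neighborhoods 111, 100, and 000.
			next[x] = !(l && c && r) && !(l && !c && !r) && !(!l && !c && !r)
		}
		row = next
	}

	f, err := os.Create("automaton.png")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	png.Encode(f, img)
}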
A key focus of our exploration will be Go’s concurrency model and how it can be leveraged to create complex, parallel art generation processes, pushing the boundaries of what’s possible in algorithmic creativity. The highlight of the session will be a live coding demonstration, where we’ll create a unique piece of generative art in real-time, showcasing the techniques discussed.
Throughout the presentation, we’ll discuss the fascinating intersection of creativity and code, examining how Go’s simplicity and efficiency make it an excellent choice for artistic coding projects. By the end of this talk, you’ll have gained insights into a novel application of Go, blending technical skills with artistic expression, and hopefully inspiring you to explore the creative potential of Go in your own projects.
Notes
Talk Outline:
- Introduction to generative art and its principles
- Overview of Go’s image package and how to use it for creative coding
- Implementing basic generative algorithms (e.g., cellular automata, L-systems) in Go
- Leveraging Go’s concurrency for parallel art generation
- Live coding demo: Creating a unique piece of generative art
- Exploring the intersection of creativity and code in Go
^ back to index
3. The Power of Cilium For eBPF
Abstract
This talk is for people who are getting started with eBPF and don’t want to use Rust for their user-space code. I will draw on my personal experience of using Cilium: why exactly I chose it, what challenges I faced, and what mistakes people should avoid.
Description
eBPF is a very powerful technology in the observability space. It essentially allows developers to attach probes to system calls, after which they can do all sorts of things there. But it only gives you the information in kernel space. To get that information into user space, you need to write some user-space code in Go, Rust, or C.
I will go over why I chose Go for the user-space code over the other languages mentioned above, and what I tried with Go before finally stumbling upon Cilium, which made the process much smoother and easier. Then I will go over what actual eBPF code and Cilium code look like and how they come together to perform magic. Along with this, I will also cover the various mistakes that I made in the process and what people can do to improve their experience when using this technology.
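For a sense of what the user-space side can look like, here is a minimal sketch using the github.com/cilium/ebpf library. The object file name, program name ("trace_execve"), map name ("events"), and kprobe symbol are illustrative assumptions, not the code from the talk.
package main

import (
	"log"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
	"github.com/cilium/ebpf/ringbuf"
)

func main() {
	// Load a pre-compiled eBPF object file produced from the kernel-space C code.
	spec, err := ebpf.LoadCollectionSpec("tracer.o")
	if err != nil {
		log.Fatal(err)
	}
	coll, err := ebpf.NewCollection(spec)
	if err != nil {
		log.Fatal(err)
	}
	defer coll.Close()

	// Attach the kernel-space program to a syscall entry point.
	kp, err := link.Kprobe("sys_execve", coll.Programs["trace_execve"], nil)
	if err != nil {
		log.Fatal(err)
	}
	defer kp.Close()

	// Stream events emitted by the kernel program into user space.
	rd, err := ringbuf.NewReader(coll.Maps["events"])
	if err != nil {
		log.Fatal(err)
	}
	defer rd.Close()

	for {
		record, err := rd.Read()
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("raw event: %d bytes", len(record.RawSample))
	}
}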
I am not sure whether separate time will be given for questions and answers, so I am reserving the last 5 minutes of my talk for Q&A.
Notes
^ back to index
4. Build performant and production-ready applications with GoFr
Abstract
Imagine having full observability for your apps from the moment the POC is done, with no setup needed—even for databases. GoFr makes this possible by managing everything from logs to metrics to traces. Plus, it supports multiple open-source tools, so you can start monitoring instantly.
Description
Effortless Development
- Intuitive API: Minimize boilerplate code and focus on core application logic with GoFr’s clean and concise API.
- Automatic Code Generation: Streamline repetitive tasks with optional code generation capabilities (e.g., RESTHandlers).
Enhanced Observability
- Structured Logging: Integrates a robust logging system that captures detailed and structured information about application events, making debugging and monitoring easier.
- Customizable Logging Levels: Configure logging levels based on your application’s needs to control the verbosity of log messages.
- Metrics Integration: Easily integrate with popular metrics collectors to gather valuable insights into application performance and resource utilization.
- Distributed Tracing: Track request flows across your application ecosystem, pinpoint bottlenecks, and optimize performance with built-in tracing capabilities.
Seamless Database Connectivity
- Database Agnostic: Supports a wide range of popular database backends (e.g., MySQL, PostgreSQL, Redis) through well-established drivers, offering flexibility in data storage solutions.
- Database Migrations: Efficiently manage database schema changes with a built-in migration system, ensuring smooth upgrades and rollbacks.
Robust Security
- Built-in Authentication Mechanisms: Offers out-of-the-box support for common HTTP authentication methods (e.g., Basic Auth, JWT), simplifying user access control.
- Customizable Authorization: Implement granular authorization rules to control access to specific application resources and functionalities.
Why Choose GoFr?
- Developer Productivity: Build distributed, performant Go applications with minimal effort.
- Advanced Observability: Gain deep insights into application behaviour through advanced logging and observability features.
- Seamless Database Integration: Connect seamlessly with various database backends for efficient data management.
- Secure Access Control: Implement robust and secure HTTP authentication mechanisms for user access control.
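To make the developer-experience claims concrete, here is a minimal sketch of a GoFr service based on the hello-world shape shown in GoFr’s documentation; check gofr.dev for the current API, as signatures may differ between versions.
package main

import "gofr.dev/pkg/gofr"

func main() {
	// gofr.New wires up configuration, logging, metrics, and tracing with sensible defaults.
	app := gofr.New()

	// Handlers return a value and an error; GoFr takes care of serialization and status codes.
	app.GET("/greet", func(ctx *gofr.Context) (interface{}, error) {
		return "Hello from GoFr!", nil
	})

	app.Run()
}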
Notes
Our team has spent the last 8 years in microservices development, transitioning through various languages before settling on Golang. Our fascination led us to create an open-source framework, GoFr, which has garnered over 2.5K stars on GitHub. With extensive experience, we understand that building scalable applications is just the beginning; monitoring, maintaining, and debugging are equally crucial.
GoFr embodies our vision of accelerating all these aspects. It’s an opinionated Go framework designed for accelerated microservice development, ensuring you have robust tools for observability and scalability right from the start.
^ back to index
5. Building a reactive website in Go with go-fir
Abstract
Unlock the power of Go for web development! Join me as we build reactive websites using Go-Fir, a framework to streamline front-end and back-end integration. Learn how to create fast, scalable, and reactive web apps while leveraging Go’s efficiency!
Description
We’ll explore the power of go-fir to build fully reactive websites. As web development evolves, the demand for fast, interactive, and scalable applications grows. Go-Fir, a relatively new framework, provides a solution for creating highly reactive web applications by integrating the power of Go on both the backend and front end.
We will dive deep into how Go-Fir handles common web development challenges such as state management, real-time interactions, and efficient server-client communication. By leveraging Go’s concurrency model and type safety, Go-Fir enables developers to build robust web applications with fewer moving parts, making it easier to maintain and scale.
During the session, we will cover how Go-Fir facilitates data-driven UI updates. Whether you’re a seasoned Go developer or new to web development, there will be something for everyone. Learn how Go-Fir can help you build reactive websites with Go’s simplicity and efficiency!
Notes
I explored go-fir as a part of a technical writing engagement with LogRocket. I am predominantly a web developer with some experience in Go. I was impressed by the library, so I wrote a blog post for LogRocket and also made a YouTube video for my channel. Both of these resources are listed in the Community section of the package’s GitHub repo.
go-fir: https://github.com/livefir/fir
Blogpost: https://blog.logrocket.com/building-reactive-web-app-go-fir/
Youtube: https://www.youtube.com/watch?v=7hpXdG-Nw00
Note: My employer provides travel assistance.
^ back to index
6. Lookup inside Go Objects using Object Graph Notation Language (OGNL)
Abstract
Do you have the object path of a field and want to get its value? Use this library to achieve it.
It is inspired by Apache OGNL (https://commons.apache.org/dormant/commons-ognl/).
Description
You may have come across scenarios where you want to extract some information from your Go object, but the field to look up is only provided at runtime, as a string. In such cases, you need a way to return the value associated with the requested field name.
This library helps here.
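As a rough illustration of the idea (not the library’s actual API), a dotted path can be resolved with the reflect package; the lookup helper and the User/Address types below are made up for this example.
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// lookup walks a dotted field path such as "Address.City" through a struct value.
func lookup(obj interface{}, path string) (interface{}, error) {
	v := reflect.ValueOf(obj)
	for _, name := range strings.Split(path, ".") {
		if v.Kind() == reflect.Pointer {
			v = v.Elem()
		}
		if v.Kind() != reflect.Struct {
			return nil, fmt.Errorf("cannot look up %q: not a struct", name)
		}
		v = v.FieldByName(name)
		if !v.IsValid() {
			return nil, fmt.Errorf("no field %q", name)
		}
	}
	return v.Interface(), nil
}

type Address struct{ City string }

type User struct {
	Name    string
	Address Address
}

func main() {
	u := User{Name: "Gopher", Address: Address{City: "Singapore"}}
	city, err := lookup(u, "Address.City")
	fmt.Println(city, err) // Singapore <nil>
}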
Notes
^ back to index
7. gRPC in Go: A View from the Trenches of IoT
Abstract
In this talk I’ll share my insights from real-world use cases in IoT with Go and gRPC, leveraging Buf and ConnectRPC.
We’ll go over DevEx and best practices for API schemas and data streaming.
You’ll leave with a better understanding of gRPC in Go and will have new ideas to put into practice.
Description
If you’re looking for a first, or new glance, at gRPC in Go, this is the talk for you! :)
I’ll share my insights from real-world use cases in IoT with Go and gRPC, leveraging Buf and ConnectRPC.
We’ll cover everything from the developer experience (getting up and running, dependency management with Buf, schema linting, and schema evolution) to different API implementations, including message validation, error handling, bidirectional streaming, and more!
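To give a flavour of the ConnectRPC style, here is a minimal, hypothetical handler sketch; the greetv1 and greetv1connect packages stand in for code generated by buf from an assumed greet.proto and are not part of the talk’s actual codebase.
package main

import (
	"context"
	"log"
	"net/http"

	"connectrpc.com/connect"

	greetv1 "example.com/gen/greet/v1"        // hypothetical generated messages
	"example.com/gen/greet/v1/greetv1connect" // hypothetical generated service bindings
)

type GreetServer struct{}

// Greet handles a unary RPC; streaming handlers follow the same pattern with stream types.
func (s *GreetServer) Greet(
	ctx context.Context,
	req *connect.Request[greetv1.GreetRequest],
) (*connect.Response[greetv1.GreetResponse], error) {
	return connect.NewResponse(&greetv1.GreetResponse{
		Greeting: "Hello, " + req.Msg.Name + "!",
	}), nil
}

func main() {
	mux := http.NewServeMux()
	path, handler := greetv1connect.NewGreetServiceHandler(&GreetServer{})
	mux.Handle(path, handler)
	// Plain HTTP/1.1 works for Connect-protocol clients; gRPC clients need HTTP/2 (e.g. via h2c).
	log.Fatal(http.ListenAndServe("localhost:8080", mux))
}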
See you there!
Notes
Hi!
My name is Patrick Akil: software engineer, trainer, and podcast host. Go has been my favourite language for a long, long time. My main experience using Go is in e-commerce and IoT. Next to software engineering, I’ve given several Go fundamentals trainings to peers and other companies, including bol.com, which I’m proud of.
This will be my first conference talk, but I have public speaking experience from hosting conferences and meetups, and mainly from podcasting. I’ve been podcasting weekly and consistently for 3+ years.
Good luck with all the proposals! 🤞
^ back to index
8. Maximizing Scalability with Go and Redis: A Telemetry Processing Journey
Abstract
At Delivery Hero, we process 10,000 requests per second using Go and Redis. Join us to learn how this powerful duo handles high-load telemetry data efficiently and cost-effectively, with scalability, resource optimization, and continuous innovation through customized data flows.
Description
At Delivery Hero, we process a staggering 10,000 requests per second globally, particularly in our critical TIER1 flow, where we handle telemetry data primarily from riders’ phones. In this session, we’ll delve into how we harnessed the power of Go and Redis to handle this high-load, mission-critical system at an incredibly low cost.
Key Points:
Go and Redis as the Perfect Pair: Discover how Go and Redis form the backbone of our telemetry processing infrastructure. Go’s concurrency model and performance complement Redis’s high availability and resilience, enabling seamless handling of our demanding operations.
Efficient Data Management with Redis: Explore how Redis’s versatile features, including sorted sets and key expiration (TTL), enable efficient telemetry data storage, event queue management, and fraud prevention. Learn how Go’s native support for Redis interactions streamlines integration and enhances overall system performance.
Scalability Made Simple: Dive into how Go and Redis effortlessly scale to handle our high-volume telemetry flow. With Go’s lightweight footprint and Redis’s scalability features, such as automatic sharding and replication, we ensure our system remains responsive and reliable, even under peak loads.
Cost-Effectiveness and Resource Optimization: Learn how we achieved cost-effectiveness by leveraging the smallest Redis instances available, strategically deployed across multiple regions. Explore how Go’s efficient resource utilization and Redis’s pay-as-you-go pricing model contribute to significant cost savings without compromising performance.
Customized Data Flows and Experimentation: Our implementation with Redis has opened doors for various experiments, such as different location-update frequencies and different telemetry processing strategies, giving us the flexibility to optimise and innovate based on unique service needs and empowering continuous optimization and innovation.
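For illustration only (not Delivery Hero’s actual code), here is a minimal sketch with the go-redis v9 client, assuming a key layout where each rider’s recent pings live in a sorted set scored by timestamp and the whole key carries a TTL.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	key := "rider:42:locations"
	now := time.Now()

	// Store the latest ping, scored by its timestamp so range queries stay cheap.
	rdb.ZAdd(ctx, key, redis.Z{
		Score:  float64(now.Unix()),
		Member: fmt.Sprintf("%f,%f", 1.3521, 103.8198),
	})
	// Expire the whole set so inactive riders don't accumulate data.
	rdb.Expire(ctx, key, 15*time.Minute)

	// Read back the pings from the last 5 minutes.
	pings, err := rdb.ZRangeByScore(ctx, key, &redis.ZRangeBy{
		Min: fmt.Sprint(now.Add(-5 * time.Minute).Unix()),
		Max: "+inf",
	}).Result()
	fmt.Println(pings, err)
}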
Notes
^ back to index
9. Secure Coding with Golang: Preventing Cyber Attacks Through Robust Code
Abstract
This talk is about how Golang’s features like strong typing, memory safety, and testing tools support secure coding. We will talk about real-world examples to prevent cyber-attacks through robust code and continuous testing, reducing vulnerabilities in your software.
Description
In today’s digital age, the security of software systems is more critical than ever. Cyber attacks continue to exploit vulnerabilities in poorly written code, causing widespread data breaches and financial losses. Developers are now expected to write code that is not only efficient and scalable but also secure by design. While many modern programming languages offer features that support secure coding, Golang (Go) stands out for its simplicity, strong typing, built-in concurrency, and memory safety, making it a powerful choice for building resilient, secure applications.
This talk explores how Golang’s unique strengths can help mitigate common security threats such as injection attacks, memory-based exploits, and insecure APIs. Practical examples of secure coding in Go will demonstrate how features like its strict error handling and memory management reduce vulnerabilities from creeping into your codebase.
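As a small, hedged example of the kind of practice the talk covers: parameterized queries with database/sql keep user input out of the SQL text, and errors are handled explicitly rather than ignored. The table, driver, and connection string below are illustrative.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // hypothetical choice of Postgres driver
)

func findUser(db *sql.DB, email string) (string, error) {
	var name string
	// The placeholder ($1) keeps user input out of the SQL text entirely.
	err := db.QueryRow("SELECT name FROM users WHERE email = $1", email).Scan(&name)
	if err != nil {
		return "", err // never swallow errors; they often carry security-relevant context
	}
	return name, nil
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	name, err := findUser(db, "gopher@example.com")
	log.Println(name, err)
}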
Beyond writing secure code, thorough testing is critical to ensure its security holds up in real-world conditions. Golang provides a robust testing framework that enables developers to write unit tests, execute fuzz testing, and implement continuous security checks, all of which help identify vulnerabilities before they become exploited. Testing in Go isn’t an afterthought—it’s an integral part of the development lifecycle, ensuring that secure coding practices translate into secure applications.
Whether you’re a developer, security expert, or software architect, this session will provide actionable insights into writing secure, well-tested code using Golang. By combining secure coding principles with effective testing strategies, you can significantly reduce your software’s attack surface and protect your systems from evolving cyber threats.
Notes
Dear Organizers,
I am submitting a CFP for the GopherCon Singapore event, where I would like to present a talk titled “Secure Coding with Golang: Preventing Cyber Attacks Through Robust Code.”
With over 20 years in the IT industry, I have managed multiple projects across various domains and technologies. A recurring theme I’ve observed is the lack of security considerations at the coding level; developers often assume that security testing will be handled by a separate team. I firmly believe that incorporating security from the design phase can resolve many issues upfront. Additionally, making testing a mandatory part of the development process is crucial.
I am passionate about security and have been actively researching and working in this area for the past few months. While I may not be an expert, my experience has equipped me with valuable insights that I am eager to share, helping the audience take away practical knowledge.
Technical Requirements:
For my presentation, I will need a laptop with Microsoft Office or Google Slides to display my presentation. I would also appreciate having Golang installed along with VSCode for code demonstrations. If I can use my own laptop for the presentation, I would only require a projector to connect it.
Thank you for considering my proposal. I look forward to the opportunity to contribute to the event. Please let me know if you need any additional information.
^ back to index
10. Building a Blog Web Application with Golang.
Abstract
This talk will be about how to build a blog web application using Golang. Learn routing, data handling, user authentication, and front-end integration with Go’s powerful tools. Perfect for developers looking to create scalable, efficient web apps from scratch.
Description
Topic Brief:
One of the ways to exploit the power of a robust programming language like Golang is by using it to develop scalable, efficient web applications. In this session, we’ll explore creating a blog web application from scratch using Go, covering essential aspects like routing, data handling, user authentication, and front-end integration.
We’ll begin by setting up a basic web server using Go’s built-in net/http package, demonstrating how to configure routes and handle HTTP requests. Next, we’ll dive into database integration using tools like GORM or sqlx to store and retrieve blog posts efficiently. We’ll also cover implementing user authentication, including secure password handling with bcrypt and session management using gorilla/sessions.
The talk will further explore how to render dynamic content using Go’s HTML templating system and enhance the design with CSS frameworks such as Bootstrap or Tailwind CSS. For those seeking to add interactivity, we’ll briefly touch on using JavaScript or Alpine.js.
Finally, we’ll discuss how to package and deploy the blog application on popular platforms like Heroku, DigitalOcean, or Google Cloud. This talk will provide practical insights and hands-on knowledge, making it suitable for both beginners and experienced developers looking to leverage Golang for scalable, secure, and maintainable web applications.
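As a taste of the starting point, here is a minimal sketch using only net/http and html/template; the in-memory posts slice is a stand-in for the database layer covered later in the outline below.
package main

import (
	"html/template"
	"log"
	"net/http"
)

type Post struct {
	Title, Body string
}

var posts = []Post{{Title: "Hello, Go", Body: "First post!"}}

var tmpl = template.Must(template.New("index").Parse(
	`<h1>My Blog</h1>{{range .}}<article><h2>{{.Title}}</h2><p>{{.Body}}</p></article>{{end}}`))

// index renders the post list; html/template escapes output by default.
func index(w http.ResponseWriter, r *http.Request) {
	if err := tmpl.Execute(w, posts); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/", index)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}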
Outline:
- Introduction to Golang for Web Development: We’ll start by discussing why Golang is well-suited for building web applications, focusing on its concurrency, simplicity, and performance. A quick setup of a basic web server using net/http will be demonstrated.
- Structuring the Blog Application: This section will cover the foundational aspects of building a blog: setting up routes using net/http or Gorilla Mux for RESTful routing, serving static files, and rendering pages with Go’s HTML templates.
- Handling Blog Posts: We’ll go through the CRUD (Create, Read, Update, Delete) operations for blog posts, integrating a database with GORM or sqlx to store posts, and fetching them efficiently.
- User Authentication: Security is crucial in web applications. Here, we’ll implement user login and registration, hash passwords securely using bcrypt, and manage user sessions with gorilla/sessions.
- Front-End Integration: The front end of the application will be built using Go’s HTML templating system. We’ll style the app with CSS frameworks like Bootstrap or Tailwind CSS and explore adding client-side interactivity with JavaScript or Alpine.js.
- Deploying the Application: We’ll conclude by demonstrating how to package the application for deployment on cloud platforms like Heroku, DigitalOcean, or Google Cloud.
- Conclusion: The session will end with a recap of key takeaways.
Notes
Notes
Background on My Experience:
With over 20 years of experience in the IT industry, I have worked on various projects across diverse domains and technologies. My passion for web development and security has led me to focus on Golang as a powerful tool for building scalable applications.
As part of learning Golang, I am working on creating a personal blog where I can share my learnings, and I hope to share this journey with the community as well. My approach emphasizes best practices in coding alongside practical insights, ensuring attendees leave with valuable knowledge they can apply.
Technical Requirements:
For my talk on Building a Blog Web Application with Golang, I would require the following:
- A projector for displaying my presentation slides and live coding demos.
- A laptop with Golang installed, along with GORM or sqlx for database integration.
- VSCode or any IDE of your choice for coding demonstrations.
- Internet access for any online resources or APIs that may be needed during the session.
- If possible, access to a database (e.g., SQLite or PostgreSQL) for live demos.
If I can use my laptop for the presentation, I would only require an internet connection and a projector connected to the laptop.
I am excited about the opportunity to share my insights and engage with fellow developers during this session. Thank you for considering my proposal!
^ back to index
11. Ocean's 1011 - Think Like a Cyber Criminal to Protect Your Business in the Age of Digital Heists
Abstract
How can organizations protect themselves against disruptive cyber attacks when cyber criminals are only becoming smarter? To beat a criminal at their game is to start thinking like one. Vishal will explore how to analyze your organization from a cyber criminal’s perspective and proactively defend it.
Description
Cybercrime has evolved into sophisticated “digital heists,” where attackers operate with precision, strategy, and relentless focus. In this thrilling and highly interactive session, I will show you how to “think like a criminal” to stay ahead of today’s cyber threats.
Drawing inspiration from real-world case studies, like the Clorox supply chain attack, we’ll break down how cybercriminals plan and execute their digital heists, targeting your most valuable assets. This isn’t just theory—it’s a deep dive into the mindset of attackers and how they exploit vulnerabilities, evade defenses, and strike where it hurts the most.
What makes this session different?
Interactive and Actionable Insights: You’ll step into the shoes of attackers, learning how they choose their targets and evade detection. Together, we’ll explore real strategies to build a resilient defense.
Strategic Focus: This presentation goes beyond technical jargon, focusing on how organizations can proactively protect their critical business operations and meet regulatory expectations like SEC cybersecurity rules.
Data-Driven Defense Planning: Learn how to align your security investments with your company’s highest-risk areas, ensuring that every dollar spent delivers real risk reduction.
Key Takeaways for the Audience:
Adopt an attacker’s mindset to outsmart cyber threats.
Learn how to defend critical business functions with precision.
Walk away with actionable insights on how to protect your business from modern digital heists.
This presentation will not only engage but equip cybersecurity professionals, executives, and decision-makers with the tools they need to anticipate and counteract the next cyber attack.
Notes
With over 30 years of hands-on experience in cybersecurity, I’ve spent my career developing strategies to defend the world’s leading organizations from increasingly sophisticated cyber threats. My experience spans industries and includes work with Fortune 100 companies such as Deutsche Bank, Citibank, and Capital One. I’ve also held key leadership roles at Deloitte, PwC, and Grant Thornton, where I built and led transformative cybersecurity programs.
What sets me apart is my unique approach: “Think like a criminal to beat them.” This mindset has allowed me to anticipate and outmaneuver cyber threats by understanding how attackers think and operate. I don’t just focus on theoretical concepts—I’ve implemented practical, real-world strategies that protect critical business operations.
My thought leadership has been featured in The Wall Street Journal, MIT Review, and Risk Management Journal, among others. These insights aren’t just for discussion—they come from my direct experience helping companies navigate complex security challenges, aligning cybersecurity measures with business needs and regulatory demands.
I’ve also served as a strategic advisor to startups and BluVentures Investors, helping organizations proactively address the next wave of cyber risks. This combination of hands-on experience, innovative thinking, and leadership in shaping cutting-edge security strategies makes me uniquely qualified to speak on this topic.
My deep understanding of how cybercriminals operate, paired with my extensive experience advising top-tier companies, equips me to deliver actionable insights that can transform how organizations approach cybersecurity.
^ back to index
12. Surviving Panics, Fatal Errors, and Crashes: Lessons from the Trenches
Abstract
Join us to uncover how we overcame system crashes and panics, transforming chaos into stability. Learn our survival tactics: logging, panic recovery, reading stack traces, and handling errors in Go. Gain practical insights to build robust applications and tackle critical issues with confidence.
Description
Ever found yourself in the midst of chaos caused by panics or unexpected crashes? Join me as I share our team’s journey through the wilderness of debugging and resolving critical issues that threatened the stability of our system. In this engaging presentation, we’ll explore the challenges we faced, the lessons we learned, and the strategies we employed to emerge victorious.
With the rollout of a new feature, our system encountered pod crashes triggered by a dreaded “fatal error: concurrent map iteration and map write.” What ensued was a month-long saga of investigation, root cause analysis, and relentless pursuit of solutions. Throughout this ordeal, we discovered invaluable insights that transformed our approach to handling errors and ensuring system resilience.
Our arsenal of survival tactics included:
- Logging, logging, and more logging: Amplifying our logs to capture crucial insights into system behavior.
- Strategic placement of panic recovery: Learning the importance of recovering from panics within the goroutine where they occur.
- The art of reading stack traces: Recognizing the significance of dissecting stack traces, even when they initially seem perplexing.
- Uncovering the hidden impact of non-critical functionality: Understanding how seemingly innocuous components can disrupt critical workflows.
- Distinguishing between fatal errors and panics: Recognizing the distinction between different types of errors and their implications.
- Identifying unrecoverable errors in Go: Gaining awareness of the range of errors that cannot be recovered from in the Go programming language.
By sharing our hard-won wisdom and practical insights, we aim to empower fellow developers to navigate similar challenges with confidence and resilience. Whether you’re a seasoned engineer or a newcomer to the field, this presentation offers invaluable guidance for building robust, reliable applications that stand the test of time.
Join us as we unravel the mysteries of error handling and equip ourselves with the tools and knowledge to overcome any obstacle that comes our way. Let’s transform setbacks into opportunities for growth and emerge stronger together.
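As a minimal illustration of the "recover within the goroutine" tactic above: recover only works in a deferred function on the goroutine that panicked, and it does not help with runtime fatal errors such as concurrent map writes, which terminate the process regardless. The safeGo helper below is a sketch, not our production code.
package main

import (
	"log"
	"time"
)

// safeGo runs fn in its own goroutine and turns panics into log lines
// instead of crashing the whole process.
func safeGo(name string, fn func()) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				log.Printf("goroutine %s recovered from panic: %v", name, r)
			}
		}()
		fn()
	}()
}

func main() {
	safeGo("worker", func() {
		panic("something went wrong")
	})
	time.Sleep(100 * time.Millisecond) // give the worker time to run (demo only)
	log.Println("main is still alive")
}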
Notes
I presented this as a lightning talk at GopherCon EU this June.
Here is the actual presentation: https://docs.google.com/presentation/d/19rbQNZ5V91xD7PfHjVkskb6n14hehU8YgrS_EkyLc5M/edit
^ back to index
13. To Optimize or Not to Optimize: Navigating Performance in Go
Abstract
Optimization is a double-edged sword. While it’s crucial for creating efficient applications, premature optimization can lead to unnecessary complexity and wasted time. This talk delves into the nuanced decision-making process behind when, what, and how to optimize your Go applications.
Description
As developers, we never purposefully build non-performant software; on the contrary, we usually fall into the trap of premature optimization, which is one of the most common pitfalls we have to be aware of at all times. In the world of development, we often find ourselves walking a tightrope between the desire for peak performance and the need for maintainable, readable code.
Let’s face it: we’ve all been there. You start with a simple, elegant solution, and before you know it, you’re knee-deep in a complex optimization rabbit hole. But what if I told you there’s a better way?
In this talk, we’ll dive into the real problems that arise when we over-optimize or optimize without need:
- The Complexity Trap: You optimize early, thinking you’re future-proofing your code. Fast forward a few months, and now you’re struggling to understand your own creation. We’ll explore how to avoid turning your codebase into an indecipherable maze.
- The Time Sink: Remember that week you spent shaving milliseconds off a function that’s called once in a blue moon? Yeah, we’ve all been there. We’ll discuss how to identify what actually needs optimization and what’s just a waste of your precious time.
- The Scalability Mirage: Your benchmarks look great on your machine, but somehow, your application falls flat in production. We’ll uncover the pitfalls of optimizing for the wrong scenarios and how to avoid them.
- The Premature Celebration: You’ve optimized your code to perfection, only to realize you’ve solved a problem you don’t have (and created three new ones in the process). We’ll learn how to resist the siren call of unnecessary optimization.
But it’s not all doom and gloom! We’ll also explore practical solutions and strategies for when optimization is truly needed:
- Measure, Don’t Guess: We’ll dive into Go’s powerful profiling tools like pprof and learn how to let data drive our optimization efforts.
- Benchmark Like a Pro: Discover how to write meaningful benchmarks that actually reflect real-world usage of your Go applications.
- Optimize for Readability First: We’ll explore techniques to write clear, maintainable Go code that’s primed for performance improvements when they’re actually needed.
- The Art of Compromise: Sometimes, the most elegant solution isn’t the fastest, and the fastest isn’t the most maintainable. We’ll discuss how to strike the right balance for your specific needs.
Through real-world examples and hard-earned lessons from the Go community, you’ll leave this talk with a practical toolkit for making smart optimization decisions. Whether you’re a Go newbie or a seasoned gopher, you’ll gain insights into writing performant Go code without falling into the premature optimization trap.
Join me as we explore the art and science of Go optimization, and learn to write code that’s not just fast, but also maintainable, readable, and right for your needs. Let’s optimize our approach to optimization!
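As a small, hedged example of the "measure, don't guess" workflow: the benchmark below (joinIDs is an arbitrary stand-in for a suspected hot path) can be run with go test -bench=JoinIDs -benchmem -cpuprofile=cpu.out and inspected with go tool pprof cpu.out.
package join

import (
	"strconv"
	"strings"
	"testing"
)

// joinIDs is a stand-in for whatever hot path you suspect needs optimizing.
func joinIDs(ids []int) string {
	var b strings.Builder
	for i, id := range ids {
		if i > 0 {
			b.WriteByte(',')
		}
		b.WriteString(strconv.Itoa(id))
	}
	return b.String()
}

func BenchmarkJoinIDs(b *testing.B) {
	ids := make([]int, 1000)
	for i := range ids {
		ids[i] = i
	}
	b.ResetTimer() // exclude setup cost from the measurement
	for i := 0; i < b.N; i++ {
		joinIDs(ids)
	}
}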
Notes
Why should you select this talk?
Whether you’re building a small service or scaling a large application, the principles shared here will be invaluable in your Go development journey.
Attendees will gain insights on learning when to optimize, how to use Go’s tools effectively, and how to write performant code without sacrificing readability.
Along with that, I have also presented talks at conferences before, which include:
^ back to index
14. Make Go Apps faster with Profile-Guided Optimization
Abstract
Imagine if your Go program could adapt based on real-world data, fine-tuning itself into a leaner, faster version. With profile-guided optimization, you can do just that. With PGO, your app uses actual performance data to drive smarter compiler optimizations, breaking past performance barriers.
Description
In the quest for faster, more efficient Go applications, we often focus on writing clean, abstracted code. However, there’s only so much optimization one can do in the code. It is also important to understand how the code you’ve written is interpreted by the compiler, and whether the compiler itself can optimize the performance of your code.
This talk explores how we can harness the power of Profile-Guided Optimization (PGO) to make our Go applications faster.
We’ll start by examining problems like slow function calls and binaries bloated by excessive inlining.
By default, the compiler makes optimization decisions based on static analysis, but what if the compiler could learn from real-world application behavior?
Enter Profile-Guided Optimization (PGO), which allows the compiler to make more informed decisions based on actual runtime behavior, potentially leading to substantial speed improvements.
In the talk, we will touch upon topics like:
- The process of compilation
- The role of Linux perf in collecting runtime data
- How compilation can be made more effective by feeding runtime data back in
- How instrumentation-based feedback-directed optimization works
- How sampling-based profile-guided optimization works
Through practical examples, we’ll also demonstrate how PGO can significantly improve application performance. We’ll walk through the process of building and benchmarking a Go application with and without PGO, showcasing real performance gains.
By the end of this talk, you’ll understand how to leverage PGO to create leaner, faster Go binaries without sacrificing code clarity. You’ll gain insights into the compilation process and learn how to make your clean, abstracted code run with the efficiency of hand-optimized routines.
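For illustration, one common way to collect a profile for PGO (assuming Go 1.21+) is to expose net/http/pprof in a representative environment, save a CPU profile as default.pgo next to the main package, and rebuild; the sketch below shows that shape, with the relevant commands in comments.
// Collect and apply a profile:
//   curl -o default.pgo "http://localhost:6060/debug/pprof/profile?seconds=30"
//   go build -pgo=auto ./...   # picks up default.pgo in the main package directory
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func main() {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the real application workload runs here ...
	select {}
}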
Notes
Why should you select this talk?
By attending this talk, developers will gain practical knowledge on how to leverage PGO effectively, enabling them to optimize their Go apps beyond what traditional compilation methods can achieve. This knowledge is particularly valuable as applications grow in complexity and scale, where even small performance gains can have a major impact. The talk offers both theoretical insights and hands-on examples, equipping attendees with immediately applicable skills to enhance their Go development practices.
Along with that, I have also presented talks at conferences before, which include:
^ back to index
15. Bugs Lightyear Away With Fuzz
Abstract
Imagine your Go code finding its own bugs before production, eliminating worries about unexpected breaks. Go v1.18’s built-in fuzz testing is a game-changer, especially for security-sensitive projects. No security expertise needed – just safer, more robust code. Ready to revolutionize your testing?
Description
In the process of building software, we write test cases to ensure different features within a system perform as expected. We try to cover most edge cases, but we’re only human – we might miss a few. These oversights can lead to security vulnerabilities, a programmer’s nightmare.
Traditional testing methods like unit tests often fail to simulate real-world scenarios and unexpected inputs, leaving potential security risks undiscovered. These hidden vulnerabilities are one of the root causes of cybersecurity threats.
Fuzz testing solves this critical problem of uncovering unknown bugs and flaws in software applications.
Fuzzing is a process of finding defects and vulnerabilities in which the system under test is repeatedly injected with invalid, malformed, and unexpected inputs. It can reach edge cases which humans often miss.
For example, in Go, a simple fuzz test for a string parsing function might look like this:
func FuzzParse(f *testing.F) {
	f.Add("hello, world") // seed corpus entry to guide the fuzzer
	f.Fuzz(func(t *testing.T, input string) {
		Parse(input) // your function to test; it should not panic on any input
	})
}
This test will generate thousands of random string inputs, potentially uncovering edge cases you never considered.
By identifying and addressing these issues early on, fuzz testing helps improve the overall security, stability, and reliability of software systems, mitigating the risk of costly breaches and system failures.
As of August 2023, OSS-Fuzz has helped identify and fix over 10,000 vulnerabilities and 36,000 bugs across 1,000 projects.
With the release of v1.18, Go has finally added Fuzzing or fuzz testing as a part of its standard toolchain. And what is great about this feature is that you don’t need to be a security expert to write the fuzz tests.
Think of fuzzing as the ultimate form of unit testing.
Join me to explore how fuzz testing can revolutionize your approach to software quality and security in the Go ecosystem.
Notes
Outline
While working on a project, my team faced a serious issue which we were not able to catch in time. We could have avoided that issue very easily if there had been fuzz tests in place.
We are generally used to writing unit tests, integration tests, etc. for our software applications, but what is this fuzz testing thing? Is it like unit testing? Let’s take a look at exactly what fuzz testing is and what the hype is about.
Now that we know what fuzz tests are, let’s check out how to fuzz test your Go application. We will take a function which is normally unit testable, write fuzz tests for it live, and see whether it helps us find any new, unknown bugs which we didn’t realize existed.
As important as it is for us to know how to write fuzz tests, it is equally important to know how the fuzzing process works internally and how the fuzzing engine generates and mutates seed corpus entries to find new branches in our code and increase test coverage.
When dealing with unit tests, we define the input to a function under test and the expected output for that input, but the same is not the case with fuzz tests. Here we will take a look at a few cases which make writing fuzz tests worth it.
- Problems Fuzzing can identify: Unit tests are great for verifying individual units of code, while fuzz testing excels at exploring the input space and finding unexpected bugs, making it a valuable tool for improving software security and robustness.
- Tips to write fuzzable code: To fully utilize the benefits of fuzz testing, we need to ensure that we follow the best practices. We will explore some ways to ensure that fuzz testing is done efficiently and how to write code which is easier to fuzz test.
Why should you select this talk?
Even though fuzzing was possible before Go 1.18 with the help of third-party fuzzers, now that it is included in the standard toolchain, it is expected to be used a lot alongside the other utilities from the testing library. Since fuzzing is still a relatively new feature in Go, people will be exploring it in the coming time, and what is better than a talk to get started? Along with that, I have a good understanding of the topic, so attendees of any level will be able to connect and benefit from it.
I have presented a few talks on fuzzing at the GoLab Conference 2022, GoWest Conference 2022, and GopherCon India 2023, so I have a few things to share with the community about how my experience has been with the feature, its good parts and also its bad parts.
^ back to index
16. Broken Go: The Unexpected Behaviors
Abstract
Did you know Go’s time.RFC3339 constant isn’t a valid RFC3339 timestamp? Or that you can’t remove a NaN key from a map without clearing it? Or why some non-nil errors are actually nil? Let’s explore these quirks and take a deep dive into Go’s unexpected behaviors.
Description
“Broken Go” is a deep dive into the subtle, often counterintuitive behaviors of the Go programming language. This talk explores edge cases and lesser-known intricacies that can catch even experienced developers off guard.
We’ll dissect a range of unexpected behaviors, including:
- The peculiarities of nil error handling and type assertions
- Surprising outcomes in time formatting and parsing
- Quirks in I/O operations and their implications for testing
- The nuanced behavior of panic and recover
- Unexpected merging in JSON unmarshaling
- Concurrency gotchas in testing and benchmarking
- The intricacies of map operations, including NaN key handling
- Limitations and surprises in Go’s implementation of generics
Each topic will be examined through code examples, highlighting potential pitfalls and offering insights into Go’s internal workings. We’ll explore why these behaviors exist, their impact on day-to-day programming, and strategies to work with (or around) them effectively.
This talk is designed for Go developers looking to deepen their understanding of the language’s internals. By shedding light on these corner cases, attendees will gain valuable insights that will enhance their ability to write more robust, efficient, and idiomatic Go code. Whether you’re debugging complex issues or architecting large-scale systems, understanding these unexpected behaviors is crucial for mastering Go.
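As a taste of the first category, here is the classic typed-nil quirk in minimal form; the MyErr type and doWork function are invented for illustration.
package main

import "fmt"

type MyErr struct{ msg string }

func (e *MyErr) Error() string { return e.msg }

func doWork(fail bool) error {
	var err *MyErr // nil pointer
	if fail {
		err = &MyErr{msg: "boom"}
	}
	return err // the returned interface always carries the *MyErr type, even when the pointer is nil
}

func main() {
	err := doWork(false)
	fmt.Println(err == nil)         // false, surprisingly
	fmt.Printf("%T %v\n", err, err) // *main.MyErr <nil>
}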
Notes
Outline :
- Introduction
  - Brief overview of the talk’s purpose
  - Why understanding unexpected behaviors matters
- Nil Errors and Error Handling
  - Non-nil errors that are actually nil
  - Best practices for error checking
- Time and Date Quirks
  - The time.RFC3339 constant mystery
  - Implications for timestamp parsing and formatting
- I/O and Testing Surprises
  - os.Stdout changes in testable examples
  - Write() method and slice retention
- Panic and Recovery Gotchas
  - Limitations of recover()
  - Best practices for handling panics
- JSON Unmarshaling Behavior
  - Merging behavior for structs, slices, and maps
  - Potential issues and how to handle them
- Concurrency and Testing
  - defer and parallelized tests
  - Benchmark behavior with long setup times
- Map and Slice Oddities
  - clear() function behavior
  - NaN keys in maps and removal challenges
- Generics and nil
  - Limitations of generic type T any
  - Implications for nil checking
- Conclusion and QA
  - Recap of key takeaways
  - Resources for further exploration
Why should you select this talk?
This talk on “Unexpected Go” offers invaluable insights into the nuanced behaviors of Go that often elude even seasoned developers. Understanding these quirks is crucial for writing robust, efficient code and for mastering the art of debugging in Go.
Along with that, I have also presented talks at conferences before, which include:
^ back to index
17. Paradigms of Rust for the Go developer
Abstract
The talk delves into 3 key paradigms: Go’s CSP model for concurrency, Rust’s data race prevention through ownership and borrowing, and Rust’s opt-in shared memory model. With practical examples and comparisons, attendees will learn how these paradigms foster robust, efficient, and safe software.
Description
Go and Rust introduce fundamentally different paradigms that significantly impact how we approach software development. This talk delves into three key paradigms: Go’s CSP model for concurrency, Rust’s data race prevention through ownership and borrowing, and Rust’s opt-in shared memory model.
Attendees will learn how these paradigms guide the design of robust, efficient, and safe software, supported by practical examples and comparisons.
The talk aims to use side-by-side examples of solving the same problem in Go and Rust.
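As a minimal Go-side sketch of the CSP paradigm being contrasted (the Rust counterpart is left to the talk): goroutines share data by communicating over channels rather than by locking shared memory.
package main

import "fmt"

func main() {
	jobs := make(chan int)
	results := make(chan int)

	// Worker: receives jobs, sends results; no shared mutable state.
	go func() {
		for j := range jobs {
			results <- j * j
		}
		close(results)
	}()

	// Producer: sends work and signals completion by closing the channel.
	go func() {
		for i := 1; i <= 5; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	for r := range results {
		fmt.Println(r)
	}
}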
Outline
Notes
I am the right person to speak on this subject because:
- I have deep understanding of Go’s CSP model and Rust’s ownership and borrowing rules.
- Hands-on experience writing robust, efficient, and safe software in both languages.
- Skilled at simplifying complex concepts with practical examples.
^ back to index
18. Evolving a Commercial Open-Source Go Project
Abstract
Bytebase is an open-source Database DevOps platform that started in 2021. It has 11K+ GitHub stars and 13K+ PRs. Bytebase uses Go for the backend, with a Go LOC of 350K+. We want to share how we evolve our Go codebase around scaling, security, and engineering velocity to meet our customers’ demands.
Description
Project
Presentation outline
- Why we chose Go in the first place.
- How we implement the data access layer (we don’t use ORM).
- How we implement the API layer (we have migrated from HTTP to gRPC)
- How we implement the access control (we have migrated from casbin to our own self-built IAM)
- How we architect the codebase to make it extensible. Bytebase needs to integrate with 20+ databases, all mainstream version control systems, and many other integrations. We design a plugin system to achieve this.
- The specific Go features and ecosystems we leverage. embed, build tags, channel, and dependent packages.
- Specific challenges we have. e.g. We have to build parsers for each supported database.
- Why we think Go is the right choice.
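As a generic illustration of the extensibility idea (not Bytebase’s actual code), a driver-style registry lets each database integration register a factory behind a shared interface, typically from an init() in a package selected via blank imports or build tags.
package driver

import "fmt"

// Driver is the common surface every database integration implements.
type Driver interface {
	Ping() error
	Execute(statement string) error
}

var registry = map[string]func(dsn string) (Driver, error){}

// Register is called by each driver package, typically from init().
func Register(engine string, factory func(dsn string) (Driver, error)) {
	registry[engine] = factory
}

// Open picks the right implementation at runtime by engine name.
func Open(engine, dsn string) (Driver, error) {
	factory, ok := registry[engine]
	if !ok {
		return nil, fmt.Errorf("unsupported engine %q", engine)
	}
	return factory(dsn)
}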
Notes
I have 16 years of professional programming experience. I picked up Go in 2015; before that, I mainly used C++ and Java. I wrote the first version of Bytebase and then evolved the codebase with my teammates. In retrospect, I made some good choices, while also introducing quite a few costly mistakes. I would like to share this journey.
^ back to index
19. Go Plug and Play: Runtime Extensibility with Plugins
Abstract
Discover the untapped potential of Go’s plugin system! In this talk, we’ll dive into dynamic loading, module extensibility, and how you can supercharge your Go apps with runtime plugins—making your software more modular, flexible, and future-proof.
Description
Since Go 1.8 introduced plugins, they’ve been an underused gem, often misunderstood and underexplored. But plugins offer a game-changing way to extend your Go apps dynamically at runtime! This talk will cover the basics of Go’s plugin architecture, including how to load shared objects and the key benefits of decoupling functionality. We’ll also touch on real-world use cases, pitfalls, and best practices when designing systems with plugins, ensuring both stability and flexibility. Whether you’re building large-scale services or lightweight applications, you’ll leave with actionable insights into harnessing the full power of Go plugins to take your projects to the next level.
Suitable for developers building microservices, tools, or enterprise applications seeking to enhance modularity and extensibility in Go projects.
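For orientation, here is a minimal sketch of the standard library’s plugin package in action: the host loads a shared object built with go build -buildmode=plugin and looks up an exported symbol. The file and symbol names are illustrative, and plugins are only supported on Linux, FreeBSD, and macOS.
package main

import (
	"log"
	"plugin"
)

func main() {
	// The plugin is built separately: go build -buildmode=plugin -o greeter.so ./greeter
	p, err := plugin.Open("greeter.so")
	if err != nil {
		log.Fatal(err)
	}

	sym, err := p.Lookup("Greet") // e.g. `func Greet(name string) string` exported by the plugin
	if err != nil {
		log.Fatal(err)
	}

	greet, ok := sym.(func(string) string)
	if !ok {
		log.Fatal("unexpected type for Greet symbol")
	}
	log.Println(greet("gophers"))
}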
Notes
As a Go developer with experience in building and optimizing services, my background includes:
- Over 5 years of professional Go development experience
- Successfully optimizing large-scale Go services handling millions of requests per day
- Contributing to open-source projects focusing on ML
^ back to index
20. Faster I/O operations using io_uring
Abstract
Tired of slow I/O operations in your Go apps?
I’ll talk about how to dramatically improve performance and scalability for both file & network I/O using io_uring! I’ll share code snippets & metrics to show how io_uring can revolutionize Go development.
Build faster servers and new LLM-based apps.
Description
Are you frustrated with slow I/O operations in your Go applications?
Struggling to handle high-concurrency workloads or achieve the desired level of responsiveness?
If so, it’s time to discover the transformative power of io_uring.
In this talk, we’ll dive deep into the challenges of I/O-bound Go applications and explore how io_uring can revolutionize your approach. From slow file access to network latency, we’ll discuss the common pain points that developers face. Whether you’re building high-performance servers, complex LLM-based apps, data processing pipelines, or any other I/O-intensive Go application, this talk will equip you with the knowledge and tools to overcome common challenges and achieve exceptional results. Don’t miss this opportunity to learn how io_uring can transform your Go development journey.
Notes
^ back to index
21. Go Test, Go Further: Evolving Your Testing Strategy
Abstract
Unlock the secrets to painless Go testing! Break free from unwieldy TableDrivenTests and master effortless mock data creation for database tests. Discover two game-changing techniques that will revolutionize your testing workflow, save countless hours, and make Go tests a joy to write and maintain.
Description
Go developers face two common challenges in testing: managing complex test suites and preparing mock data for integration tests. While TableDrivenTests are the standard approach in Go, they can sometimes lead to unmanageable test code as complexity grows. Additionally, integration tests, regardless of the testing approach, often require extensive setup for mock data, especially when working with databases. This talk explores two powerful solutions: the function-per-test pattern to improve test manageability, and the gofacto library to simplify mock data preparation. By the end of this session, you’ll have practical strategies to make your Go tests more maintainable and your integration testing process more efficient.
- The Go testing landscape
  - Overview of standard testing practices in Go
  - Introduction to TableDrivenTests and their benefits
- Challenge 1: Unmanageable test code
  - Scenarios where TableDrivenTests become unwieldy
  - The cost of maintaining complex TableDrivenTests
- Solution 1: Function-per-test pattern
  - Introduction to the function-per-test approach
  - Benefits of isolated test logic
  - Real-world examples comparing TableDrivenTests and function-per-test approaches
- Challenge 2: Complex mock data preparation for integration tests
  - The struggle with setting up test data for database integration tests
  - Time and effort costs in preparing diverse test scenarios
- Solution 2: Leveraging gofacto for efficient mock data
  - Introduction to the gofacto library
  - How gofacto simplifies mock data preparation
  - Demonstration of reduced boilerplate and improved test readability
- Best practices and tips
  - When to use TableDrivenTests vs. function-per-test
  - Integrating gofacto into your testing workflow
  - Strategies for maintaining test suites as projects grow
The function-per-test pattern offers a solution to unmanageable test suites, improving code readability and ease of debugging. Meanwhile, the gofacto library streamlines mock data preparation for integration tests, significantly reducing setup time and effort. By adopting these complementary approaches, Go developers can create more maintainable and efficient test suites, especially for complex and database-heavy applications. These techniques don’t replace TableDrivenTests entirely but expand your testing toolkit, allowing you to choose the right approach for each scenario. With these strategies, you’ll be well-equipped to write clear, efficient tests that scale with your Go projects, ensuring robust and maintainable codebases.
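As a minimal sketch of the function-per-test pattern (applyDiscount is a made-up example, and gofacto usage is left to the talk), each scenario gets its own small, named test instead of another row in a growing table.
package order

import "testing"

func applyDiscount(total float64, vip bool) float64 {
	if vip {
		return total * 0.9
	}
	return total
}

func TestApplyDiscount_RegularCustomerPaysFullPrice(t *testing.T) {
	if got := applyDiscount(100, false); got != 100 {
		t.Fatalf("got %v, want 100", got)
	}
}

func TestApplyDiscount_VIPGetsTenPercentOff(t *testing.T) {
	if got := applyDiscount(100, true); got != 90 {
		t.Fatalf("got %v, want 90", got)
	}
}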
Notes
This talk is designed to benefit Go engineers of all levels, from beginners to experienced developers. While the content will be particularly resonant with those who have backend development experience, especially in projects involving complex business logic or database interactions, the concepts are accessible and valuable to all Go programmers.
I’ve conducted extensive research on these testing approaches and have successfully implemented them within our company. The results have been significant: we’ve seen a substantial reduction in time spent writing tests, markedly cleaner and more manageable test code, and improved team productivity and overall code quality. To address the challenges of mock data creation, I developed a custom factory library from scratch. This library has been battle-tested in both my side projects and our company’s production codebase, demonstrating its effectiveness in real-world scenarios.
The presentation will include practical code examples and before/after comparisons to clearly illustrate the benefits of these techniques. I’ll also share best practices and potential pitfalls to watch out for, ensuring attendees can immediately apply these concepts in their work. By sharing these insights and tools, I aim to help the Go community write more efficient, maintainable tests, ultimately leading to higher quality software and more productive development processes.
^ back to index
22. Think Pragmatically: Elevating Your Go Development Skills
Abstract
Learn how pragmatic programming enhances Go development. This talk covers simplicity, effective tooling, and collaboration for building robust apps. Discover real-world examples and best practices to achieve cleaner code, better team dynamics, and more successful projects.
Description
Have you wondered why some Go projects feel like a breeze to work on, while others turn into a never-ending nightmare? The answer lies in the principles of pragmatic programming – a mindset that can revolutionise the way you approach software development.
Outline
-
Introduction [2 min]
- What is pragmatic programming and its relevance to Go development
-
Key Principles of Pragmatic Programming in Go [4 min]
- Care About Your Craft: Importance of writing clean, maintainable code
- Think! About Your Work: Encouraging critical evaluation of solutions
- You Have Agency: Empowering developers to take ownership of their code
-
Tools and Techniques for Pragmatic Go Development [4 min]
- Effective Go: Best practices for writing idiomatic Go code
- Error Handling: Go’s approach to error handling
- Testing in Go: Importance of writing tests alongside code
-
Pragmatic Programming with Generative AI (Live Demo) [5 min]
“AI is not a replacement for human intelligence; it’s a tool that enhances our capabilities.” - Unknown
- How generative AI tools (e.g., GitHub Copilot, ChatGPT) can assist in writing Go code
- Potential pitfalls and ethical considerations when using AI in development
-
Code as a Living Entity (with examples) [2 min]
- Refactoring: When and how to refactor Go code
- Go Modules: Benefits of using Go modules for dependency management
-
Collaboration and Team Dynamics [2 min]
- Organize Around Functionality: Importance of cross-functional teams
- Code Reviews: The role of code reviews in maintaining quality
“Code reviews are not about finding bugs, they’re about learning from each other.” - Trisha Gee
- Conclusion [2 min]
- Benefits of adopting a pragmatic approach to Go programming
“Pragmatism is not about theory, it’s about making things work.” - Dave Thomas
Notes
Technical Requirement
- Understanding of Go language and familiarity with software development concepts.
Why am I the right person?
As a passionate Go developer with over nine years of experience, I have embraced pragmatic programming principles in my work. I have successfully implemented various strategies to improve code quality, maintainability, and team collaboration across multiple Go projects. My background in software engineering, combined with my commitment to sharing knowledge, makes me an ideal candidate to present this topic at GopherCon.
Target Audience
- Go developers at all experience levels.
- Engineers interested in improving code quality and maintainability through pragmatic programming.
- Teams and organizations looking to enhance development speed and application stability.
^ back to index
23. Go See It All: Observability for the Rest of Us
Abstract
Learn to maximize the potential of your Golang applications by implementing comprehensive observability. Leverage robust metrics, logs, and distributed tracing to proactively identify and address issues, enhance performance, and ensure high availability.
Description
Effective observability and monitoring are critical for the success of any mission-critical Golang system. In this session, we will dive deep into best practices and practical strategies for implementing comprehensive observability in Go-based backend services and microservices.
Key topics to be covered include:
- Metrics: We will discuss how to identify and instrument the right metrics to gain deep insights into application performance, resource utilization, and overall health. Attendees will learn techniques for collecting, aggregating, and visualizing meaningful metrics using technologies like Prometheus.
- Logging: We will explore the benefits of structured logging in Go and demonstrate how to leverage it to enable powerful filtering, aggregation, and analysis of application logs. Attendees will learn how to integrate their Go applications with centralized logging solutions for enhanced observability.
- Distributed Tracing: We will introduce the principles of distributed tracing and show how to implement it in Go-based systems. Attendees will understand how to use distributed tracing to identify performance bottlenecks and troubleshoot complex issues.
- Observability-Driven Development: Finally, we will explore the concept of observability-driven development, where observability is shifted left to guide architectural decisions and development workflows. Attendees will learn how to leverage observability to build more reliable, performant, and maintainable Go-based systems.
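As a flavour of the metrics portion, a minimal sketch of instrumenting a Go HTTP handler with Prometheus (assuming the prometheus/client_golang library; the metric name and labels are illustrative):

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is an illustrative counter, labelled by path and status.
var requestsTotal = prometheus.NewCounterVec(
    prometheus.CounterOpts{
        Name: "http_requests_total",
        Help: "Total number of HTTP requests handled.",
    },
    []string{"path", "status"},
)

func main() {
    prometheus.MustRegister(requestsTotal)

    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello"))
        requestsTotal.WithLabelValues(r.URL.Path, "200").Inc()
    })

    // Expose the metrics endpoint for Prometheus to scrape.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}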
Notes
As a Go developer with experience in building and optimizing services, my background includes:
- Over five years of professional Go development experience
- Successfully optimizing large-scale Go services, handling millions of requests per day
- Contributing to open-source projects focused on performance optimization.
^ back to index
24. The Evolution of Go: What’s New in the Latest Releases and How It Impacts Your Code
Abstract
Stay ahead of the curve with the latest updates in Go. Explore new features, performance improvements, and tooling enhancements that streamline development. Learn how these changes impact your codebase, and discover practical tips to optimize your projects with the newest Go releases.
Description
Go continues to evolve, with each release introducing new features, optimizations, and improvements to the language and ecosystem. In this session, we will explore the latest changes in Go’s most recent releases, diving into how they impact performance, code readability, and developer productivity.
We will cover:
- Significant language updates, including new syntax, language constructs, and the introduction of key features like generics and more.
- Toolchain improvements, such as faster compilation times, debugging enhancements, and better memory management optimizations.
- A look at new or improved libraries in the Go standard library that make everyday tasks easier and more efficient.
- Practical examples of how to leverage these new features and tools to improve existing codebases or start new projects more efficiently.
Attendees will leave with a solid understanding of Go’s newest capabilities and how to integrate them into their workflows. Whether you’re a seasoned Gopher or just getting started with Go, this session will help you stay on top of the latest trends and make informed decisions about when and how to adopt new features.
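To illustrate the kind of language change covered, a minimal sketch of a generic helper (possible since Go 1.18, using the cmp package added in Go 1.21; the example itself is illustrative):

package main

import (
    "cmp"
    "fmt"
)

// Max returns the larger of two ordered values; one generic function now
// replaces per-type copies for int, float64, string, and so on.
func Max[T cmp.Ordered](a, b T) T {
    if a > b {
        return a
    }
    return b
}

func main() {
    fmt.Println(Max(3, 7))         // 7
    fmt.Println(Max("go", "gold")) // gold
}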
Notes
- Highlight the introduction and impact of generics on Go codebases.
- Discuss performance optimizations in recent releases, including Go’s garbage collection enhancements.
- Include practical code examples showing how new features improve everyday development tasks.
- Provide tips on transitioning to new versions with minimal disruption to ongoing projects.
^ back to index
25. Memory Management: Go's Garbage Collection vs. Rust's Ownership Model
Abstract
Discover the key differences between Go’s garbage collection and Rust’s ownership model. Learn how Go’s automatic memory management contrasts with Rust’s compile-time guarantees, and how each affects performance, safety, and developer experience in modern software development.
Description
Memory management is at the heart of system-level programming, influencing performance, safety, and developer productivity. Go and Rust approach this fundamental challenge in very different ways. Go simplifies memory management with its automatic garbage collector, freeing developers from manual memory tasks. However, this convenience can come at the cost of occasional performance pauses due to garbage collection cycles.
On the other hand, Rust avoids a garbage collector entirely, relying on its ownership model, which enforces memory safety and concurrency guarantees through strict compile-time rules. Rust’s model ensures zero-cost abstractions, meaning that once the code compiles, memory management overhead is minimized, resulting in predictable performance with no runtime surprises. But this comes at the cost of a steeper learning curve and more manual effort during development.
In this session, we will deep dive into both approaches, exploring:
- Go’s garbage collection mechanism, including its latest improvements for reducing latency.
- Rust’s ownership and borrowing system, and how it prevents common memory issues like data races and dangling pointers.
- The trade-offs between performance, safety, and ease of development in each language.
- Practical examples and benchmarks to highlight the strengths and limitations of both models.
By the end, attendees will have a clearer understanding of how to choose between Go and Rust for their projects based on their memory management needs and performance considerations.
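As a small illustration of the Go side, a sketch of observing and tuning GC behaviour from within a program, using only the standard library (the allocation loop is purely illustrative):

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    // Tune how aggressively the collector runs (100 is the default);
    // Go 1.19+ also offers a soft memory limit via debug.SetMemoryLimit.
    debug.SetGCPercent(100)

    // Create some garbage so there is something to collect.
    var sink [][]byte
    for i := 0; i < 100_000; i++ {
        sink = append(sink, make([]byte, 1024))
        if len(sink) > 1000 {
            sink = sink[:0] // drop references so earlier allocations become garbage
        }
    }
    _ = sink
    runtime.GC()

    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    fmt.Printf("heap in use: %d bytes, GC cycles: %d, total pause: %dns\n",
        ms.HeapInuse, ms.NumGC, ms.PauseTotalNs)
}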
Notes
- Focus on practical examples of Go’s Garbage Collection in real-world applications.
- Compare how Rust’s ownership and borrowing system enforces memory safety and eliminates data races.
- Discuss the trade-offs between performance and developer productivity in both models.
- Highlight the latest updates in Go’s GC improvements and Rust’s evolving compiler capabilities.
^ back to index
26. Building Reliable Robotic Systems with Go
Abstract
Robotics demands precision and reliability, and Go is the perfect tool to achieve both. In this talk, I’ll showcase how Go’s concurrency and simplicity can be leveraged to build reliable robotic systems that handle real-time tasks, ensuring robustness and performance in critical environments.
Description
Building reliable robotic systems requires a balance of performance, precision, and scalability—key strengths of Go. In this session, I’ll take you through how Go’s lightweight concurrency model and clear syntax make it an excellent choice for building responsive and reliable robotics applications.
We’ll cover:
- How Go’s goroutines and channels can be used for multitasking in real-time robotic operations.
- Strategies for designing fault-tolerant robotic systems with Go.
- Integrating Go with hardware interfaces and sensor management.
- Real-world examples of using Go to control and coordinate robots in critical environments.
By the end of this talk, attendees will understand how to leverage Go’s strengths to build robotic systems that can be trusted in production settings, handling everything from precision movements to large-scale operations.
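A minimal sketch of the goroutine-and-channel pattern behind such a control loop, with a hypothetical sensor (all names and values are illustrative; real hardware I/O would replace the ticker body):

package main

import (
    "context"
    "fmt"
    "time"
)

// Reading is an illustrative sensor sample.
type Reading struct {
    At    time.Time
    Value float64
}

// readSensor polls a hypothetical sensor on its own goroutine and
// publishes samples on a channel until the context is cancelled.
func readSensor(ctx context.Context, out chan<- Reading) {
    t := time.NewTicker(50 * time.Millisecond)
    defer t.Stop()
    for {
        select {
        case <-ctx.Done():
            return
        case now := <-t.C:
            out <- Reading{At: now, Value: 42.0} // stand-in for real hardware I/O
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()

    readings := make(chan Reading)
    go readSensor(ctx, readings)

    // Control loop: react to each sample as it arrives, shut down cleanly on cancel.
    for {
        select {
        case <-ctx.Done():
            fmt.Println("shutting down cleanly")
            return
        case r := <-readings:
            fmt.Printf("sample %.1f at %s\n", r.Value, r.At.Format(time.StampMilli))
        }
    }
}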
Notes
As a software developer with extensive experience in Go, I have worked on multiple backend and system-level projects where performance and reliability were paramount. In addition, my experience with robotics gives me a unique perspective on how to apply Go’s capabilities to robotics, ensuring systems are built with reliability and efficiency.
This session will provide practical insights for attendees on how Go can be effectively applied to the challenging domain of robotics.
^ back to index
27. Go for Gold: Best Practices for Go Development
Abstract
Unlock the full potential of Go with time-tested best practices that ensure scalable, efficient, and maintainable code. In this talk, I’ll share the gold standard techniques for Go development, learned through hands-on experience, to help you write cleaner, faster, and more resilient Go applications.
Description
As Go continues to rise as a leading language for building highly performant and scalable systems, developers must adopt the best practices to fully utilize its capabilities. In this session, I’ll walk you through the golden rules of Go development, from structuring your codebase to maximizing concurrency and optimizing performance.
Key takeaways from this session include:
- Proven techniques for writing idiomatic Go code that’s easy to maintain.
- Best practices for handling concurrency with goroutines and channels.
- How to structure large Go projects for scalability and team collaboration.
- Real-world examples and lessons learned from implementing Go in production systems.
Whether you’re new to Go or have years of experience, this talk will provide actionable insights to elevate your Go code to the next level. Join me as we dive into what it takes to write Go code that’s not just functional but exceptional.
Notes
I have extensive experience building production-grade systems using Go, and I’ve worked on several large-scale projects where Go’s simplicity and efficiency were paramount. My hands-on experience with Go in critical, high-performance environments gives me a deep understanding of the language’s strengths and the best practices needed to maximize its potential. My aim with this session is to provide practical, actionable advice that attendees can immediately apply to their own Go projects.
^ back to index
28. Go and AI: Integrating Machine Learning into Go-Based Applications
Abstract
Tired of slow, complex machine learning in your Go apps? Learn how to seamlessly integrate powerful AI models into your Go projects. Discover techniques for efficient data handling, model deployment, and real-time predictions. Boost your app’s intelligence and performance with Go and AI!
Description
In a world where AI is transforming industries, Go developers are now empowered to bring machine learning capabilities to their applications without compromising on performance. As an expert in Go, I will guide you through the practical steps to integrate AI into your Go projects, making the most of Go’s concurrency and efficiency.
This session will cover:
- The best libraries and tools for incorporating machine learning into Go.
- Real-world case studies showcasing AI-enhanced Go applications.
- Techniques to optimize performance when running AI models in Go.
- How to bridge the gap between Go’s backend strengths and AI’s data-driven power.
Join me for an actionable, hands-on talk that will help you stay at the forefront of Go development by incorporating AI in a way that’s both practical and scalable. Whether you’re building high-performance systems or exploring AI for the first time, this session will provide the knowledge and tools to make it happen. Don’t miss this opportunity to learn how to supercharge your Go applications with machine learning!
Notes
I have been working with Go for several years and have a deep understanding of its core strengths, especially in building high-performance, scalable systems. My expertise extends into integrating machine learning models into Go applications, an emerging area that I’ve actively explored in real-world projects.
As a speaker, I’ve presented at multiple developer conferences, helping attendees grasp complex topics with clear, actionable insights. This talk is tailored to bridge a significant gap many Go developers face—how to embrace AI without compromising Go’s efficiency. I’m confident that my background, combined with practical examples and hands-on techniques, will deliver value to the attendees of GopherCon Singapore.
I am passionate about this topic and excited to share how Go developers can harness AI for future-ready applications. Thank you for considering my talk!
^ back to index
29. Mastering Concurrency in Go: From Patterns to Production
Abstract
Unlock the full potential of Go’s concurrency model with advanced patterns like pipelines, fan-out/fan-in, and graceful cancellation. This talk offers practical insights and real-world examples to help developers build scalable, efficient systems while avoiding common concurrency pitfalls.
Description
Concurrency in Go is a hallmark feature that allows developers to design systems that can scale effortlessly and handle multiple tasks concurrently. But it’s not enough to simply start goroutines—understanding and applying the right concurrency patterns is essential for writing efficient, bug-free code.
This talk will explore key concurrency patterns in Go, including:
- Pipelines: Learn how to build chains of goroutines where data flows efficiently from one stage to another, with cancellation support to avoid resource leaks.
- Fan-in and Fan-out: Distribute tasks across multiple goroutines and aggregate their results using fan-in, while leveraging fan-out to efficiently parallelize workloads.
- Graceful Cancellation: Implement graceful termination of goroutines using contexts and the select statement, ensuring no goroutines are left hanging.
- Advanced Techniques: Use select statements, buffered/unbuffered channels, and synchronization techniques to orchestrate complex concurrent workflows.
With real-world examples from Go’s own concurrency talks and web resources, as well as additional insights from my personal experience, attendees will leave equipped to apply these patterns in their projects.
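As a taste of the pipeline and cancellation patterns, a minimal sketch (stage names and values are illustrative):

package main

import (
    "context"
    "fmt"
)

// generate emits integers until cancelled; closing the output signals downstream stages.
func generate(ctx context.Context, nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            select {
            case out <- n:
            case <-ctx.Done():
                return
            }
        }
    }()
    return out
}

// square is a pipeline stage: reads from in, writes results to its own channel.
func square(ctx context.Context, in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            select {
            case out <- n * n:
            case <-ctx.Done():
                return
            }
        }
    }()
    return out
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // cancelling unblocks every stage, so no goroutine is leaked

    for v := range square(ctx, generate(ctx, 1, 2, 3, 4)) {
        fmt.Println(v)
    }
}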
Notes
As an experienced Go developer with a deep interest in concurrency, I have extensively studied both the theoretical foundations and practical applications of Go’s concurrency model. This session will focus on how concurrency patterns can be applied to real-world problems. I will showcase code examples from Go’s official resources, as well as share insights from my personal projects that emphasize best practices in concurrent programming.
This talk is highly practical and leverages my extensive experience in distributed systems and Go. I will walk through code examples and real-world scenarios where these patterns have helped build robust systems. The talk will draw from both academic resources and practical experience, and will include live coding demonstrations and interactive examples to illustrate key concepts.
The technical requirements for this session include a projector for code demos and slides.
^ back to index
30. Building Graphical Go apps is Fyne :)
Abstract
Have you ever wondered if you could use Go to build compelling graphical apps across all platforms? If so, then my talk “Building Graphical Go apps is Fyne” is for you.
In this talk, I will show you how to get up and running quickly by building an initial app using barely 10 lines of Go code.
Description
Introduction
Tools for building graphical apps with Go have become popular over the last few years. In fact, Go now powers 2 of the top 10 cross-platform toolkits according to @OSSInsight. Fyne is one of these cross-platform toolkits and it supports coding graphical apps purely with the Go programming language. As a founder of the Fyne project, I see great new apps being created every month. In this presentation, I will show you how easy it is to create production level graphical apps using Fyne and Go - and the tools that accelerate development.
Outline
This talk is an overview of building a GUI app with the Go programming language and an introduction to the Fyne toolkit showing how easy it is to get building. It also covers a summary of some of the great tools that help make platform-agnostic app creation a breeze. We show source code and the setup steps required to get hello world built, along with code for an example markdown editor application. It concludes with a demo of some leading apps to inspire the audience.
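That hello world really is tiny; a minimal sketch of roughly what the talk builds first, assuming the Fyne v2 API (window title and label text are illustrative):

package main

import (
    "fyne.io/fyne/v2/app"
    "fyne.io/fyne/v2/widget"
)

func main() {
    a := app.New()            // create the application
    w := a.NewWindow("Hello") // one window, titled "Hello"
    w.SetContent(widget.NewLabel("Hello, GopherCon!"))
    w.ShowAndRun()            // show the window and run the event loop
}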
It is broadly split into the following sections:
- The frustrations of app development
- Go provides a delightful alternative
- It couldn’t be simpler - with Go and Fyne
- Your first app in just minutes
- Going further - toolkit capabilities
- A look at many interesting Fyne apps
Conclusion
Tools for building graphical apps with Go have become popular over the last few years. In this presentation, you saw that you can build desktop or mobile apps with Fyne and Go with minimal code. As more people decide to use Fyne and Go to build their graphical apps, this technology will only get better and more capable. The big advantage is that you can use one programming language to develop your entire stack.
Consider using Go and Fyne for your next app project, see more apps at https://apps.fyne.io.
Key Takeaways
- Go is a great language for building universal graphical apps.
- With the Fyne library anyone can easily code an app using Go and install it to all their devices.
- The supporting tools make it easy to manage your user interface code.
Notes
I am passionate about spreading the word about how easy it is to build graphical apps with the Go language. Because the toolkit is so easy to get started with, we should be encouraging more developers to try it out and be excited by the results. I founded the Fyne project and have worked on it for over 6 years now, so I can answer any questions that arise as well as deliver a passionate presentation on the topic.
Ideally this is presented from my laptop which runs Fyne for the full desktop, resulting in a great “wow” factor at the end of the talk.
^ back to index
31. Resizing Animated GIFs Without CGO or Third-Party Libraries
Abstract
GIF animation icons enable users to express themselves better.
Animated GIF resizing is necessary to support them.
Popular methods to implement it are using CLIs or C library wrappers.
However, those approaches introduce several issues.
Stop using them by learning how to implement GIF resizing with Pure Go!
Description
GIF animation icons enable users to express themselves much better. Resizing animated GIFs is necessary to support them.
What would you use when implementing GIF resizing in your Go web application? ImageMagick as we did? Another CLI? Or a C library wrapper?
There is little knowledge of GIF resizing in Pure Go. There are articles about basic image resizing, but they don’t cover animated GIF-specific processing. Therefore, popular methods to implement it are using CLIs or C library wrappers.
However, those approaches introduce several issues that are hard to ignore. Calling CLIs directly from the code raises security concerns. Image-processing CLIs and C libraries are typically large files that fit poorly into container environments. They also add performance overhead that affects app latency.
Stop using them by implementing GIF resizing with pure Go! It requires some specific knowledge, but it’s so simple that we can implement it with only standard libraries and sub-repositories (golang.org/x libraries)!
Learn how to implement GIF resizing in pure Go step by step, along with the pitfalls you may face during the implementation.
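As a taste of the first step, a minimal sketch of frame-by-frame resizing with only the standard library and golang.org/x/image (it deliberately ignores frame optimization and disposal, which the later steps handle):

package main

import (
    "image"
    "image/gif"
    "io"

    xdraw "golang.org/x/image/draw"
)

// resizeGIF scales every frame of an animated GIF independently.
// Frame optimization, black-noise handling, and disposal are not covered here.
func resizeGIF(r io.Reader, w io.Writer, width, height int) error {
    src, err := gif.DecodeAll(r)
    if err != nil {
        return err
    }
    for i, frame := range src.Image {
        // Scale into a paletted image using the frame's own palette,
        // so the result can be re-encoded as a GIF frame.
        dst := image.NewPaletted(image.Rect(0, 0, width, height), frame.Palette)
        xdraw.NearestNeighbor.Scale(dst, dst.Bounds(), frame, frame.Bounds(), xdraw.Src, nil)
        src.Image[i] = dst
    }
    src.Config.Width, src.Config.Height = width, height
    return gif.EncodeAll(w, src)
}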
Agenda:
- Why we needed to implement GIF resizing in pure Go (5 min)
- Self Introduction
- Introducing traQ (https://github.com/traPtitech/traQ), a web application I had maintained
- Explain why we wanted to get rid of ImageMagick
- Wanted to replace base image with distroless due to security concerns, but blocked by ImageMagick library installation
- Docker image size was large (over 600MiB), but we don’t utilize most of the library
- (1) Basic resizing implementation (5 min)
- (2) Handling frame optimization (5 min)
- (3) Handling black glitchy noise (10 min)
- (4) Handling frame disposal (10 min)
- GIF frame disposal specification
- None: keep stacked frame to the next frame
- Background: fill the canvas with background color after the current frame rendering
- Previous: put back canvas to the previous condition
- Handling frame disposal in Go
- After-resize processing for the temporary canvas
- Implementation: https://github.com/logica0419/proposals/blob/main/2024/GopherCon/step4.go
- Conclusion (10 min)
- Summary of the implementation
- A little explanation of parallelization of resizing
- The resizing step is the slowest
- Frame stacking can’t be parallelized (must be in order)
- Wrapping only resizing step with goroutine
- Effect on our app
- Docker image shrank in size (by 40-50%)
- Able to move on to distroless
- Performance improvement (3x faster at the same CPU usage)
- Able to select the resizing kernel
- Introducing resigif (https://github.com/logica0419/resigif) library as a result of work.
Notes
Go doesn’t have a rich image-processing ecosystem, so I believe this proposal is unique.
Accepting it will add diversity and variety to the conference’s topics.
I have created the GIF resizing library (https://github.com/logica0419/resigif) using the methods I’ll introduce in the session.
Also, the methods in the session are currently used in traQ (https://github.com/traPtitech/traQ), a messaging service for a university tech club.
For these reasons, I’m the best person to speak about animated GIF resizing in pure Go.
^ back to index
32. Profiling WebAssembly with pprof and wzprof
Abstract
Are you ready to take your Go applications to the next level with WebAssembly? Join us for an exciting session where we’ll explore how to optimize Go-powered web applications using the powerful profiling tools pprof and wzprof.
Description
Embark on a journey to supercharge your Go applications targeting WebAssembly by harnessing the formidable profiling capabilities of pprof and wzprof. Join us for an illuminating session where we’ll unravel the intricacies of optimizing Go-powered web applications for maximum performance.
Central to our discussion will be the powerhouse duo of pprof and wzprof. You’ll discover how pprof, Go’s venerable profiling tool, forms the foundation of our optimization journey, providing deep insights into CPU and memory usage. Complementing pprof’s prowess, wzprof emerges as a specialized profiler tailored for WebAssembly, offering streamlined performance analysis during the execution of WebAssembly modules.
Through practical demonstrations and real-world examples, we’ll showcase the symbiotic relationship between pprof and wzprof, illuminating how they work in tandem to identify and resolve performance bottlenecks in your WebAssembly applications. From CPU-bound computations to memory management intricacies, you’ll gain actionable insights into optimizing your Go code for unparalleled efficiency and speed.
Whether you’re a seasoned Go developer or embarking on your WebAssembly journey, this talk promises to equip you with the tools and techniques needed to unlock the full potential of your applications.
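On the pprof side, a conventional CPU profile capture in plain Go looks roughly like this (a minimal sketch; capturing profiles for a Wasm module happens at the runtime level, which is where wzprof comes in, and the work function here is just a stand-in):

package main

import (
    "log"
    "os"
    "runtime/pprof"
)

func main() {
    f, err := os.Create("cpu.pprof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Profile everything between Start and Stop; inspect later with `go tool pprof cpu.pprof`.
    if err := pprof.StartCPUProfile(f); err != nil {
        log.Fatal(err)
    }
    defer pprof.StopCPUProfile()

    work()
}

// work is a stand-in for the CPU-bound code being profiled.
func work() {
    sum := 0
    for i := 0; i < 50_000_000; i++ {
        sum += i % 7
    }
    _ = sum
}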
Notes
As an experienced developer who has worked extensively with Go and WebAssembly (Wasm), I am well-versed in the subject matter. My background includes practical hands-on experience with Go’s WebAssembly support, as well as a previous talk on gRPC-Gateway at GopherCon Europe 2022, WASM I/O 2024, and KubeCon Europe 2024.
My knowledge of Go and WebAssembly, combined with my communication skills, make me the ideal person to speak on this subject. I can effectively convey the concepts, demonstrate practical examples, and provide insights into leveraging the WebAssembly System Interface (WASI) in Go.
^ back to index
33. Generate RESTful services using gRPC-Gateway
Abstract
This talk will be about writing REST services using gRPC-Gateway. I will give an intuitive yet rigorous explanation of gRPC-Gateway and its usage.
I will demonstrate a simple Hello World gRPC service exposed through gRPC-Gateway and build a deeper understanding of Protobuf, REST, gRPC, and related concepts.
Description
I will start the talk with REST and its disadvantages; after that, I will introduce gRPC and its advantages and disadvantages relative to the use cases of both REST and gRPC. Then I will move on to gRPC-Gateway: why we need it, how to use it, and what problems it solves. Finally, I will demonstrate a simple Hello World REST service using gRPC-Gateway.
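A minimal sketch of wiring the gateway in Go (pb.RegisterGreeterHandlerFromEndpoint stands in for the handler the gRPC-Gateway plugin would generate for a hypothetical Greeter service; the import path and addresses are illustrative):

package main

import (
    "context"
    "log"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    pb "example.com/hello/gen" // hypothetical generated package
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    // The gateway translates incoming REST/JSON requests into gRPC calls
    // against the backend listening on :50051.
    mux := runtime.NewServeMux()
    opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
    if err := pb.RegisterGreeterHandlerFromEndpoint(ctx, mux, "localhost:50051", opts); err != nil {
        log.Fatal(err)
    }

    log.Fatal(http.ListenAndServe(":8080", mux))
}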
Notes
As an experienced developer with extensive work in Go and gRPC, I have a deep understanding of the subject matter. My background includes hands-on experience with gRPC-Gateway and related technologies, and I have previously spoken on gRPC-Gateway at events like GopherCon Europe 2022, Wasmio, KubeCon, and the gRPC Community Meetup.
My expertise in Go and APIs, coupled with strong communication skills, positions me as an ideal speaker on this topic. I can effectively convey concepts, demonstrate practical examples, and offer insights into leveraging RPC and Go.
^ back to index
34. Empowering Go with WebAssembly System Interface (WASI) Unleashed
Abstract
Discover the future of cloud-native development with Go and the WebAssembly System Interface (WASI). Join our session to explore the power of Go’s new WASI support. Learn how to compile once and run anywhere, unlocking limitless possibilities for portable, secure, and high-performance applications.
Description
The WebAssembly System Interface (WASI) is gaining popularity as a compile-once-run-anywhere target for developers of cloud-native applications. WASI is a system interface that provides a standardized way for WebAssembly modules to interact with the underlying system, regardless of the specific operating system or architecture.
WASI greatly improved interoperability in the WebAssembly ecosystem. Still, its use cases have been focused on basic OS integration, such as reading environment variables or interacting with file systems.
Go 1.21 added a new port named wasip1 (short for WASI preview 1), enabling Go developers to target server-side WebAssembly runtimes implementing WASI, such as Wasmtime, WasmEdge, or Wazero. Along with this addition to the Go toolchain, solutions have also emerged in the ecosystem, bringing full networking capabilities to Go applications compiled to WebAssembly.
This session starts with an introduction to the WebAssembly System Interface and an overview of the support for WASI in the Go toolchain, illustrated by live code examples, and dives into how applications can leverage WASI and networking extensions to build powerful WebAssembly applications with Go.
This talk gives attendees a comprehensive understanding of building and running Go applications with the WebAssembly System Interface (WASI).
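A minimal sketch of the wasip1 workflow (the build command comes from the Go 1.21 port; Wasmtime is just one of the runtimes mentioned above):

// main.go - compiles unchanged to a WASI module.
//
// Build:  GOOS=wasip1 GOARCH=wasm go build -o main.wasm .
// Run:    wasmtime main.wasm   (or any other WASI preview 1 runtime)
package main

import (
    "fmt"
    "os"
)

func main() {
    // Plain stdio and environment access go through the WASI system interface.
    fmt.Println("hello from wasip1")
    fmt.Println("HOME =", os.Getenv("HOME"))
}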
Notes
As an experienced developer who has worked extensively with Go and WebAssembly (Wasm), I am well-versed in the subject matter. My background includes practical hands-on experience with Go’s WebAssembly support, as well as a previous talk on gRPC-Gateway at GopherCon Europe 2022.
My knowledge of Go and WebAssembly, combined with my communication skills, make me the ideal person to speak on this subject. I can effectively convey the concepts, demonstrate practical examples, and provide insights into leveraging the WebAssembly System Interface (WASI) in Go.
^ back to index
35. Testing GenAI applications in Go
Abstract
Answers provided by LLMs are in natural language and non-deterministic, so using current testing methods to verify them is difficult, as those methods are better suited to testing predictable values. However, we already have a tool for understanding non-deterministic answers in natural language: LLMs.
Description
The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences.
The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.
This proposal involves defining detailed validation criteria and using an LLM as an “Evaluator” to determine if the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialised information. By incorporating detailed instructions and examples, an Evaluator can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.
In this session we’ll leverage langchaingo to interact with LLMs, and Testcontainers Go to provision the runtime dependencies to use RAG.
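A rough sketch of the evaluator idea with langchaingo (assuming its llms package API; the model choice, criteria, and PASS/FAIL prompt are illustrative):

package main

import (
    "context"
    "fmt"
    "log"
    "strings"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/openai"
)

func main() {
    ctx := context.Background()
    llm, err := openai.New() // evaluator model; any llms.Model would do
    if err != nil {
        log.Fatal(err)
    }

    answer := "Go 1.21 added the wasip1 port."
    criteria := "The answer must state which Go release introduced WASI support."

    // Ask the evaluator LLM to judge the non-deterministic answer against
    // explicit criteria, replying with a verdict the test can assert on.
    prompt := fmt.Sprintf(
        "Criteria: %s\nAnswer: %s\nReply with PASS or FAIL followed by a one-line justification.",
        criteria, answer)

    verdict, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
    if err != nil {
        log.Fatal(err)
    }
    if !strings.HasPrefix(verdict, "PASS") {
        log.Fatalf("validation failed: %s", verdict)
    }
    fmt.Println("validated:", verdict)
}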
Notes
^ back to index
36. Go Big on Benchmarks
Abstract
I will cover techniques for benchmarking Go services, identifying performance bottlenecks, and optimizing code effectively. This talk will provide a guide to measuring and improving the performance of Go applications.
Description
Performance is crucial for any Go service in production. This talk dives deep into the art and science of benchmarking Go services. We’ll explore the practices, tools, and techniques to accurately measure and improve your service’s performance.
Key topics include:
- Introduction to Benchmarking in Go: Understanding the importance of benchmarking
- Setting up Telemetry to gather benchmark data
- Writing Effective Benchmarks
- Interpreting benchmark results and identifying performance bottlenecks
- Optimizing Go Services: Practical tips for improving performance based on benchmark data.
- Common pitfalls and how to avoid them
- Integrating benchmarking into your development workflow and CI/CD pipeline
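As a starting point, a minimal sketch of the kind of benchmark the talk builds on (standard testing package; the function under test is illustrative):

package concat_test

import (
    "strings"
    "testing"
)

// join is an illustrative function under test.
func join(parts []string) string {
    var b strings.Builder
    for _, p := range parts {
        b.WriteString(p)
    }
    return b.String()
}

func BenchmarkJoin(b *testing.B) {
    parts := []string{"go", "big", "on", "benchmarks"}
    b.ReportAllocs() // track allocations per operation alongside ns/op
    for i := 0; i < b.N; i++ {
        _ = join(parts)
    }
}

// Run with: go test -bench=. -benchmem
// Compare runs with benchstat to see whether a change is statistically significant.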
Notes
As a Go developer with experience in building and optimizing services, my background includes:
- Over 5 years of professional Go development experience
- Successfully optimizing large-scale Go services handling millions of requests per day
- Contributing to open-source Go projects focused on performance optimization
^ back to index
37. PGO - Should you use it and, if so, how?
Abstract
PGO (profile guided optimization) is enabled by default since Go 1.21, but let’s find out if you really need it and if it’s worth the effort. I’ll look at how it works internally and some pitfalls but most importantly how to generate good profiles.
Description
Abstract
PGO (profile guided optimization) is enabled by default in Go 1.21. It is a technique that allows the compiler to generate better code based on knowledge of how the code actually runs.
This talk is mainly about how to gather the best profiles. You don’t have to have studied compiler construction, but you will learn a little (just enough) about how PGO works in order to get the most out of it.
Introduction
At first I was unsure about PGO. I was dubious that it would be useful, but having tried it I have found you can get noticeable improvements in performance.
You are probably aware that you don’t have to make any code changes to use PGO. However, you do have to generate the right profiles to get the benefits. You have to know how to update the profiles as your code changes to continue to get the benefits.
Before I go on, I should point out that I am not an expert on PGO. I simply have an interest in it, and have done a lot of reading and experimenting with it.
Outline
Profile Guided Optimization
When generating code a compiler has to make lots of decisions based on trade-offs. These decisions are often hard-coded, perhaps arbitrarily (i.e. an educated guess) or even based on probabilities from analysis of lots of supposedly typical software.
PGO provides information to the compiler to make more informed decisions about these trade-offs. For example, whenever there is a branch (aka jump) in your code, such as in an if statement or at the end of a for loop, the compiler generates code based on which outcome it has decided is more likely.
Did you know that the Go compiler is “optimized” to assume that an error return value is more likely to be nil than not? This is almost always the case, but consider the following function that checks if a password has been cracked. In this code the most common case is actually that CompareHashAndPassword() does not return nil.
func CheckPasswordCracked(hash []byte) bool {
    for _, pwd := range CrackedPasswords {
        if err := CompareHashAndPassword(hash, pwd); err == nil {
            return true
        }
    }
    return false
}
In other words an error is the most likely return value and the if will result in an incorrectly optimized JUMP/BRANCH instruction in the assembly code. (Of course, the exact details depend on the target CPU.)
This is a simple example that is not really indicative of the power of PGO. In fact, Go’s PGO does not (yet) support branch counts.
Currently the main types of optimizations are:
- identifying and inlining “hot” functions that would not be otherwise inlined
- move data around to improve cache hits
- reposition code (functions) to improve instruction cache
How to use PGO - profiles
- understand how your code runs in production
- for example, I am working on a project which has daily, weekly and even yearly cycles
- take representative profile sample(s) based on the above
- profiling small functions is not useful - PGO looks at bigger picture
DEMO: simple demo of taking a profile
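Roughly what that demo covers (a sketch: exposing pprof in the service, pulling a CPU profile during representative load, and feeding it back into the build; addresses and file names are illustrative):

// In the service: expose the standard pprof endpoints on a loopback listener.
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers
)

func main() {
    go func() {
        log.Println(http.ListenAndServe("127.0.0.1:6060", nil))
    }()
    // ... the real service runs here ...
    select {}
}

// On the side, during representative (ideally heavy but normal) load:
//
//   curl -o cpu.pprof "http://127.0.0.1:6060/debug/pprof/profile?seconds=60"
//
// Profiles from different load scenarios can be merged:
//
//   go tool pprof -proto load1.pprof load2.pprof > default.pgo
//
// With default.pgo in the main package directory, Go 1.21+ picks it up
// automatically (-pgo=auto is the default) on the next `go build`.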
Scenario:
Here I describe some experiences with using PGO on a large production server.
The first thing we found was that using a representative profile (as recommended), or even merging several representative profiles, produced a negligible effect and even seemed to reduce performance during heavy load.
Next we identified periods of heavy load and concentrated on using profiles gathered at those times. This produced much better results when the PGO enabled code ran under heavy load.
However, there are two sources of load that this server must deal with:
- large number of messages coming in that are processed and used to update internal data structures
- large amount of use by users (currently not often experienced but easily simulated using K6)
We profiled using both types of loads separately but found that PGO with one profile increased performance in one area but was detrimental in the other. Eventually, we determined that merging these profiles produced a PGO-enabled executable that gave improvements under all types of load.
We also found it useful to disable some parts of the code for profiling. For example, we have “cleanup” code that attempts to run at quiet times. We also tried disabling garbage collection while profiling but this had no effect.
More Tips
- usual profiling tips, e.g. make sure nothing else is running
- some types of load may not be easily or consistently reproducible - we used load testing tools for this
Code Changes
Unfortunately, your shiny new profiles will degrade over time. As the source code changes the profiles become less accurate, so to continue to get the performance benefits you need to regularly update the profiles based on the latest code.
It may be possible to automate this as part of a “full build” process. To do this it might be necessary to generate some or all of your profiles using test data and/or load testing tools such as K6.
Things to Look out for
One of the problems that PGO used to suffer from, and may still manifest, is called “iterative instability”. This is caused by a type of feedback loop where a PGO-optimized executable is used to generate the next profile but the optimization itself disables subsequent detection of the optimization opportunity.
One way this occurs is that a function is inlined by PGO but then not inlined the next time, before being inlined again the time after, and so on. These sorts of “oscillating” problems have been worked on and hopefully fixed, but it’s best to keep a watch for them.
Conclusion
PGO is a useful tool for improving the performance of your software. You don’t even need to change your code. But it is important that you understand how to generate the profile(s) used to drive the optimization.
Unless you can accurately simulate the production environment, profiles should be taken on code running in production.
To understand the best time to generate profiles you need to understand how the software behaves over time, especially peak times. For server software there may be daily, weekly, even yearly cycles.
Once you understand this you can choose the best time(s) to take profiles. The best results are produced by generating profiles when the code is being used heavily (but normally). Also remember to disable any background tasks (e.g. the cleanups we saw above) that may distort the results.
If the software has different load scenarios you need to generate profiles for all these scenarios and merge them together.
Finally, you need to regularly update the profiles used to guide the optimizations. Code changes mean that the original profiles will become out of date and less useful. Even just changing the name of a function will mean that it is no longer inlined.
PGO works well, and in the future even more will be added (such as branch optimizations). If you start using it now you should see immediate improvements and be set to reap the benefits of future additions.
Notes
Criteria
Relevance
This is a fairly new topic of relevance to gophers who want to improve code performance
Clarity
The talk focusses on how to use PGO without getting distracted by details of how it works
I’ll add some graphs to make things clear.
Correctness
The discussion will be backed up with example code and tables of (reproducible) results.
Achievability
The talk could possibly go under or over time. If under, I could add more about the problems I encountered. If over, I could cut the discussion on iterative instability.
Impact
The talk will emphasize some little understood but important aspects of creating profiles.
The audience will come away with a better understanding of how to successfully use PGO.
Talk Timing (minutes)
3: introduction
2: explanation of PGO
1: explanation of inlining
2: explanation of instruction cache
2: understanding how your code runs in production
1: measure performance over time looking for cycles
3: take profiles under well-understood load situations
2: merging profiles from different load scenarios
1: disabling extraneous tasks for better profiles
2: using load testing tools for consistent profiles
1: conclusion
20: total
^ back to index
38. Starting and stopping things
Abstract
In Go it’s trivial to start a goroutine, but, as I’m fond of saying, do you know when that goroutine will stop? How will you stop it? How will you know when it has stopped? And so on. In this talk I want to present a solution that I find myself rewriting often. It’s small enough that it fits on a slide.
Description
This is a talk about a solution I find myself rewriting on pretty much every backend Go service I work on. Even in a microservice shop the requirement to set up a bunch of parallel goroutines is ubiquitous. For example, the main external web service, plus one listening on loopback for metrics and debug. Most monitoring clients run a background process to batch metrics. Health checks, database pools, etc, all need to start correctly for your service to run and must trigger a shutdown of your service if one fails.
The group type, documented below, is an idea which has evolved since the Juju days a decade ago of William Reade’s Manifold, through Peter Bourgon’s “How I do things” talk, and through many of my own iterations. Similar, but not identical to, errgroup.Group (golang.org/x/sync/errgroup), the group type allows users to compose goroutines with predictable lifetimes; all goroutines start successfully, or none do; when one goroutine exits, all others are shut down automatically. This powerful pattern allows reliable service startup and shutdown behaviour.
Notes
Here’s the code
// A group manages the lifetime of a set of goroutines from a common context.
// The first goroutine in the group to return will cause the context to be canceled,
// terminating the remaining goroutines.
type group struct {
    // ctx is the context passed to all goroutines in the group.
    ctx    context.Context
    cancel context.CancelFunc

    done    sync.WaitGroup
    errOnce sync.Once
    err     error
}

// newGroup returns a new group using the given context.
func newGroup(ctx context.Context) *group {
    ctx, cancel := context.WithCancel(ctx)
    return &group{
        ctx:    ctx,
        cancel: cancel,
    }
}

// add adds a new goroutine to the group. The goroutine should exit when the context
// passed to it is canceled.
func (g *group) add(fn func(context.Context) error) {
    g.done.Add(1)
    go func() {
        defer g.done.Done()
        defer g.cancel()
        if err := fn(g.ctx); err != nil {
            g.errOnce.Do(func() { g.err = err })
        }
    }()
}

// wait waits for all goroutines in the group to exit. If any of the goroutines
// fail with an error, wait will return the first error.
func (g *group) wait() error {
    g.done.Wait()
    g.errOnce.Do(func() {
        // noop, required to synchronise on the errOnce mutex.
    })
    return g.err
}
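And a usage sketch on top of the group type above (the two servers and the serve helper are illustrative; when either exits, the shared context shuts the other down):

package main

import (
    "context"
    "log"
    "net/http"
)

// serve runs an HTTP server until it fails or the context is cancelled.
func serve(ctx context.Context, addr string, h http.Handler) error {
    srv := &http.Server{Addr: addr, Handler: h}
    errc := make(chan error, 1)
    go func() { errc <- srv.ListenAndServe() }()
    select {
    case err := <-errc:
        return err
    case <-ctx.Done():
        return srv.Shutdown(context.Background())
    }
}

func main() {
    g := newGroup(context.Background())
    g.add(func(ctx context.Context) error {
        return serve(ctx, ":8080", http.DefaultServeMux) // public API
    })
    g.add(func(ctx context.Context) error {
        return serve(ctx, "127.0.0.1:6060", http.DefaultServeMux) // loopback metrics/debug
    })
    if err := g.wait(); err != nil {
        log.Fatal(err)
    }
}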
^ back to index
39. Go for the Edge: Building Ultra-Low Latency Applications with Go and WebAssembly
Abstract
Unlock the full potential of edge computing with Go! Learn how to build ultra-low latency applications using Go and WebAssembly, optimized for the edge. This session will show you how to create a responsive, real-time analytics dashboard that runs directly in the browser, close to your users.
Description
Edge computing is revolutionizing the way we develop applications by bringing processing closer to the user, significantly reducing latency and enhancing responsiveness. In this talk, we will explore the powerful combination of Go and WebAssembly (Wasm) for building ultra-low latency applications that run efficiently at the edge. This session will provide a comprehensive guide on how to harness these technologies to create fast, scalable, and responsive applications that run directly in the browser.
Our focus will be on developing a real-time analytics dashboard using Go and WebAssembly. This project showcases how Go’s efficiency and concurrency features, when combined with WebAssembly’s ability to run code in the browser, can be leveraged to build applications that process and visualize data in real time. By performing data processing tasks client-side, we can minimize latency and optimize the user experience, making this approach ideal for edge computing scenarios.
We will start with an introduction to edge computing and WebAssembly, covering the fundamentals and highlighting their benefits for creating low-latency applications. We will then delve into why Go is an excellent choice for edge computing, particularly when targeting WebAssembly. The talk will explore Go’s performance advantages, ease of use, and capability to compile to WebAssembly, making it a versatile tool for developing applications that run close to the user. The core of the session will be a step-by-step walkthrough of building the real-time analytics dashboard. This includes setting up a development environment for Go and WebAssembly, writing Go code to handle data processing and visualization tasks, and deploying the application to run directly in the browser. We will discuss the key tools and libraries used in the project, such as Go’s syscall/js package for interacting with the JavaScript environment, the standard Go toolchain (GOOS=js GOARCH=wasm) for compiling Go code to WebAssembly, and Chart.js for creating interactive data visualizations.
To bring the concepts to life, we will have a live demonstration of the analytics dashboard in action. Attendees will see how the dashboard processes data and updates visualizations in real time, demonstrating the powerful synergy between Go and WebAssembly. Throughout the session, we will highlight best practices for optimizing Go code for edge environments, ensuring that applications remain fast, efficient, and highly responsive. By the end of this talk, attendees will have a solid understanding of how to build and deploy Go applications for the edge using WebAssembly. They will leave with practical knowledge on optimizing performance and reducing latency in edge computing environments, ready to apply these techniques to their own projects.
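A minimal sketch of the syscall/js interaction involved (built with GOOS=js GOARCH=wasm; the element ID and the "processing" loop are illustrative stand-ins for the dashboard logic):

//go:build js && wasm

package main

import (
    "fmt"
    "syscall/js"
    "time"
)

func main() {
    doc := js.Global().Get("document")
    out := doc.Call("getElementById", "latest-value") // hypothetical dashboard element

    // Process data client-side and push updates straight into the DOM,
    // keeping the round trip off the network entirely.
    for i := 0; ; i++ {
        out.Set("textContent", fmt.Sprintf("processed %d points", i))
        time.Sleep(time.Second)
    }
}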
Notes
I am an experienced software engineer with a strong background in Go and edge computing, having developed several projects that leverage WebAssembly for client-side processing. This combination of expertise makes me uniquely qualified to present this topic. I will also do live-coding in this session.
^ back to index
40. GPT in Go-Land: Building an AI-powered Narrative Generation Engine Using Go, AWS, and GPT
Abstract
Unveil the power of Go with AWS and ChatGPT in creating an AI-powered storytelling engine, opening possibilities towards entertainment, education and more. Join us for an exciting dive into concurrent programming, architectural decisions, and interactive narratives crafted real-time by AI.
Description
In an increasingly AI-driven world, Go’s simplicity, efficiency, and strong support for concurrent programming make it an ideal language for building robust, scalable applications. This session offers an inspiring dive into the implementation of a unique conversational AI project in Go.
This talk explores the development of a dynamic narrative generation engine using Go, AWS infrastructure, and the linguistic proficiency of OpenAI’s GPT-4. This engine weaves on-the-fly interactive stories based on user input, opening exciting opportunities in entertainment, education, and beyond. Starting with an introduction to the trifecta of Go, AWS, and GPT-4, we proceed to break down the architectural choices and programming practices followed in building this application. We focus on how Go’s capabilities have been leveraged to manage the orchestration between various AWS services, and to handle the interaction with GPT-4.
A significant part of the session includes a deep dive into our Go codebase. We share real-world insights into handling concurrent requests, managing error propagation, maintaining scalability, and writing testable, maintainable code. We also reveal how Go’s advanced features, such as Goroutines and Channels, are instrumental in handling the complex tasks of this narrative engine. To bring the session to life, we demo the narrative generation engine in action, guiding attendees through an engaging, real-time storytelling experience.
Notes
My extensive experience as a software engineer, coupled with my deep understanding of Go, AWS infrastructure, and GPT-4, uniquely positions me to speak on this topic. Having led several projects involving Go for high-performance, scalable applications, I have firsthand knowledge of the language’s capabilities and nuances. Additionally, my role as a contributor to various open-source Go projects and my experience speaking at 50+ conferences and events equip me with the skills to effectively communicate complex technical concepts to a diverse audience.
^ back to index
41. Goroutines as Cognitive Threads: Replicating Human Behavior in Go
Abstract
Humans are capable of doing multiple things at once—thinking, speaking, listening, and acting. In this talk, we’ll replicate these cognitive threads using Go’s goroutines. Learn how to architect a system that performs tasks in parallel, managing memory and communication just like the human brain.
Description
In this talk, we will explore the intersection of Go’s powerful concurrency model and AI-driven systems to replicate human multitasking capabilities. At Callchimp.ai, where I am responsible for building and maintaining AI systems that simulate call center agents, we have developed an innovative approach using Go’s goroutines. This session will walk you through how we integrate advanced technologies like Gemini and Firebase Genkit Golang to create a humanoid system that mimics human agents’ cognitive functions, such as listening, thinking, and speaking, all while handling multiple tasks concurrently.
Key Points:
-
Introduction to Multitasking in AI:
- Overview of how human agents manage multiple tasks simultaneously
- Challenges in replicating human-like multitasking in AI systems
-
Why Go for Human-Like Multitasking:
- Advantages of using Go’s goroutines for parallel task execution
- How Go’s concurrency model aligns with the requirements of simulating human cognitive processes
-
Integration of LLMs and Firebase Genkit Golang:
- Introduction to LLMs and Firebase Genkit Golang and their roles in the system.
-
Utilizing Google Search for Grounding and Context Caching:
- How Google Search is leveraged for grounding the AI in real-world knowledge, providing context to conversations.
- Implementing context caching to maintain coherent and relevant responses over time.
- A brief discussion on alternatives to Google Search, for the privacy-focused
-
Architecting the Humanoid System:
- Detailed architecture of the multitasking humanoid system, illustrating how different components interact.
- Use of channels and goroutines to mimic human cognitive threads and manage task synchronization.
-
Example of the Call Center Agents at Callchimp.ai:
- Lessons learned and best practices from developing this system at Callchimp.ai.
-
Challenges and Future Directions:
- Addressing the challenges of real-time processing, synchronization, and error handling in a highly concurrent system.
By the end of this talk, attendees will gain insights into creating complex, concurrent systems in Go that can mimic human multitasking. They will learn practical techniques for integrating advanced AI technologies and effectively managing task orchestration in real-world applications like call center simulations.
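A minimal sketch of the “cognitive threads” wiring (all names are illustrative; in the real system, speech-to-text, the LLM call, and text-to-speech plug into these goroutines):

package main

import (
    "context"
    "fmt"
    "strings"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    heard := make(chan string)   // listener -> thinker
    replies := make(chan string) // thinker -> speaker

    // "Listening" thread: stand-in for speech-to-text.
    go func() {
        for _, utterance := range []string{"hello", "what are your opening hours"} {
            select {
            case heard <- utterance:
            case <-ctx.Done():
                return
            }
        }
    }()

    // "Thinking" thread: stand-in for the LLM call, with its own context/memory.
    go func() {
        for {
            select {
            case u := <-heard:
                select {
                case replies <- "you said: " + strings.ToUpper(u):
                case <-ctx.Done():
                    return
                }
            case <-ctx.Done():
                return
            }
        }
    }()

    // "Speaking" thread: stand-in for text-to-speech, runs until shutdown.
    for {
        select {
        case r := <-replies:
            fmt.Println(r)
        case <-ctx.Done():
            return
        }
    }
}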
Notes
I am well-suited to deliver this talk due to my extensive experience in building AI systems that replicate human multitasking, particularly using Go. At Callchimp.ai, I lead the development of AI solutions that simulate call center agents, requiring complex task orchestration and real-time decision-making, which are key elements of this talk. As a Google Developer Expert in Machine Learning and Google Cloud Platform, I have deep expertise in integrating advanced technologies like AI Agents and Firebase Genkit Golang, ensuring I bring both technical depth and practical insights to the discussion.
My background spans over a decade in programming and AI, during which I have authored two books on building deep-learning-powered applications for the web and mobile devices.
^ back to index
42. Delightful integration tests in Go applications
Abstract
Streamline your integration tests with Dockerized services using Testcontainers. This session covers how to programmatically manage databases, queues and more directly from your test code. Ensure consistent environments in both local development and CI pipelines, all without manual config hassles.
Description
Dockerized services are an excellent tool for creating repeatable, isolated environments ideal for integration tests. In this session, we’ll look at the Testcontainers libraries, which provide a flexible and intuitive API for programmatically controlling the lifecycle of your service dependencies in Docker containers.
Running databases, Kafka, Elasticsearch, and even cloud technologies, straight from your test code ensures environment config is always up-to-date and consistent during local development and in CI pipelines.
You’ll learn everything necessary to start adding powerful integration tests to your codebase without the headache of managing external service dependencies manually!
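A minimal sketch of spinning up a throwaway Postgres from a test (assuming the testcontainers-go generic container API; the image, credentials, and wait strategy are illustrative):

package db_test

import (
    "context"
    "testing"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
)

func TestWithPostgres(t *testing.T) {
    ctx := context.Background()

    ctr, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: testcontainers.ContainerRequest{
            Image:        "postgres:16-alpine",
            Env:          map[string]string{"POSTGRES_PASSWORD": "secret"},
            ExposedPorts: []string{"5432/tcp"},
            WaitingFor:   wait.ForListeningPort("5432/tcp"),
        },
        Started: true,
    })
    if err != nil {
        t.Fatal(err)
    }
    t.Cleanup(func() { _ = ctr.Terminate(ctx) })

    host, _ := ctr.Host(ctx)
    port, _ := ctr.MappedPort(ctx, "5432")
    t.Logf("postgres is ready at %s:%s", host, port.Port())
    // ... connect with database/sql and run the integration test ...
}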
Notes
I’m the core maintainer of Testcontainers for Go.
^ back to index
43. Is Go a Good Language for Building a Compiler?
Abstract
This session explores creating an open-source compiler in Go, covering syntax compilation, internal optimizations, and Profile-Guided Optimization (PGO). We’ll also discuss LLVM backend integration and plans for compiler bootstrapping, with a focus on accessibility for those new to compilers.
Description
Session Overview: Building a Compiler with Go
What You’ll Learn
- Creating an Open-Source Compiler in Go
- Step-by-step guide to compiling a new syntax entirely in Go.
- Optimizations
- Internal Optimizations: Techniques for improving the performance of your compiler.
- Profile-Guided Optimization (PGO): An overview of PGO and how to implement it.
- LLVM Backend Integration:
- Advantages: Why integrating with LLVM can be beneficial.
- Key Considerations: Important factors to keep in mind during the integration process.
- Compiler Bootstrapping:
- Plans and strategies for bootstrapping your compiler.
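As a taste of the hands-on portion, here is a minimal hand-rolled lexer sketch of the kind such a walkthrough might begin with. The token set is illustrative, not the actual language covered in the talk.

```go
package lexer

// TokenKind identifies the category of a lexed token.
type TokenKind int

const (
	TokenEOF TokenKind = iota
	TokenIdent
	TokenNumber
	TokenPlus
)

// Token pairs a kind with the source text it was lexed from.
type Token struct {
	Kind TokenKind
	Text string
}

// Lex scans src into a flat token stream for a tiny expression language.
func Lex(src string) []Token {
	var toks []Token
	for i := 0; i < len(src); {
		c := src[i]
		switch {
		case c == ' ' || c == '\t' || c == '\n':
			i++ // skip whitespace
		case c == '+':
			toks = append(toks, Token{TokenPlus, "+"})
			i++
		case c >= '0' && c <= '9':
			j := i
			for j < len(src) && src[j] >= '0' && src[j] <= '9' {
				j++
			}
			toks = append(toks, Token{TokenNumber, src[i:j]})
			i = j
		default:
			j := i
			for j < len(src) && src[j] != ' ' && src[j] != '\t' && src[j] != '\n' && src[j] != '+' {
				j++
			}
			toks = append(toks, Token{TokenIdent, src[i:j]})
			i = j
		}
	}
	return append(toks, Token{TokenEOF, ""})
}
```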
Who Should Attend?
- Compiler Beginners Welcome: This session is designed with beginners in mind, so no prior experience with compiler concepts is necessary.
Notes
^ back to index
44. Linters: Stop Go-ing Insane in Code Reviews
Abstract
We often use linters in our day-to-day development for instantaneous feedback on our code. Few developers venture beyond using linters and start writing them. Journey with me in writing custom linters and learn how these custom linters have made code reviews more pleasant for everybody in the team.
Description
Linters: Stop Go-ing Insane in Code Reviews
GopherCon 2024 — Proposal
Abstract
We often use linters in our day-to-day development for near-instantaneous feedback on our code. Few developers venture beyond using linters and start writing them. In this talk, I tell the story of my own journey in writing custom linters for my team and how these custom linters have made code reviews more pleasant for everybody in the team.
Talk outline
Background
- This year, I found myself becoming a team lead / primary code reviewer for my team.
- It was a fresh team (mostly new joiners) inheriting a substantial codebase spanning several microservices, with one flagship service.
- The flagship service had around 1.2 million lines of code and had been actively developed since 2019.
- 5 years of active development meant that the codebase had legacy patterns that could be updated.
Pain point
- Many common mistakes were already caught by our IDEs (GoLand and gopls with govet), but these static analyses were limited to public (i.e. non-inhouse) libraries.
- Chief among these inhouse libraries were the logging libraries. Grab has had several iterations of logging libraries, with support for both unstructured and structured logging.
- Long story short, I was spending a lot of time drilling the importance of logging and observability practices in code reviews. Some merge requests had more than 10 comments dedicated solely to the proper use of the logging library.
- Code reviews were unpleasant for everybody.
- Problem statement: Automate away the review for trivial stuff.
Motivation
- I decided to write a custom linter for my team. Because: (1) why not?, (2) how hard can it be? (famous last words).
- “Necessity is the mother of invention”. I’m personally fond of saying “The best motivation for any engineer is frustration.”
- I had known about the go/analysis libraries for a while.
- Two resources were helpful, Fatih Arslan’s “Using go/analysis to write a custom linter”, and Akhil Indurti’s “Writing a Static Analyzer for Go Code”.
Some linting rules
- Use logger.Warn instead of logger.Error.
- In Grab, the Error level is reserved for errors that need an on-call engineer’s attention. The Warn level is meant for errors that are already handled.
- My team’s services had the wrong understanding and the error log level became a catch all.
- We wanted to reclaim the Error log level.
- Use the standard library for error handling instead of the internal errors package.
- The internal errors package was very API-centric. It required an error code, a message description, the request ID, and lots of superfluous information.
- We had model objects returning 404 as the error code when something was not found.
- Encourage static messages in structured log calls.
- Key-value pairs in structured log calls obsolete most dynamically constructed log messages.
A note about the linting rules
- The rules may seem odd to outsiders; they were constructed to take advantage of Grab’s observability stack, namely Elasticsearch’s indexing capabilities.
- Linting rules will differ for each team; the key is to understand what code style rules can be automated.
The first linting rule
- The general algorithm for implementing the first linting rule is as follows (a hedged go/analysis sketch follows this list):
- Find all function calls that are named “Error”.
- If the function call has a receiver of the Logger type, report a diagnostic.
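A hedged sketch of what such an analyzer could look like using golang.org/x/tools/go/analysis. The check against a type named Logger stands in for Grab’s in-house logging library, whose actual API is not shown here.

```go
package loglint

import (
	"go/ast"
	"go/types"

	"golang.org/x/tools/go/analysis"
	"golang.org/x/tools/go/analysis/passes/inspect"
	"golang.org/x/tools/go/ast/inspector"
)

var Analyzer = &analysis.Analyzer{
	Name:     "noerrorlevel",
	Doc:      "flags logger.Error calls; the Error level is reserved for on-call alerts",
	Requires: []*analysis.Analyzer{inspect.Analyzer},
	Run:      run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	insp := pass.ResultOf[inspect.Analyzer].(*inspector.Inspector)
	insp.Preorder([]ast.Node{(*ast.CallExpr)(nil)}, func(n ast.Node) {
		call := n.(*ast.CallExpr)
		sel, ok := call.Fun.(*ast.SelectorExpr)
		if !ok || sel.Sel.Name != "Error" {
			return
		}
		// Use the type checker to confirm the receiver is the in-house Logger.
		recv := pass.TypesInfo.TypeOf(sel.X)
		if ptr, ok := recv.(*types.Pointer); ok {
			recv = ptr.Elem()
		}
		if named, ok := recv.(*types.Named); ok && named.Obj().Name() == "Logger" {
			pass.Reportf(call.Pos(), "use logger.Warn instead of logger.Error for handled errors")
		}
	})
	return nil, nil
}
```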
The second linting rule
- This rule is conceptually similar to the first.
- In the first rule, the function calls are usually selector expressions (small aside on what a selector expression is)
- The general algorithm is then:
- Find all function calls.
- If the function call originates from the internal errors package, report a diagnostic.
The third linting rule
- We can reuse bits from the first linting rule here.
- We already know how to identify the logger type and its function calls.
- The general algorithm is therefore:
- Find all function calls of the logger type.
- If the message argument of the logger is not a string literal or a constant, report a diagnostic.
Aside: Type checking
- All linting rules rely on the type checker for their heuristics.
- The go/analysis package has the inspect analyzer that collects type checking information.
- The inspect analyzer powers the three linting rules we are presenting.
Great, how do we start using these linting rules?
- There are two ways: integrate it into CI and create a linter.
- No points for guessing which one I went with.
- Grab has a monorepo internally. Since these linting rules were team-specific, we had no good way to add them into the CI pipeline.
- golangci-lint is a great project and has support for defining custom linters.
- The plugin system (really, any Go plugin system) results in a custom golangci-lint binary, which can be a challenge to distribute across the team and use in CI.
Including the custom linters into golangci-lint for IDEs
- golangci-lint has custom plugin support.
- Custom plugin support is compatible with the go/analysis package.
- (Show high level overview of steps)
- Specify custom linters in .custom-gcl.yml
- Compile custom binary
- Enable golangci-lint in VSCode
- A bash script to compile the custom binary and copy it to where it was needed.
Great, we can lint stuff. What else can we do?
- Analyzers can also specify “suggested edits”. These suggested edits can be used to automatically fix flagged lines of code.
- Only the first linting rule can benefit from it.
- (Show how to implement it; a hedged sketch follows this list)
- (Demo how it is used)
- Note: golangci-lint doesn’t have support for suggested edits yet, so we were unable to use it directly from IDEs.
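A hedged sketch of attaching a suggested edit (the go/analysis API calls these SuggestedFixes) to the diagnostic from the first rule. Here sel is the logger.Error selector the analyzer found; names and the diagnostic text are illustrative.

```go
package loglint

import (
	"go/ast"

	"golang.org/x/tools/go/analysis"
)

// reportWithFix emits a diagnostic carrying a fix that drivers run with the
// -fix flag (multichecker/singlechecker) can apply automatically: it rewrites
// the selected "Error" identifier to "Warn".
func reportWithFix(pass *analysis.Pass, sel *ast.SelectorExpr) {
	pass.Report(analysis.Diagnostic{
		Pos:     sel.Sel.Pos(),
		End:     sel.Sel.End(),
		Message: "use logger.Warn instead of logger.Error for handled errors",
		SuggestedFixes: []analysis.SuggestedFix{{
			Message: "replace Error with Warn",
			TextEdits: []analysis.TextEdit{{
				Pos:     sel.Sel.Pos(),
				End:     sel.Sel.End(),
				NewText: []byte("Warn"),
			}},
		}},
	})
}
```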
Refactoring at scale
- With suggested edits, we did a mass refactoring of all log levels in the flagship service.
- Since the linter was a go/analysis Analyzer, we leveraged the multichecker (singlechecker works too) package with the ‘-fix’ command line option.
- logger.Error -> logger.Warn in 1 commit by a new member of the team.
- It compiled fine and no production incidents occurred as a result.
Reclaiming sanity
- Code reviews are more pleasant now.
- Little to no comments about the linted rules.
- Code reviews comments focused on other matters.
Call to action
- Go through the rules in your team’s style guides.
- Find a public linter for that rule.
- If there isn’t one, consider if writing a custom linter could help reclaim everyone’s sanity.
—
Future and ongoing work in Grab
Defining more linting rules
- Ensure that all logs at logger.Warn and above have the error tag.
- We could possibly start exploring the SSA analyzer for this.
- The challenge here is figuring out whether a logger.With(tags…) call or a logger.Warn(…, tags…) has the error tag in it.
Filtering and bringing relevant lint issues to users.
- By processing lint results and diff files from MRs, we can filter the lint results and surface only the issues directly related to the current change, with the goal of preventing new lint issues from being introduced.
AI-powered lint fixer
- Lint issues have a very explicit description and location; feeding both of these into an LLM will produce a very good result most of the time.
- The remaining cases, where vanilla prompting is not enough, can be improved by providing more context through the AST, more complex prompting techniques, etc.
- As golangci-lint doesn’t have support for suggested edits yet, we are embedding it into a CLI.
Notes
^ back to index
45. Building and Maintaining Large Scale Time Series Database with Go
Abstract
Have you ever wondered how to build and maintain a large-scale time series database written in Go? In this talk we will share the architecture of Grafana Mimir, the time series database for long-term storage of Prometheus data, and how Go helps to achieve its functional and performance requirements.
Description
Grafana Mimir is an open source, horizontally scalable, highly available, multi-tenant time series database (TSDB) for long-term storage for Prometheus. It is written in Go and can be deployed in microservices mode on bare metal, virtual machines, or Kubernetes.
This talk will present the architecture of Mimir and how the use of Golang helps in achieving its functional and performance requirements.
- Architecture and Component Boundaries: This is not a monolith-to-microservices story; Mimir is designed from the ground up so that its components can be deployed independently. Each Mimir component has clearly defined boundaries and can be scaled independently. All Mimir source code lives in the grafana/mimir repository, with different components in different Go packages. Mimir provides meta-monitoring dashboards that can be broken down into per-component monitoring. (7 minutes)
- Communication and Data Management: Mimir ingests time series data through the remote write protocol and exposes query REST APIs that are compatible with Prometheus. It is horizontally scalable, using consistent hashing for sharding and replication, with the memberlist gossip protocol for cluster membership. New time series data is stored on the local file system, while long-term data is moved to object storage. Mimir optionally supports caching to improve performance. (7 minutes)
- How Go Helps: clean syntax, concurrency, strong standard library and existing ecosystem for observability tools. (4 minutes)
- Conclusion (2 minutes)
Notes
I have been part of the Mimir team at Grafana since I joined the company in 2022.
^ back to index
46. Heimdall: Coban’s Go-to control-plane for platform automation
Abstract
Discover Heimdall: The powerhouse behind Coban-UI, enabling seamless data resource management with Go. Dive into its architecture for scalable, efficient automation in Grab’s data streaming ecosystem.
Description
Abstract
Coban, Grab’s premier data streaming platform team, develops and maintains the infrastructure supporting the robust data product marketplace utilized by countless engineering teams. To streamline resource provisioning and management, we developed Coban-UI, a self-service portal empowering users to effortlessly create and manage data resources such as Kafka topics, S3-sink pipelines, Flink pipelines, and Kafka connectors. At the heart of Coban-UI lies Heimdall, the backend service orchestrating the provisioning and management of these resources. This talk will dive deep into Heimdall’s architecture and design, exploring how it leverages Go to provide a scalable, reliable, and efficient solution for resource provisioning in Grab’s complex data streaming ecosystem.
Talk outline
Introduction
- Data is the new oil/gold
- Coban is a platform engineering team comprising App and Infra subteams
- Collectively, we strive to create a NoOps (self-service) data product marketplace
- Provide data of high quality and high freshness ensured by data contracts
- Users can self-serve creation and management of resources like Kafka topics and Flink pipelines through our portal, Coban-UI
- The core service that makes this possible is Heimdall, our backend Go service which integrates with Coban-UI and other team services
- Being a Coban App Engineer, I have spent much of my 3 years at Grab contributing towards the development of Heimdall and bringing it to where it is today
Team Problem statement
- As a platform team, a crucial part of achieving our goals is abstracting ourselves away from the entire user process and experience
- Users should frequently see our platform, and seldom see us
- If users need to approach us, that likely means there is a problem
- In order to abstract us humans away from the process, we needed a service which could automatically handle multiple responsibilities
- Receive, authenticate/authorize, and process user requests
- Communicate request progress/status to users
- Serve desired data, metadata, and metrics to users with low latency
- Orchestrate workflows to provision/manage users’ resources
Heimdall architecture
- Developed in Go
- Reasons for using Go
- Immediate reasons: speed, concurrency, minimalism, vibrant community (we’re at GopherCon)
- Additionally, Grab has a mature Go ecosystem
- Internal grab-kit multi-tool (inspired by go-kit)
- gen: code-generation for scaffolding fully functional Go microservices in seconds with ready-made HTTP/gRPC server, useful middleware, stats, Data Access Objects for DB integration, etc
- entity: auto-gen Data Transfer Object Go structs based on defined protobuf messages
- start: start Go microservice with auto-reload on code changes (similar to nodemon)
- coverage: run test coverage check and provide a pretty coverage report
- Etc…
- Of Go, by Go, for Go
- Various useful common Go packages developed internally (e.g. algo/scheduler, a distributed cronjob scheduler)
- Comprises logically distinct components
- REST API server
- Workflow Orchestrator
- Cronjob scheduler (won’t be covered)
- Fun fact: during very early stages, users used to directly send requests to Heimdall with cURL. i.e., our original UI was the Terminal
Quick peek at our GitOps IaC
- We use GitOps and Infrastructure-as-Code with Hashicorp Terraform to provision and store resources on our GitLab repository
- CI pipelines carry out terraform plan and apply commands on committed .tf files
REST API and Frontend Integration
- Coban works tightly with a frontend team, Chroma, in order to create an excellent UI/UX on Coban-UI
- Thanks to grab-kit, REST API development is quick and easy
- Define the RPC service in a .proto file under the pb directory
- Define the required messages in a .proto file under the pb directory
- Run grab-kit gen and wait a few seconds
- The Go service handler interface is updated with a new method corresponding to the defined proto service
- Implement the newly-generated method with business logic
- Heimdall provides API for integrating with Coban-UI frontend, such as
- Endpoints for posting user requests, which triggers the workflow orchestrator to execute a workflow to fulfill the request
- Endpoints for getting resource data and metadata, so that users can view lists of provisioned resources and details like PICs, configs, confidentiality tier, etc
Workflow Orchestration
- Originally, Heimdall’s workflow orchestrator was implemented as a finite state machine
- States and state transition mappings were defined in code
- State executions and state transitions were handled by some dedicated logic in Heimdall
- Consisted of many moving parts spread across different packages
- Implementing new workflows was not simple
- It was impossible to write unit tests for a workflow from start to end state
- However, we subsequently discovered Temporal and migrated over
- Temporal is a workflow orchestration service that delivers durable and visible workflow executions
- Temporal is open source and has a well-developed Go SDK for integrating with Go apps
- Heimdall’s workflows are now implemented using Temporal Workflows and Activities, allowing for durable execution with configurable retries
- Workflow logic is compartmentalized into the same package and file, making development much more straightforward
- With Temporal’s testsuite tooling, it’s easy to write unit tests for workflows and activities in a mocked environment
- One commonly requested workflow is for creating a Kafka topic on one of our Kafka clusters, and it consists of the following steps
- Create a GitLab branch (on our IaC repo)
- Push commits to the new branch, adding 2 files
- main.tf - terraform file for provisioning the actual Kafka topic
- metadata.yml - contains relevant info like resource ownership
- Create an MR for the branch
- Approval stage
- Staging - service accounts auto-approve the MR
- Production - Heimdall notifies requestor through Slack to get reviews and approvals from 2 other teammates
- Loop check the MR for sufficient approvals and check for passing CI pipeline
- Once conditions are fulfilled, merge the MR
- Loop check the merge CI pipeline to ensure terraform apply succeeds
- Notify the user that their Kafka topic has been successfully created and is ready
- Walkthrough of our Temporal Go workflow implementation (a hedged sketch follows)
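A hedged sketch of the shape such a Temporal workflow could take. The request type, activity names, and timeouts are illustrative, not Heimdall’s actual code.

```go
package workflows

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// CreateTopicRequest is an illustrative payload, not Heimdall's actual type.
type CreateTopicRequest struct {
	Cluster string
	Topic   string
}

// CreateKafkaTopicWorkflow sketches the GitOps steps described above.
// Activities are referenced by name so the sketch stays self-contained.
func CreateKafkaTopicWorkflow(ctx workflow.Context, req CreateTopicRequest) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 10 * time.Minute, // Temporal retries failed activities per its retry policy
	})

	var branch string
	if err := workflow.ExecuteActivity(ctx, "CreateGitLabBranch", req).Get(ctx, &branch); err != nil {
		return err
	}
	if err := workflow.ExecuteActivity(ctx, "PushTerraformFiles", req, branch).Get(ctx, nil); err != nil {
		return err
	}

	var mrID int
	if err := workflow.ExecuteActivity(ctx, "OpenMergeRequest", branch).Get(ctx, &mrID); err != nil {
		return err
	}

	// Poll until the MR has enough approvals and a passing CI pipeline.
	for {
		var ready bool
		if err := workflow.ExecuteActivity(ctx, "CheckApprovalsAndPipeline", mrID).Get(ctx, &ready); err != nil {
			return err
		}
		if ready {
			break
		}
		if err := workflow.Sleep(ctx, time.Minute); err != nil {
			return err
		}
	}

	return workflow.ExecuteActivity(ctx, "MergeAndNotify", mrID).Get(ctx, nil)
}
```

In practice the activities would be registered Go functions with their own retry policies; the durable loop and per-step retries are what the finite-state-machine approach made hard to test.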
Notes
This talk requires a basic understanding of Go, and preferably familiarity with Temporal Technologies.
^ back to index
47. Scaling Go Monorepo Workflows with Athens
Abstract
Grab’s 600+ microservice Go monorepo caused performance issues with go commands straining our GitLab server. We implemented a private Go module proxy using Athens, leveraging GOVCS and fallback mode to efficiently cache external/internal modules, reducing resource usage and improving DevEx
Description
Introduction/Background
Grab backend developers share a Go monorepo which contains the codebase for 600+ Go microservices at Grab
At the root of the Go monorepo, we have a single go.mod file to manage our dependencies
The Go monorepo also contains common libraries code (which are Go modules) used by everyone at Grab
Pain point
Frequently run commands like go list and go get are painfully slow for CI & developers
Context: running go list and go get (tip: with -x flag) hits the /list endpoint; hitting the /list endpoint exacts a heavy toll on our Gitlab server
Impact:
End user: frustrated + loss in productivity; disrupted CI, cannot git pull/push, cannot merge MR
Gitlab teams: constantly in fire-fighting mode due to the server load on Gitlab
The Solution: Go Proxy
What is a Go proxy?
A server that caches and serves Go modules from various sources
E.g. the most popular Go module proxy is proxy.golang.org, maintained by the Go team
Use Athens proxy
How do we use it at Grab?
For us, the default behavior of Athens didn’t solve our problems above
Requests from running go list and go get still hit our Gitlab, causing high load on our Gitlab server
Eventually, we had to use the fallback network mode + GOVCS
What is the GOVCS environment variable?
The GOVCS environment variable in Go allows users to control which VCS the go command is allowed to use when downloading modules
We can use this to tell Go not to fetch modules from the Go monorepo endpoint
GOVCS=gitlab.myteksi.net/gophers/go:off
This basically tells the Go toolchain to disable any interaction with our Go monorepo
Putting Athens into fallback mode
Explain the “fallback network mode” of Athens
In fallback mode, Athens always retrieves the module from its storage if the VCS fails
Remember our GOVCS setting above? This effectively tells our Athens cluster that “whenever you see requests to download a Go module from the Go monorepo, only get it from your storage no matter what!”
Benefits and Results
Reduced CPU and memory usage, enabling scaling down of the Athens cluster
Mitigation of GitLab load issues
Improved developer experience with faster dependency fetching
(Include performance improvement metrics/graphs)
Why use a private Go Module Proxy?
Discuss the “leftpad” issue and the importance of dependency management – you don’t want to deal with disappearing dependencies
In March 2016, a developer deleted the left-pad package from npm, which was a small utility that many projects depended on. This deletion caused significant disruptions, as numerous projects failed to build due to their reliance on this seemingly trivial piece of code.
“You could say, I could vendor my Go modules :shrug:”
The vendor directory is large (esp. for a monorepo) and causes slow checkouts
Merge conflicts
Security reasons: modules can change upstream without you knowing, but having a proxy means storing the modules on your own terms, so you know you’re getting them from the “right” source
Call to Action
Encourage the audience to set up private Go module proxies in their organizations
Mention the possibility of running Athens in offline mode for air-gapped environments
Invite contributions to the Athens project
Blog post for further reading: https://engineering.grab.com/go-module-proxy
Notes
^ back to index
48. The Why of the iterator design
Abstract
With Go 1.23, Go got user-defined iterators that integrate with range. The final design has sparked a lot of discussion: Why does it use functions, not interfaces? Why this particular signature?
I’ll walk you through the history of this design, as a case study of how Go language features are discussed.
Description
The discussion about custom iteration started back when generics were released in Go 1.18. It was clear, at the time, that if we wanted to support user-defined container types, we needed a good answer for how they should be iterated over. Since then, we have gone through several different designs, all of which were rejected, until we finally converged on the now-released design towards the end of 2023.
Many people who saw this new feature for the first time nearing the release of Go 1.23 reacted to it with doubt. The implementation looks complex, the syntax hard to read. Many alternatives are being proposed on Reddit, Twitter or Slack. But many of these alternatives have already been discussed.
I want to talk about these alternatives. I don’t want to tell you how iterators work in Go 1.23 - but why they look like they do. I want to explain what we wanted from them and go through the various designs that came before and summarize, why they were ultimately rejected. And hopefully give you some insight into how new features like this are designed and discussed with the community.
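For readers who have not yet seen the released design, this is the signature the talk is about: an iterator is a function that takes a yield callback and stops early when yield returns false. A minimal example using the Go 1.23 iter.Seq form (the Evens helper is illustrative):

```go
package main

import (
	"fmt"
	"iter"
)

// Evens returns an iterator over the even elements of s.
func Evens(s []int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for _, v := range s {
			if v%2 != 0 {
				continue
			}
			if !yield(v) {
				return // the consumer broke out of the range loop
			}
		}
	}
}

func main() {
	for v := range Evens([]int{1, 2, 3, 4, 5, 6}) {
		fmt.Println(v) // prints 2, 4, 6
	}
}
```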
Notes
I have been a vocal part of the discussions around generics and iterators in particular since the beginning. So I have first-hand knowledge of how it went and what the major arguments were. I think it is very useful for the community to get a bit of that institutional knowledge out there. Not just to defend this design against some of the criticism, but in particular so that people can feel empowered to participate in the process themselves, earlier.
^ back to index
49. Enhancing Application Performance with Profile-Guided Optimisation in Go
Abstract
Unlock the power of Profile-Guided Optimization in Go! Learn how I boosted performance by up to 38% in production services. Discover practical strategies for implementing PGO, from Docker tweaks to automated platforms. Don’t miss this chance to supercharge your applications and slash resource usage!
Description
GopherCon 2024 — Proposal
Abstract
Profile-guided optimization (PGO) is a powerful tool that uses CPU profile data from an application to fine-tune subsequent compiler builds. Current improvements range from 2% to 14%, but future releases could offer even greater enhancements.
In this talk, I tell the story of my own journey enabling PGO on two of our production services.
A Primer on PGO
PGO is a widely used technique that can be implemented with many programming languages. In Go, it was introduced as a preview in Go 1.20 and became generally available in Go 1.21. Since then, several other companies have planned or started implementing PGO.
Enabling PGO in a Service
Enabling PGO starts with building your service using Go 1.20 or higher, because PGO support is only available from that version onwards. Enable pprof in your service, then capture a 6-minute profile and save it to /tmp/pprof with the following command:
curl 'http://localhost:6060/debug/pprof/profile?seconds=360' -o /tmp/pprof
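If the service does not already expose pprof, a minimal sketch of doing so looks like the following; the port and setup are illustrative.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on a separate, non-public port so the
	// curl command above can fetch a CPU profile from the running service.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the service runs as usual ...
	select {}
}
```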
Applying PGO to TalariaDB
As an example, let’s consider TalariaDB. It’s a distributed, highly available, and low latency time-series database for Presto, which we open-sourced at Grab.
As the entire service is managed by our team and runs on an EKS cluster, applying PGO involves updating the Docker image’s go build command to include -pgo=./talaria.pgo. The file talaria.pgo is a pprof profile generated from production services.
We use a Go plugin in TalariaDB, so the Dockerfile is updated to apply the same -pgo flag to the plugin build as well.
Here is an example of how the Dockerfile would look: <shorten it>
The Impact of PGO
Applying PGO makes a noticeable difference. Based on real measurements in our TalariaDB service, we achieved at least a 10% reduction in CPU usage, at least a 10 GB (30%) reduction in memory usage, and a massive 38% reduction in disk volume usage, which is primarily used for storing ingestion event queues.
Deploying PGO on Catwalk Service
We carried out various PGO experiments on other services as well, such as Catwalk. However, we observed that a mere 5% improvement wasn’t worthwhile considering the effort required to generate Docker images for each Catwalk application and to devise a workaround to hand over the pprof dump.
Planning Ahead with a PGO Optimization Flow
To tap into the power of PGO, we plan to establish a PGO platform. This strategy will streamline the process for teams integrating PGO into their services.
With this platform, users can define a strategy that specifies the features of the target service and the requirements for profile generation. Automated tasks will be set up for creating PGO profiles suitable for the compiler based on this strategy.
This profile can then be used for compiling an optimized binary version during the build process. Developers can perform standard performance tests and decide whether to roll out the optimized version.
This approach aims to facilitate efficient use of PGO, and paves the path to more performant applications.
Conclusion and Insights
In some services like TalariaDB, PGO has demonstrated significant benefits by improving service efficiency and reducing resource usage. However, the benefits can vary, and it’s essential to evaluate the alignment with operational realities and strategic objectives. Further improvements to go-build and PGO support for monorepo services could drive broader adoption and lead to powerful, system-wide benefits, enhancing response times, resource usage, and user experiences.
Notes
Notes for Reviewers
Why I’m the Best Person to Speak on This Subject
I have participated in a few Go conferences.
At Grab, I have been instrumental in leading the adoption of Profile-Guided Optimization (PGO) for our Go services. I have hands-on experience in enabling PGO for two of our critical services, TalariaDB and Catwalk, and have been closely involved in developing a streamlined PGO optimization flow for our engineering teams.
Through this journey, I have gained valuable insights into the implementation challenges, performance benefits, and operational considerations of PGO. I can share real-world examples, lessons learned, and best practices that can benefit other organizations and developers seeking to leverage PGO for performance optimization.
My strong technical background in Go, combined with my practical experience in implementing PGO at scale, makes me well-equipped to deliver an informative and engaging talk on this topic.
Overall, I am excited about the opportunity to share my knowledge and experiences with the Go community at GopherCon 2024. I believe my talk will be valuable for developers and organizations looking to optimize the performance of their Go applications through PGO.
^ back to index
50. Automatic efficient Go application by Profile-guided optimization
Abstract
We manage plenty of Kubernetes clusters in a private cloud, so we need to use CPU more efficiently. PGO enables the compiler to apply aggressive optimizations such as inlining and de-virtualization. This presentation gives you the knowledge about PGO and a practical example from our company.
Description
Our team manages plenty of Kubernetes clusters in a private cloud for our company’s services and develops many custom controllers to make them work efficiently and to reduce the management costs for each service. Reducing even a little bit of CPU utilization is therefore important for scaling up and growing the business.
PGO is a feature we have been able to use since Go 1.20. It provides the compiler with more meaningful profiles so it can apply aggressive optimizations such as inlining and de-virtualization. To get the best results, PGO requires profiles that are representative of the application’s actual behavior in production. We could instead pass representative benchmark results to PGO, but that is difficult and, in many cases, not useful for guiding the compiler, so Go’s PGO documentation recommends collecting profiles directly from the production environment.
The official post introduces a typical workflow like the following.
- Build and release an initial binary (without PGO).
- Collect profiles from production.
- When it’s time to release an updated binary, build from the latest source and provide the production profile.
- GOTO 2
This workflow is only the typical case; in practice, applying PGO to your application is more complicated.
So I will first give an outline of PGO, and then share a practical example from our team. In that section, I introduce a workflow for applying PGO to your application on Kubernetes.
Outline
- Why do we need PGO (5min)
- Self introduction
- About my company
- Basic information about PGO (5min)
- What PGO is and how it works
- The benefits of enabling PGO
- A practical example from my company (10min)
- How to introduce PGO to your application
- Results before and after adoption
- Conclusion
Notes
Presentations describing practical PGO examples are rare (as far as I know, only one presentation about PGO has been given, at GopherCon 2024). So I believe this presentation will be useful for sharing PGO knowledge with everyone.
In addition, there are some articles about PGO, but most of them only describe how to use it. Presumably, Gophers who have read the official PGO post already know the basic usage, but I think most people don’t know the more advanced next steps, since PGO has only been available since Go 1.20.
For this reason too, I think it is a meaningful presentation for all Gophers.
^ back to index
51. 80% faster, 70% less memory: the Go tricks we've used to build a high-performance, low-cost Prometheus query engine
Abstract
We’re building a brand-new Prometheus-compatible query engine for Grafana Mimir which runs up to 80% faster and with up to 70% lower peak memory usage. In this talk, we’ll share how we’ve achieved this, some of the Go performance lessons we’ve learnt, and how you can apply them to your own projects.
Description
We’ve been building a brand-new, Prometheus-compatible query engine from the ground up for Grafana Mimir in Go.
Our new query engine has been designed to deliver an improved user experience and vastly improved performance: our benchmarks show queries running up to 80% faster and with 70% lower peak memory consumption than Prometheus’ default engine, and our real-world testing shows similar results.
As we’ve been building the engine, we’ve learnt a number of Go performance lessons the hard way, including why using byte slices can sometimes be preferable to strings, the benefits and costs of memory pooling and the surprisingly large impact of function pointers. And we’ve seen the complexity (and bugs!) these things can introduce too, and developed a number of techniques to help combat this.
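As one illustration of the pooling trade-off mentioned above, here is a minimal, generic sketch of slice pooling with sync.Pool. It is not the engine’s actual pooling code, which is more sophisticated (bucketed by size, among other things).

```go
package pool

import "sync"

// samplePool hands out reusable float64 slices to avoid allocating a fresh
// slice per query step.
var samplePool = sync.Pool{
	New: func() any { return make([]float64, 0, 1024) },
}

// GetSamples returns a pooled slice with zero length but retained capacity.
func GetSamples() []float64 {
	return samplePool.Get().([]float64)[:0]
}

// PutSamples returns the slice to the pool. The caller must not touch the
// slice afterwards - reuse-after-put is exactly the class of bug pooling
// can introduce.
func PutSamples(s []float64) {
	samplePool.Put(s[:0])
}
```

The hard part, as the talk notes, is not writing this code but keeping ownership of pooled memory clear enough that the bugs it can hide never reach production.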
In this talk, you’ll:
- Get a peek inside the engine and some of the key design decisions that have enabled these results
- Learn some of the Go performance lessons we’ve learnt along the way: the things that worked, the things that didn’t, and the thing that later caused us a day of hunting down a hard-to-replicate bug
- Learn some of the techniques we’ve implemented to combat the issues some of these performance optimisations can introduce
- Learn how to apply these ideas to your own projects
- Hear what we plan to do next to improve the engine even further
Notes
This talk could either be a standard 20 minute talk or an extended 35 minute talk.
^ back to index
52. From Bottlenecks to Breakthroughs: Elevating Performance with OpenTelemetry and Go
Abstract
This talk explores how effectively implementing the three pillars of observability—logs, metrics, and tracing—combined with several essential instrumentation techniques, can uncover critical bottlenecks, leading to more resilient software and improved business outcomes and system performance.
Description
In this talk, I will share lessons learned from past incidents and experiences in instrumenting various parts of a web service with logs, metrics, and traces using OpenTelemetry. We’ll explore techniques like writing cost-effective and impactful logs, both manual and automatic instrumentation to capture not just system performance but also business metrics, and comprehensive client and server-side monitoring for third-party services and infrastructures such as databases, queues, and message brokers. I’ll also introduce various types of middleware options that can help instrument different signals with Go.
We’ll then see how these techniques, along with observability tools, can empower software engineers to gain deeper insights into system performance, enabling them to detect issues long before applications reach production. By the end of this session, participants will have a stronger grasp of causality in performance contexts and be equipped to use the right tools to efficiently navigate and resolve system and performance issues.
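For context, here is a minimal sketch of the kind of manual instrumentation discussed in this talk, using the OpenTelemetry Go SDK. Names and attributes are illustrative, and a tracer provider is assumed to be configured elsewhere.

```go
package checkout

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

var tracer = otel.Tracer("checkout")

// ProcessOrder wraps a business operation in a span and records both
// technical and business attributes on it.
func ProcessOrder(ctx context.Context, orderID string, amount float64) error {
	ctx, span := tracer.Start(ctx, "ProcessOrder")
	defer span.End()

	span.SetAttributes(
		attribute.String("order.id", orderID),
		attribute.Float64("order.amount", amount),
	)

	if err := chargeCard(ctx, orderID, amount); err != nil {
		span.RecordError(err) // the error shows up on the trace
		return err
	}
	return nil
}

func chargeCard(ctx context.Context, orderID string, amount float64) error {
	// Placeholder for the real third-party payment call.
	return nil
}
```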
Notes
This talk is based on real-world experiences, focusing on lessons learned from past incidents where proper instrumentation was key to quickly identifying problems and finding root causes. I’ll also cover important practices that work well with instrumentation, such as visualizing metrics, creating metrics from logs, and using traces with tools like Prometheus, Loki, and Jaeger.
In my role as the SRE Lead at Goto Financial, I’ve gained a lot of hands-on experience in improving system performance and reliability. I’ve also created and taught several bootcamps, an online course, and a Udemy course on software instrumentation and performance using Golang. This talk combines my practical experience with my teaching background to give attendees useful insights and techniques they can apply in their own work.
^ back to index
53. Why we can't have nice things: Generic methods
Abstract
Go got generics, but they are pretty limited. One of the most unpopular limitations is the inability to have methods with extra type parameters. The proposal is stalled, despite having over 600 upvotes.
Why can’t we add this widely popular feature? There happen to be good, technical reasons.
Description
Generic methods would allow us to write chaining APIs. This is particularly important for functional iterator patterns, which currently have to stretch over multiple lines or be awkwardly nested. It would also allow us to properly attach generic code to the type it belongs to. An example is func N[Int constraints.Integer](n Int) Int in package rand: it makes it easier to choose a random time.Duration, for example. But it should really be a method on *rand.Rand.
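To make the limitation concrete, here is a small illustrative sketch (the names are mine, not from the proposal): the method form people ask for does not compile, so the generic code has to live in a package-level function instead, which breaks chaining.

```go
package seq

// What is being asked for - a method that adds its own type parameter - is
// not valid in today's Go:
//
//	func (s Seq[T]) Map[U any](f func(T) U) Seq[U] { ... } // does not compile
//
// so the generic code has to be a package-level function instead.

type Seq[T any] []T

// Map applies f to every element, returning a new sequence of the mapped type.
func Map[T, U any](s Seq[T], f func(T) U) Seq[U] {
	out := make(Seq[U], 0, len(s))
	for _, v := range s {
		out = append(out, f(v))
	}
	return out
}
```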
So no one really contests that it would be useful to have this feature. Even the Go team largely agrees. But the omission was intentional. The reasons for it stretch all the way back, to even before Go’s first open source release. In fact, it might be the reason we even got generics at all.
I walk through the uses of this feature, the reasons adding generics took so long, and how they led to this omission.
Notes
The talk won’t require any specialized knowledge of Go or programming language design. Some familiarity with Go is assumed, but anything else is explained from first principles.
The goal is to provide a digestible summary of the discussion #49085 and to hopefully give some insights into the language change process and the kinds of questions that come up for such proposals.
The structure of the talk is
- Outlining the feature and how it would be useful
- Introducing the generic dilemma by Russ Cox, which is what kept Go from getting generics for a long time
- Explaining how the impasse was resolved by the design decision not to require specific implementation strategies in the generics design, thus finally enabling us to add them.
- Show how this implies we cannot have rank-2 polymorphism (i.e. uninstantiated generic types/functions)
- Show how that implies we cannot have generic methods - at least in any way that fits into the language. There are three sub-sections; they all boil down to “dynamic dispatch means you’d need runtime code generation or boxing”
- Mention a couple of limited alternatives suggested and why they won’t happen.
I have most of the slides prepared and extensive notes on what I will say.
^ back to index
54. So, you want to add sum types to Go?
Abstract
Despite being a widely requested language feature, Go does not have sum types. I will discuss the design space for them and use that as a case study to illustrate how new Go language features are discussed and the kinds of questions that need to be answered before they can be added.
Description
You might have heard of “sum types”, “union types”, “variants”, “algebraic data types” or “enums”. You might have heard someone say “Go already has sum types, they are called interfaces” and the retort “but I want closed sums!”. You might have heard that “there is no point adding sum types without pattern matching”. And maybe you are curious what all those words mean. Or maybe you just want to be able to say that a value has to be one of several types, why is that so hard‽
I will explain what variants are, what they are useful for and all the questions you need to answer if you actually want to add them to a language and what their implications are. And I will put that into the context of adding them to Go: Why it has not happened yet and what I would predict for their future.
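As a concrete reference point, here is the common way Go code emulates a closed sum today; the types are illustrative. It also shows two of the open questions the talk covers: the zero value of the interface is nil, and the compiler offers no exhaustiveness checking.

```go
package shape

import "math"

// Shape is a "sealed" interface: the unexported marker method means only this
// package can add variants, approximating a closed sum.
type Shape interface{ isShape() }

type Circle struct{ Radius float64 }
type Rect struct{ W, H float64 }

func (Circle) isShape() {}
func (Rect) isShape()   {}

func Area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return math.Pi * v.Radius * v.Radius
	case Rect:
		return v.W * v.H
	default:
		// Reached for nil (the zero value of Shape) or any future variant:
		// the compiler gives no exhaustiveness checking.
		return 0
	}
}
```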
Notes
Note: I proposed this as a Deep Dive talk, as I think I have more content than fits into 20 minutes. But I can imagine cutting it down to 20 minutes, if need be.
I think it would be very useful to provide the community with a bit of insight into the process to change the language. This particular feature is extremely often discussed (especially now that we have generics), but it’s also easy to lose track of the design space. So I think it is a good case study to demonstrate Go’s design philosophy and process. I believe it would also be helpful to the Go team, if more people have context for this particular discussion.
The specific questions I want to talk about are
- The difference between Sum and Union types.
- Enums as a special case.
- Open vs. Closed and gradual repair.
- The problem of zero values.
- Exhaustiveness checking for switch. Especially the difficulties of doing that with Union types.
- Pattern matching.
- My prediction of whether Go will ever get them and if so, how (and why the people asking for them will probably be unhappy if they get them).
I’m qualified to be the person to talk about it, because I’ve observed this discussion for close to a decade now. And I think I have a fairly accurate idea of how the Go team thinks about these questions. Also, there are interesting complexity-theory problems of the kind I have talked about before, when it comes to exhaustiveness checking.
^ back to index
55. Exploring the Robustness of Go: Balancing Strengths and Fragilities
Abstract
This talk explores Go’s strengths in memory and type safety and addresses weaknesses like panic handling and lack of generics. Attendees will gain insights into Go’s capabilities and learn strategies to enhance the resilience and reliability of their Go applications.
Description
Talk Description:
Introduction:
Robustness in software refers to the ability to withstand and adapt to unforeseen challenges and changes in the environment. In this talk, we’ll explore what makes Go robust and where it falls short. We’ll start by defining what robustness means in the context of programming and then move into an analysis of Go’s features that promote robustness, such as memory safety, type safety, and built-in concurrency mechanisms.
Outline:
Defining Robustness:
Explanation of robustness in software.
Examples of robustness versus fragility in systems.
Go’s Robust Features:
Memory Safety:
How Go’s memory model avoids common pitfalls like memory corruption.
Discussion of pointers, garbage collection, and slice/array bounds checking.
Type Safety:
Static typing and its impact on robustness.
Avoidance of unsafe type coercion and the benefits of explicit type conversion.
Concurrency and Error Handling:
Go’s approach to managing concurrent processes.
Use of goroutines, channels, and why Go opts for errors over exceptions.
Go’s Fragilities:
Mutable Shared State:
Risks associated with mutable shared state and data races.
Tools and techniques to mitigate these risks, such as the race detector (a minimal sketch follows this outline).
Panic Handling:
Limitations of Go’s panic and recover model.
Scenarios where Go’s approach can lead to system-wide failures.
Lack of Generics:
How the absence of generics impacts code reuse and error handling.
Potential improvements brought by the introduction of generics in Go 1.18.
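As a small illustration of the mutable-shared-state point above, here is a sketch of a racy counter and its standard mitigation; running the racy version from concurrent goroutines under go test -race reports the data race.

```go
package counter

import "sync"

// Unsafe demonstrates mutable shared state: calling Inc from multiple
// goroutines is a data race.
type Unsafe struct{ n int }

func (c *Unsafe) Inc() { c.n++ }

// Safe is the standard mitigation: guard the shared state with a mutex
// (channels or atomic operations are alternatives, depending on the problem).
type Safe struct {
	mu sync.Mutex
	n  int
}

func (c *Safe) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}
```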
Conclusion:
In conclusion, Go’s built-in features provide a strong foundation for building robust applications, but there are areas where it can be fragile. By understanding these strengths and weaknesses, developers can make informed decisions and implement strategies to enhance the resilience of their Go applications.
Notes
^ back to index
56. Mastering Error Handling in Go: Best Practices and Pitfalls
Abstract
Explore Go’s unique error handling, from best practices to using panics judiciously. Learn techniques to enhance error management, making your Go code more reliable and maintainable. This talk offers practical strategies to distinguish robust systems from fragile ones in Go development.
Description
Talk Description:
Introduction:
Error handling in Go is a topic that often sparks debate, especially among developers coming from languages with different paradigms. Go’s unique approach to errors—as values—demands a shift in mindset and coding practices. This talk aims to provide a deep dive into Go’s error handling philosophy, practical strategies for handling errors effectively, and common pitfalls to avoid.
Outline:
- Why Error Handling Matters in Go:
- The inevitability of errors in programming and the importance of handling them correctly.
- Comparison with error handling in other languages (e.g., Java, Python) and why Go’s approach is unique.
- The philosophy behind Go’s error handling: defensive programming and the principle of explicit error checking.
- Understanding Go’s Error Handling:
- How Go treats errors as first-class citizens: the error type.
- Common patterns: if err != nil and why this pattern is both powerful and necessary.
- The benefits of checking errors immediately where they occur, promoting clear and readable code.
- When and How to Use Panic and Recover:
- The appropriate use cases for panic and recover in Go.
- Differentiating between errors and panics: when is it right to use each?
- Examples of using panic in scenarios like irrecoverable states or programmer errors.
- Best practices for using recover to gracefully handle panics and prevent application crashes.
- Enhancing Errors with Context:
- Techniques for adding context to errors to make them more informative.
- Using custom error types and wrapping errors with additional context.
- How to implement and use Go 1.13’s errors package to unwrap and inspect errors (a minimal sketch follows this outline).
- Handling Errors in Large Codebases:
- Strategies for managing errors in large, complex Go applications.
- Organizing error handling code to maintain readability and reduce boilerplate.
- The role of logging in error handling: when to log, what to log, and how to ensure logs are useful.
- Common Pitfalls in Go Error Handling:
- Avoiding the trap of overusing panic and recover.
- The dangers of ignoring errors or using error values improperly.
- How to prevent error handling from becoming repetitive and bloated.
- Real-World Examples and Case Studies:
- A walk-through of error handling in a real-world Go project.
- Before and after: how applying best practices improved code reliability and maintainability.
- Lessons learned from Go projects where error handling went wrong and how to avoid similar mistakes.
- Conclusion:
- Recap of the key takeaways: the importance of explicit error handling, when to use panic and recover, and how to enhance errors with context.
- Encouragement for Go developers to embrace Go’s error handling model and apply these best practices in their own projects.
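As a concrete reference for the wrapping and inspection techniques listed above, here is a minimal sketch using the standard errors package; the types and messages are illustrative.

```go
package store

import (
	"errors"
	"fmt"
)

// Sentinel error that callers can test for with errors.Is.
var ErrNotFound = errors.New("user not found")

// QueryError is a custom error type carrying extra context; Unwrap keeps the
// underlying cause reachable for errors.Is and errors.As.
type QueryError struct {
	Query string
	Err   error
}

func (e *QueryError) Error() string { return e.Query + ": " + e.Err.Error() }
func (e *QueryError) Unwrap() error { return e.Err }

func FindUser(id string) error {
	// %w wraps the sentinel so it stays matchable further up the stack.
	return &QueryError{
		Query: "SELECT * FROM users WHERE id = ?",
		Err:   fmt.Errorf("lookup %q: %w", id, ErrNotFound),
	}
}

func HandleRequest(id string) string {
	err := FindUser(id)
	var qe *QueryError
	switch {
	case errors.Is(err, ErrNotFound):
		return "404"
	case errors.As(err, &qe):
		return "bad query: " + qe.Query
	default:
		return "500"
	}
}
```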
Conclusion:
This talk will provide Go developers with a comprehensive understanding of error handling in Go, equipping them with the knowledge and tools to write more reliable and maintainable code. Attendees will leave with a clear strategy for handling errors in their Go applications, making their codebase more robust and their debugging process more straightforward.
Notes
^ back to index
57. Accelerating Cloud-Native development with Workspace: A Fast-Track to Consistency and Efficiency
Abstract
Discover how Workspace, a powerful Go framework with ‘Kit’ packages, integrates with Kubernetes and Tilt to streamline cloud-native development. In just 20 minutes, learn how to enforce consistent standards, optimize local environments, and boost innovation.
Description
Cloud-native development can be complex and chaotic, but it doesn’t have to be. Workspace is a Go framework designed to bring order and efficiency to your development process. Central to Workspace is “Kit,” a set of standard packages that provides a proven starting point, reducing repetitive tasks and enabling engineers to focus on what truly matters.
In this fast-paced 20-minute session, we’ll explore how Workspace and Kit integrate seamlessly with Kubernetes and Tilt, transforming the way you build and deploy applications.
Key Takeaways:
- Effortless Integration: Learn how Workspace simplifies the integration of applications with Kubernetes and Tilt, optimizing your local development environment for faster, more reliable testing and deployment.
- Jumpstart with Kit: Discover how Kit provides a solid foundation for new projects, following Go best practices to minimize guesswork and help developers quickly maintain a mental model of their projects.
- Consistency at Scale: See how Workspace enforces coding standards and best practices across teams, ensuring that your projects remain consistent, maintainable, and scalable.
- Proven Impact: Hear about real-world examples where Workspace and Kit have dramatically improved development speed and code ownership, providing you with actionable insights you can apply immediately.
By the end of this talk, you’ll be equipped to implement Workspace and Kit in your projects, ensuring that your applications are built and deployed with the highest standards of consistency and efficiency.
Notes
- Technical Requirements: A stable internet connection is necessary for a brief demonstration of Workspace and Kit in action.
- Speaker Expertise: With 6 years of Go experience and hands-on success implementing Workspace and Kit in real-world environments, I’m uniquely positioned to share practical insights that attendees can apply to their own projects.
- Why This Talk?: In just 20 minutes, this talk will offer a concentrated dose of practical, actionable advice that can make a real difference in the way attendees approach cloud-native development.
^ back to index
58. From the Top: Mastering the DevOps Machine Learning Pipeline for Unrivaled Innovation - A CEO's Perspective on Cool DevOps
Abstract
From a CEO’s perspective, integrating DevOps with machine learning pipelines is key to strategic advantage, driving innovation, market agility, and operational efficiency. This presentation underscores real-world successes and views DevOps as crucial for future business growth.
Description
How can DevOps, when seamlessly integrated with machine learning pipelines, become a powerhouse for innovation and competitive advantage? This exploration, from a CEO’s perspective, unveils DevOps not just as a collection of practices and tools but as a pivotal asset in strategic business positioning and market leadership. Learn how melding DevOps with machine learning amplifies its strategic importance, driving product innovation, enhancing customer experiences, and facilitating agile responses to market dynamics. We will highlight concrete examples where the synergy of DevOps and machine learning has led to notable business successes, including market expansion, the swift introduction of innovative products or features, and achieving operational efficiencies. The presentation concludes with a visionary outlook on DevOps, enriched with machine learning, as a transformative force in business growth and evolution.
Notes
Notes:
Key Takeaways:
Innovative Integration: Veritas Automata leverages the innovative integration of blockchain technology and smart contracts with autonomous transaction processing to revolutionize industries.
Strategic Use of Technology: Utilizing Rancher K3s Kubernetes, GitOps, OTA custom edge images from Mender, and ROS2 signifies a strategic approach to achieving efficiency and continuous delivery in the cloud and at the edge.
Blockchain and AI/ML Synergy: The synergy between Hyperledger Fabric Blockchain and AI/ML capabilities at the edge showcases a comprehensive solution for complex business challenges, ensuring secure and intelligent transaction processing.
Industry Leadership: The company’s excellence in sectors such as life sciences, supply chain management, transportation, and manufacturing highlights its role as an industry leader.
Commitment to Positive Impact: Veritas Automata’s focus on innovation, improvement, and inspiration underlines a commitment to steering the future towards positive outcomes, avoiding dystopian scenarios.
^ back to index
59. How Golang Changed My Life.
Abstract
“The only easy day was yesterday.” Revisiting your past programming fears, challenges, and experiences with a different perspective and mindset.
Description
1- Introduction ( The early Days ):
The first time I saw a computer was in form 3, and it was a moment I’ll never forget. Until then, my exposure to technology had been limited to television and the occasional radio. When I entered the computer lab at school, I was amazed by the rows of computers with their blinking lights and screens. I didn’t know what those lights signified, but I was fascinated by the idea that you could interact with a screen using a small device called a mouse. I remember watching other students navigate through menus and click icons, thinking it was the coolest thing in the world.
I think I have the best parents in the world that have always wanted me to succeed, but they couldn’t afford to buy a computer. As a result, I often had to borrow completed assignments from other students to do my homework, emphasizing the word “completed” since most times I didn’t understand what the teacher was asking us to do. Although I didn’t fully understand the code they had written or what they’ve done, I was eager to learn more. I would ask them to explain their work, hoping to grasp the concepts behind it. There were many times when I had to go to a friend’s house to complete my practical assignments. She and her family were very welcoming, and lived just a few meters away. She obviously knew a lot more about programming than I did. Although I said I was doing my assignments, she did most of the work while I followed along, trying to understand what was happening on the computer. We would spend hours after school at her place, and I became more comfortable with programming thanks to her guidance.
Getting access to the school’s computer lab wasn’t easy. The school had a strict policy on which classes were allowed in the computer lab, and being a young kid in a non-examination class, I felt discouraged by this policy. Despite this, I understood that the school had valid reasons for such restrictions.
Due to my lack of access to the computer lab and the fact that we didn’t have a computer at home, my enthusiasm for learning programming began to wane. Because of this, I didn’t do well in the first two terms of Form 4. My computer teacher passed an ultimatum: anyone who didn’t pass his tests during the first two terms wouldn’t be allowed to take his subject in Form 5, which was the next year and an examination class. Given this ultimatum, I had no choice but to drop the subject in Form 5.
In my final year of high school, I managed to build some momentum and reignite my programming journey. However, I still struggled during the first and second terms, leaving me with a lot of catching up to do. There was one pivotal moment in the computer lab that ultimately became a significant learning experience. One day during a practical session in the Lab, I was asked to explain the difference between “volatile” and “non-volatile” memory. Having just returned to computer science after being forced to drop it in Form 5, I was excited but nervous. Unfortunately, I reversed the definitions, assigning the characteristics of volatile memory to non-volatile memory, and vice versa. I presented my answer with confidence, but my teacher simply shook his head, and the entire class burst into laughter. As one of only three students from an Arts Major taking a computer science course among a class full of Science Majors, the experience was humiliating.
I seriously considered never returning to that classroom. However, when the next class came around, I mustered the courage to go back. Looking back on it now, I realize that my determination to succeed was stronger than my embarrassment. Despite being laughed at by an entire class, I returned to continue my learning journey.
In high school, I didn’t do a lot of programming. We started with HTML, but I found it intriguing how lines of code could create web pages. At the time, programming in C was part of the curriculum, but it seemed so distant and complex. I knew it was a programming language, but the specifics were beyond my grasp. We didn’t have many resources, and there were few teachers who understood advanced programming languages.
When I entered college, I was surprised to find that most of the courses focused on hardware and theory, with limited attention to software development. This made it challenging to find guidance and resources to pursue my passion for programming. My first attempt at writing a C program in college was daunting—I struggled with syntax errors and understanding basic concepts. Adding to the challenge, I didn’t have a laptop during my first semester in college, so I had to write the code by hand and cross my fingers that it would work or it was correct. The lack of mentorship and limited access to educational materials made the learning process slow and sometimes frustrating.
In college, I faced the most challenging time of my academic life. I loved computers, but I didn’t meet all the necessary requirements, coming from an arts background instead of a science background. As a result, I couldn’t get into a state university, which would have significantly reduced the tuition burden on my parents. The state university rejected my application to study computer engineering because I was an arts major. This made sense, as computer engineering is typically associated with those from a science background, who are expected to have more mathematical brain cells or something like that. The only options available to me were public administration and a host of other courses that I didn’t like or understand.
In a state of confusion, since attending a state university on a scholarship was my only plan for pursuing my dream of computer science, I had no choice but to apply to a private university. This was contrary to what I had hoped for my parents, as it would increase the financial strain, but it allowed me to pursue what I wanted to study in college. Fortunately, the private university offered candidates the flexibility to choose their field of study, so I selected computer engineering.
The first and second semesters were the most difficult part of my college experience. As a quick note, I was an arts major who had never taken advanced mathematics, physics, or chemistry in high school. I knew before choosing this path that it wouldn’t be easy, and I remember the day I saw the department’s course timetable. I had to ask the meaning of some of the course names listed on the schedule, as they were entirely new to me coming from an arts background. Additionally, my science-major classmates in the room during coding sessions or engineering mathematics classes seemed to grasp concepts much faster. It often felt like I was the only one struggling to keep up, I would frequently ask professors to explain concepts two or three times.
Despite this, I had excellent professors who wanted me to succeed, and my habit of asking for help when I didn’t understand became something that paid off. This habit, which I wasn’t ashamed of, helped me improve my relationship with my classmates, but of course some of them hated me. It became a class-wide joke that when a professor explained something during a coding session, everyone would turn and look at me, knowing I would say, “Wait, what? Could you explain that again?” Even though it felt like I was holding the class back, I found that my determination to ask questions and seek help from those who were smarter than me ultimately helped me succeed.
2- Discovering Go (The Turning Point):
A few years ago, I embarked on a journey to teach myself various programming languages, including JavaScript, Python, Java, and even a bit of C. However, my relationship with C was fraught with hesitation, primarily due to its complex schema and the discouragement from my college classmates. They often remarked that C was a challenging language to grasp, and I let that sentiment shape my perception of it, solidifying my aversion to C. Despite these challenges, my mentor, Ryan Yoder, introduced me to Golang while we were working on a minimum viable product (MVP) for a startup. I had just over a week to get up to speed with Go, which felt like a daunting task, especially with my preconceptions about C looming in the background. At first glance, Go seemed like a superset of C, evoking memories of my college years when even the mention of “pointers” sent shivers down my spine. Other programming languages I had explored never intimidated me as much as C, and this sudden reintroduction to similar concepts initially felt like a hurdle. However, as I delved deeper into learning Go, I began to notice the differences that set it apart from C and other languages I was familiar with.
One unique aspect of Go that quickly stood out to me was its design philosophy and the rationale behind its creation. In my experience with other programming languages, I never took the time to explore the reasons behind their development or the specific problems they aimed to solve. This was the first time I delved into the “why” of a programming language—the underlying purpose driving its creation. The compelling force that attracted me to Go and made me a Golang enthusiast was its commitment to simplicity. The idea that a language could be built on such clear and straightforward principles was truly fascinating to me. The idiomatic approach to coding, with a focus on efficiency and readability, made Go distinct and refreshing in a world where programming languages often prioritize complexity and feature overload.
Another feature of Go that quickly stood out was its approach to concurrency through goroutines and channels. This concept of lightweight threads was unlike anything I had encountered in languages like Java, where concurrency was often achieved through complex patterns or additional libraries. The ease of managing concurrent operations in Go was like a breath of fresh air, offering a straightforward way to handle parallel tasks without the overhead of traditional threading. As I progressed through online resources and tutorials, I discovered Go’s emphasis on simplicity and readability. Unlike C, where I often felt tangled in complex syntax and cryptic error messages, Go provided a much clearer and more concise way to express logic.
Although it took me some time to grasp concepts like “pass by value” and “pass by reference,” Go’s straightforward error handling and detailed stack traces helped ease the learning curve. Despite the tight deadline to get up to speed with Go, I found myself gradually enjoying the language’s simplicity and flexibility. During that intense week, I managed to build a small billing system and learned on the job, which was a significant milestone for me. It wasn’t just about meeting the project deadline; it was about overcoming my fear of C-like languages and discovering the beauty of Go’s design philosophy.
In the end, my journey with Golang taught me that it’s possible to break through preconceived fears. I suspect this is a problem for other developers too: disliking a particular programming language not because it’s a terrible language, but because of what I might call its ergonomics (or maybe I’m the only one). Learning Go gave me the space to revisit, with a different mindset, my past experience of struggling with pointers while learning C.
The language addressed many of the issues I had encountered in other programming endeavors: simplicity; strong community support (the kind that actually answers your questions and gives feedback, like the Golang-insider community on X); developer tooling such as formatters and testing packages baked into the standard library; and a design philosophy built around the language (Go idioms). It offers a unique combination of power and simplicity. This experience not only broadened my programming skills but also reshaped my perspective on learning and growth.
3- Professional Development (Impact of Golang):
When I started my journey with Golang, I experienced a steep learning curve. Coming straight out of college with only a handful of programming language knowledge and software development experience, I didn’t realize the full extent of skills required to succeed in the software engineering field. In my country, software engineering is a highly sought-after skill across all sectors, but there’s a common misconception that getting a job is straightforward if you call yourself a software engineer or hold a certificate.
My early years after college were marked by this “get a job quickly” mentality, believing that having a degree would be enough to secure employment. It wasn’t until I began learning Golang that I understood the depth of skills and complementary expertise needed in the tech industry.
One significant turning point was joining the Golang Slack community, thanks to an introduction by Martin Gallagher. Engaging with other developers in this community, as well as the Golang-insider community on X (shoutout to Matt Boyle), provided me with a broader perspective on software development. These interactions helped me set a clearer vision for my software career, emphasizing the importance of continuous learning and community support.
Initially, learning Golang was challenging. I wasn’t sure where to start, and there was always a question of which technology or programming language to choose, which I guess is a common question among beginners, whether they are new to tech or deciding to learn a new programming language. Fortunately, I found helpful resources like online courses, books, and active developer communities. These resources provided the guidance I needed to navigate the Golang ecosystem and gain confidence in my skills.
The first time I used Golang was literally to build a startup’s MVP, and that project played a significant role in my learning and exploration of Golang. It allowed me to apply what I was learning, and through it I experienced both successes and failures. These early experiences were crucial in shaping my understanding of the language and its real-world applications. They taught me the importance of debugging, code optimization, and effective communication with my peers.
Fast forward to today: after learning Golang for just one year, I’m a tech lead and the lead backend engineer at work. This rapid career progression is not to brag, but to illustrate how Golang can fast-track one’s career and open up new opportunities. The success stories, like mine, showcase the increasing adoption of Golang in various companies and industries, reflecting its robustness and efficiency.
Overall, learning Golang has been a journey of continuous growth, challenging my skills, and offering new opportunities. My advice to anyone starting with Golang is to leverage the community, embrace the learning curve, and take on projects that push your boundaries. It might be challenging at first, but the rewards are worth the effort.
4- Looking Ahead - The Future with Go (Conclusion):
As I look to the future, my aspirations as a Go developer are driven by a deep appreciation for the language’s design philosophy of simplicity and efficiency.
Despite hearing that some tasks require more code in Go compared to other languages, I see this as an opportunity to embrace the clarity and readability that comes with explicitness. For me, the discipline of writing clear, straightforward code aligns with my values as a developer, promoting maintainability and reducing the risk of errors. The growing ecosystem and community support for Go are additional incentives for me to keep developing with this language.
I aim to use Go not only to further my career but also to make a positive impact in my local community. I plan to give talks at meetups about Go, share knowledge with colleagues, and contribute to open-source projects, fostering a collaborative environment where others can learn and grow. I want to inspire the next generation of developers by showing them the power of Go, encouraging them to pursue their dreams in technology, and helping them overcome challenges I once faced.
As I continue my journey with Go, I envision myself taking on more leadership roles, mentoring aspiring developers, and advocating for the language at industry conferences. By sharing my experiences and lessons learned, I hope to inspire others to explore Go and realize its potential. Ultimately, my goal is to make a lasting impact on the tech industry, one line of Go code at a time.
Notes
I don’t consider myself to be a particularly great engineer, but rather someone who has faced many challenges from the early days of my career. I’ve had to ask countless questions, seek advice from people smarter than myself, and bury my pride to overcome my fears and misconceptions. Learning Golang has been a significant part of this journey. As I write this paper now, I reflect on the importance of putting myself out there and persevering through these obstacles.
^ back to index
60. Who broke the build? — Using Kuttl to improve E2E testing and release faster
Abstract
No one wants to be responsible for breaking the build. But what can you do as a developer to avoid being the bad guy? How can project leads enable their teams to reduce the occurrence of broken builds?
Description
No one wants to be responsible for breaking the build. But what can you do as a developer to avoid being the bad guy? How can project leads enable their teams to reduce the occurrence of broken builds?
In talking within our own teams, we discovered that many developers weren’t running sufficient integration and End to End tests in their local environments because it’s too difficult to set up and administer test environments in an efficient way.
That’s why we decided to rethink our entire local testing process in hopes of cutting down on the headaches and valuable time wasted. Enter Kuttl. Connecting Kuttl to CI builds has empowered our developers to easily configure a development environment locally that accurately matches the final test environment — without needing to become an expert CI admin themselves.
These days, we hear, “Who broke the build?” far less often — and you can too!
Notes
Session Outline:
In this session, we’ll cover:
● A quick history of our testing challenges and what led us to Kuttl
● The benefits of our new testing approach — easy to configure and minimal investment
● How we combine Kuttl and CI pipelines for more streamlined testing and fewer broken builds
Session Key Takeaways:
- When and why we decided to rethink our e2e testing practices and our subsequent discovery of Kuttl.
- Why Kuttl has been the perfect tool for our developers to perform better local integration/e2e testing without the burden of becoming their own CI administrators.
- A detailed account of how we utilize Kuttl to set up development environments locally that match our final test environment in order to reduce unnecessary commits and minimize CI build breaks.
References to open-source projects used in this talk:
- KUTTL - https://github.com/kudobuilder/kuttl or https://kuttl.dev/
- Vcluster - https://github.com/loft-sh/vcluster
- k9s - https://github.com/derailed/k9s
- Helm - https://github.com/helm/helm
- Kubernetes - https://github.com/kubernetes/kubernetes
^ back to index
61. From fmt.Println("Hello, world!") to continuously deploying apps to production.
Abstract
Transitioning to Go can be challenging for developers familiar with other programming paradigms, particularly when it comes to concurrency. In this talk, I’ll share insights from my journey learning Go, highlighting common obstacles and practical strategies I’ve developed.
Description
For developers accustomed to other programming paradigms, transitioning to Go can present initial hurdles, particularly when dealing with concurrency. This talk unpacks my journey of learning Go, highlighting common roadblocks encountered and the practical strategies I’ve developed to bridge the gap for junior developers.
Target Audience:
Newcomers: Developers new to Go seeking foundational knowledge.
Transitioners: Experienced programmers exploring Go as a new language.
Mentors: Senior developers guiding junior colleagues in adopting Go.
Notes
^ back to index
62. Practical GenAI with Go
Abstract
Learn how to practice Generative AI in Golang, using some popular tools written in Golang. You’ll learn the commonly used lingo, like LLM, GenAI, RAG, and VectorDB, and what these terms mean. You’ll leave with some ideas on how to start coding, now, with a model hosted on your laptop!
Description
As gophers, we’re used to a new technology buzz every several years. Generative AI has excited us with ChatGPT and AI assistants, but the practical side may be more intimidating than most recent buzz. This is due in part to a large and deep landscape of tools, databases and jargon, as well as Python centrism.
This session takes a programmer’s perspective to help you gain a foothold in AI with Go. We’ll bind some jargon such as prompt, vector database, LLM, model, and RAG to simple working code. You’ll work exclusively in Go, and offline via Ollama, itself written in Go.
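As a taste of how little ceremony this involves, here is a minimal sketch (my illustration, not the talk’s code) that calls Ollama’s local HTTP API from plain Go; it assumes Ollama is running on its default port and that a model such as llama3 has already been pulled:
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Request/response shapes for Ollama's /api/generate endpoint (non-streaming).
    type generateRequest struct {
        Model  string `json:"model"`
        Prompt string `json:"prompt"`
        Stream bool   `json:"stream"`
    }

    type generateResponse struct {
        Response string `json:"response"`
    }

    func main() {
        body, _ := json.Marshal(generateRequest{
            Model:  "llama3", // assumes the model was pulled with `ollama pull llama3`
            Prompt: "Explain what a vector database is in one sentence.",
            Stream: false,
        })
        // Ollama listens on localhost:11434 by default.
        resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out generateResponse
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        fmt.Println(out.Response)
    }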
As this is an introduction, you won’t leave as a GenAI Engineer, but you will leave with a practical start to whatever journey you may have ahead.
Notes
I work on OpenTelemetry, specifically on aspects relating to GenAI/LLM observability. My goal is to bring more attention to the Go programming language, so developers don’t feel they need to switch to python in order to get a start in AI stuff.
^ back to index
63. Using Go to Scale Audit logging at Cloudflare
Abstract
Building a scalable distributed system is never easy. I will discuss the challenges we faced when scaling audit logs, particularly when the throughput increased by 100x. I’ll delve into how we identify bottlenecks and the various techniques we used to achieve scalability.
Description
At Cloudflare we operate applications at massive scale, and audit logging serves as a prime example of a distributed system handling millions of requests.
Audit logging is integral to modern software development, allowing organizations to track user actions, system activities, and data changes for security, compliance, and troubleshooting purposes.
In this session, I will discuss what audit logs are and why they are important in modern applications. I will delve into the challenges we encountered in scaling audit logging to accommodate a substantial increase in throughput. I’ll also cover various methods for identifying bottlenecks in Go programs using different tools and techniques, how different services should interact in a distributed environment, and strategies for designing scalable, fault-tolerant distributed systems in Go. Finally, I will talk through some common Go concurrency techniques we employed during the design of our audit log system (one such pattern is sketched after the topic list below).
Topics to be covered include:
- Understanding the Importance of Audit Logging in Modern Applications.
- Challenges of scaling Go applications in distributed systems.
- Tools and Techniques to Identify Bottlenecks, including Go Profiling.
- Communication Between Services in a Distributed Environment.
- Monitoring Go applications in production.
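As a rough illustration of the kind of concurrency technique mentioned above (my own sketch, not Cloudflare’s implementation), here is a minimal batching worker pool built from goroutines and channels:
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type AuditEvent struct {
        Actor  string
        Action string
    }

    // batchWriter drains events from the channel and flushes them in batches,
    // either when the batch is full or when the ticker fires.
    func batchWriter(id int, events <-chan AuditEvent, wg *sync.WaitGroup) {
        defer wg.Done()
        batch := make([]AuditEvent, 0, 100)
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()

        flush := func() {
            if len(batch) == 0 {
                return
            }
            // In a real system this would write to durable storage (e.g. a queue or DB).
            fmt.Printf("writer %d flushing %d events\n", id, len(batch))
            batch = batch[:0]
        }

        for {
            select {
            case ev, ok := <-events:
                if !ok {
                    flush()
                    return
                }
                batch = append(batch, ev)
                if len(batch) == cap(batch) {
                    flush()
                }
            case <-ticker.C:
                flush()
            }
        }
    }

    func main() {
        events := make(chan AuditEvent, 1000)
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ { // a small pool of writers
            wg.Add(1)
            go batchWriter(i, events, &wg)
        }
        for i := 0; i < 250; i++ {
            events <- AuditEvent{Actor: "user", Action: "login"}
        }
        close(events)
        wg.Wait()
    }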
Notes
The talk aims to offer insight into unique challenges of scaling Go projects. This will equip engineers to identify bottlenecks and strategies to design scalable applications in distributed systems using Go.
^ back to index
64. Securing Golang Services with Relationship-Based Access Control (ReBAC) Authorization
Abstract
Join me for an in-depth exploration of Relationship-Based Access Control (ReBAC) in Golang services/APIs to enhance security and scalability. You’ll leave with practical techniques and valuable insights for seamless ReBAC integration to improve authorization mechanisms.
Description
Introduction
Mastering authorization in distributed systems poses significant challenges, particularly when relying on traditional Role-Based Access Control (RBAC) models. Enter Relationship-Based Access Control (ReBAC), based on the Google Zanzibar white paper: a dynamic and nuanced solution that uses entity relationships to govern permissions. This presentation will unveil ReBAC, showcase its seamless integration with Golang, and offer real-world examples and best practices for successful implementation.
Outline
- Introduction to Google Zanzibar ReBAC
  - Definition and core principles with the help of Authzed/SpiceDB
  - Comparison with RBAC and ABAC (Attribute-Based Access Control)
  - Advantages of using ReBAC in dynamic and complex environments
- Implementing ReBAC in Golang
  - Setting up SpiceDB
  - Setting up a Golang project with SpiceDB
  - Defining relationships and permissions in the SpiceDB DSL
  - Implementing ReBAC logic in services and APIs using the SpiceDB client (a sketch follows below)
- Challenges and Use-cases
  - Challenges with ReBAC
  - Performance and scalability concerns
- Q&A Session
  - Addressing audience questions and providing further insights
Conclusion
By the end of this talk, participants will thoroughly understand ReBAC and know how to integrate it into Golang projects. This will provide Golang developers with advanced authorization tools for better security and scalability of their services and APIs.
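For a flavor of the implementation section, here is a rough sketch of a permission check using the authzed-go client for SpiceDB; the connection options and field names are written from memory, so treat this as an approximation and verify against the SpiceDB documentation before use:
    package main

    import (
        "context"
        "log"

        v1 "github.com/authzed/authzed-go/proto/authzed/api/v1"
        "github.com/authzed/authzed-go/v1"
        "github.com/authzed/grpcutil"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Connect to a local SpiceDB instance (endpoint and token are illustrative).
        client, err := authzed.NewClient(
            "localhost:50051",
            grpcutil.WithInsecureBearerToken("somerandomkey"),
            grpc.WithTransportCredentials(insecure.NewCredentials()),
        )
        if err != nil {
            log.Fatal(err)
        }

        // Ask: may user:alice view document:readme?
        resp, err := client.CheckPermission(context.Background(), &v1.CheckPermissionRequest{
            Resource:   &v1.ObjectReference{ObjectType: "document", ObjectId: "readme"},
            Permission: "view",
            Subject: &v1.SubjectReference{
                Object: &v1.ObjectReference{ObjectType: "user", ObjectId: "alice"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        allowed := resp.Permissionship == v1.CheckPermissionResponse_PERMISSIONSHIP_HAS_PERMISSION
        log.Println("allowed:", allowed)
    }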
Notes
I have been working closely on a solution that uses ReBAC authorization and solves my organization’s current authorization challenges. With my team, we have gone through the why, what, and how of ReBAC. This makes me well placed to share my experience of that journey and help other developers.
^ back to index
65. 🚀 The Power of Bloom Filters: Building a Cutting-Edge Go Search Engine to Explore the World's Source Code
Abstract
Learn how to use the humble bloom filter and Go to search the world’s source code! Discover how http://searchcode.com/ uses both to provide blazing fast search over 75 billion lines of code across 40 million projects by uniquely combining trigrams and bloom filters, all on a single dedicated machine.
Description
I have this content lying around from blog posts, so I thought I would put an outline below that shows where I think this should go.
- simple introduction, VERY simple, as nobody is there to hear about how great I am or where I work; just "I am the author of searchcode and a dev" (1 min)
- explain what http://searchcode.com/ is, how it came about and why I work on it (2 mins at most)
  - personal project, large enough to be interesting, small enough that one person can do it
  - test bed I use for programming experiments: PHP -> Python -> Go, plus caching, algorithms and such
  - run through the history of how it evolved: sphinx -> manticore -> custom search called caisson
- technical explanation of how bloom filters work (3 mins at most)
  - hashing
  - examples using the classic "big yellow dog"
- technical explanation of how the core search works (15-19 mins)
  - simple search with bloom filters, and the problems with it
  - lower memory access with rotated bit vectors (visual display of how this actually works and improves performance, plus ideas to improve things)
  - bitwise operations and performance
  - brief look at how bing/bitfunnel extends the idea (although not implemented as yet; this is the first thing to drop, but worth mentioning as it is interesting)
  - sharding based on document length to avoid overfilling and wasting memory
  - how searchcode does this: sharding, splitting, etc.
  - how ranking works (as it is a non-positional index)
  - term hashing to drive down the false-positive rate
  - false positives, and how trigrams make this worse
- examples of how this works (may the demo gods be kind) (1 min)
I have a great deal I could run through on this, but have focused on the details above, since they are the most relevant to this talk and the index is the “sexy” part of any search engine. I could talk for a long time just about the core implementation, but it’s useful to know some of the issues with these approaches, such as false positives and ranking. I do have another blog post coming that goes through how the other parts work, but that’s outside the scope of bloom filters.
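For readers who want a peek at the core idea ahead of the talk, here is a toy sketch of mine (not the searchcode implementation): hash each trigram of a document into a bloom filter, then test a query’s trigrams against it, accepting that false positives are possible:
    package main

    import (
        "fmt"
        "hash/fnv"
    )

    const filterBits = 2048

    type bloom [filterBits / 64]uint64

    // positions derives two bit positions per term from a single FNV hash.
    func positions(term string) (uint, uint) {
        h := fnv.New64a()
        h.Write([]byte(term))
        sum := h.Sum64()
        return uint(sum % filterBits), uint((sum >> 32) % filterBits)
    }

    func (b *bloom) add(term string) {
        p1, p2 := positions(term)
        b[p1/64] |= 1 << (p1 % 64)
        b[p2/64] |= 1 << (p2 % 64)
    }

    func (b *bloom) mightContain(term string) bool {
        p1, p2 := positions(term)
        return b[p1/64]&(1<<(p1%64)) != 0 && b[p2/64]&(1<<(p2%64)) != 0
    }

    // trigrams splits text into overlapping 3-byte terms, e.g. from "big yellow dog".
    func trigrams(s string) []string {
        var out []string
        for i := 0; i+3 <= len(s); i++ {
            out = append(out, s[i:i+3])
        }
        return out
    }

    func main() {
        var doc bloom
        for _, t := range trigrams("big yellow dog") {
            doc.add(t)
        }
        match := true
        for _, t := range trigrams("yellow") {
            if !doc.mightContain(t) {
                match = false // definitely not present
                break
            }
        }
        fmt.Println("possible match (false positives allowed):", match)
    }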
Notes
Technical requirements… none. I guess an internet connection? Can do it without as can run things locally if required as I have the ability to run it totally offline.
Why am I the best person for this talk? Not only am I the author of http://searchcode.com/, I also wrote a lot about this here: https://boyter.org/posts/how-i-built-my-own-index-for-searchcode/ and, as far as I know, this sort of search engine, where bloom filters and trigrams are fused together, is unique. I know of no other case where this has been done, and I have looked. There are other code search engines, such as sourcegraph, debian code search, and google code search, all of which do not use bloom filters, as they use traditional posting lists. If nothing else this approach is unique.
Besides the index is the cool part!
^ back to index
66. Processing 40 TB of code from ~10 million projects with a dedicated server and Go for $100
Abstract
The problem? We have some software that can tell you how complex a project is. However in order to gauge what that number means we need to compare it to other projects. But which projects? How about all of them!
Description
I have a lot of this content lying around from various blog posts, so nothing needs to be created here other than to get the content into a nice visible thing.
https://boyter.org/posts/an-informal-survey-of-10-million-github-bitbucket-gitlab-projects/
Learn why using AWS was not the right approach here, and how we ended up processing millions of repositories to find the answers to such questions as:
YAML vs YML?
Which group of developers have the biggest potty mouth?
How many files does an average repository have?
How many lines of code are in a typical file per language?
How many repositories appear to be missing a license? Why does that even matter?!
And more!
Notes
No technical requirements beyond a machine to present from. As for am I the best person to speak about it, well I am the original and main author of the tool in question and wrote a post about this previously as well as have presented this talk before at DataEngBytes Sydney, so it should be smoother this time around.
^ back to index
67. Abusing Go, AWS Lambda and bloom filters to make a true Australian serverless search engine
Abstract
Everyone knows search engines require state, making AWS Lambda not ideal for building one… But there is a saying in computing: never do at runtime what you can do at compile time. Let’s abuse (and I really mean heavily abuse) this by building a true serverless search engine in AWS Lambda!
Description
I have a lot of this content lying around from blog posts and as such don’t need anything new here https://boyter.org/posts/abusing-aws-to-make-a-search-engine/
I would want to start by testing the theory, then walking through the idea, and then showcasing in order
- Implementing an index using bloom filters which we can embed into the binary (see the sketch below)
- Early termination logic
- Crawling the data
- Ranking algorithms
- Adult content detection
- Snippet extraction (this could be its own talk honestly… so much)
- Indexing
- Architecture in AWS
- Results
Portions of it could easily be cut, such as the adult content detection, but you never know, it might be worth mentioning.
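To make the “compile time, not runtime” trick concrete, here is a minimal sketch of my own (not code from the blog post) that bakes a pre-built index into the binary with go:embed; it assumes an index.json produced offline by an indexer, and a real Lambda would wrap this in the usual handler:
    package main

    import (
        _ "embed"
        "encoding/json"
        "fmt"
        "log"
    )

    // index.json is produced offline by the indexer and compiled into the binary,
    // so the function starts with its entire index already in memory.
    //go:embed index.json
    var rawIndex []byte

    // A hypothetical shape: term -> list of document IDs.
    var index map[string][]int

    func main() {
        if err := json.Unmarshal(rawIndex, &index); err != nil {
            log.Fatal(err)
        }
        fmt.Println("documents containing 'go':", index["go"])
    }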
Notes
No technical requirements other than a machine to host from. I am the author of the blog post this came from and have written about this topic a fair bit on boyter.org where I talk about Go and all of the bits that go into making a search engine. See below for everything that went into this
^ back to index
68. Channeling your Inner Tech Blogger
Abstract
Writing a tech blog is a great way to share knowledge, deepen understanding, and build a professional brand. In this workshop, we’ll explore the benefits, overcome barriers, brainstorm topics, and start drafting posts. My goal is to inspire you to continue your journey to become a tech blogger.
Description
There is a saying that the best way to learn something is to teach it to someone else. When we write a tech blog, we share our knowledge with others and deepen our understanding of the subject. We also improve other skills along the way and build our professional brand and signature.
In this workshop, we will discuss the benefits of writing a blog and try to remove all the barriers that have stopped us so far from writing a blog. Together we will think about topics to write about and start our draft. Hopefully, after this workshop, you will be inspired to continue your journey to become a tech blogger and publish a tech blog.
Notes
I can give it as a talk as well, or as a short workshop (~2 hours).
^ back to index
69. Applied Psychology: Psychology-based UI improvements
Abstract
As developers, we can take a more proactive role in the development process by providing our inputs and suggesting improvement ideas. In this talk, I will share with you knowledge from the field of cognitive psychology that you can apply to UI designs to improve them.
Description
As frontend developers, we implement given UI designs. But wouldn’t it be great if we could provide input and suggest psychology-based improvement ideas? And thus, making the product better and taking a more proactive role in the development process. In this talk, I will share with you knowledge from the field of cognitive psychology that you can apply to UI designs to improve them.
Notes
I have over a decade of experience in the industry as a software developer, and for the past few years I have been studying psychology at the Open University. During my studies, I found that many findings in the field of psychology can be applied in designing UI systems. After all, these systems are used by humans. The better our understanding of human behavior is, the better systems we can design.
In this talk, I will start with a brief introduction to Cognitive Psychology. Afterward, I will focus on three mental processes. I will show how principles and laws found in these mental processes’ study can be applied to improve the UI and the UX.
*This talk is a lightning talk, around 15 minutes.
^ back to index
70. Continuous Improvements of The Code Review Process
Abstract
As a senior software engineer, I’ve seen & experienced various code review processes. In this talk, I’ll share the best practices I’ve learned for conducting an effective code review. These practices will improve your code review process and your team’s productivity, delivery, & quality of features.
Description
Different teams have different practices when it comes to code review. As a senior software engineer, working in the industry for over a decade, I have seen and experienced various code review processes. In this talk, I will share the best practices I have learned for conducting an effective code review. These practices will improve your code review process and contribute to your team’s productivity, delivery, and quality of features.
Notes
^ back to index
71. How Regex Works: The Secret Sauce Behind Pattern Matching
Abstract
Unlock the secrets of regex in a fun, easy-to-understand talk! We’ll demystify how regex engines work using Non-deterministic Finite Automaton (NFA) and build a simple regex matcher together. Perfect for beginners and seasoned coders alike, join us to level up your pattern matching skills!
Description
Ever wondered how regular expressions pull off their magic tricks? Let’s demystify the regex engine together! In this fun talk, we’ll peek under the hood and see how it all works, using something called Non-deterministic Finite Automaton (NFA), or as some like to call it, state machines. Don’t worry, we’ll keep things simple and easy to understand.
The regex engine is like a super-powered detective that’s really good at finding specific patterns in text. But did you know that behind the scenes, it’s using something we all study in universities, called Non-deterministic Finite Automaton (NFA)? It’s a bit like a map that helps the regex engine understand and match patterns efficiently.
Imagine trying to solve a really big puzzle. NFA helps by breaking down complex regex patterns into smaller pieces, or states. Then, all we need to do is follow the map, moving from one state to another until we find our match. It’s like having a guide to lead us through the maze of text, making pattern matching a breeze.
During our adventure, we’ll take a closer look at how the regex engine is put together. We’ll explore concepts like backtracking (when it needs to go back and try a different route), greedy quantifiers (how it decides how much to match), and character classes (the different types of characters it’s looking for). And remember, we’ll explain everything in plain language, so you won’t get lost in technical jargon.
But we won’t stop at just talking about theory. We’re going to dive deeper and take you through the process of building a simple regex matcher right in the presentation. By walking through this hands-on exercise, you’ll get a firsthand look at how the regex engine works in action. By the end of our journey, you’ll have not only a better understanding of regex but also the practical skills and confidence to use it effectively in your own projects.
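One possible shape for such a simple matcher (this is my sketch, in the spirit of the classic Kernighan/Pike matcher, not necessarily what the talk builds) supporting only ^, $, . and * is:
    package main

    import "fmt"

    // match reports whether the pattern is found anywhere in the text.
    func match(pattern, text string) bool {
        if len(pattern) > 0 && pattern[0] == '^' {
            return matchHere(pattern[1:], text)
        }
        for {
            if matchHere(pattern, text) {
                return true
            }
            if len(text) == 0 {
                return false
            }
            text = text[1:]
        }
    }

    // matchHere matches the pattern at the beginning of the text.
    func matchHere(pattern, text string) bool {
        switch {
        case len(pattern) == 0:
            return true
        case len(pattern) >= 2 && pattern[1] == '*':
            return matchStar(pattern[0], pattern[2:], text)
        case pattern == "$":
            return len(text) == 0
        case len(text) > 0 && (pattern[0] == '.' || pattern[0] == text[0]):
            return matchHere(pattern[1:], text[1:])
        }
        return false
    }

    // matchStar matches zero or more occurrences of c, then the rest of the pattern.
    func matchStar(c byte, pattern, text string) bool {
        for {
            if matchHere(pattern, text) {
                return true
            }
            if len(text) == 0 || (text[0] != c && c != '.') {
                return false
            }
            text = text[1:]
        }
    }

    func main() {
        fmt.Println(match("^b.g*", "big yellow dog")) // true
        fmt.Println(match("cat$", "big yellow dog"))  // false
    }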
Whether you’re a beginner or a seasoned coder, this talk will help you unlock the secrets of regex and level up your pattern matching skills. So, join us for a fun and enlightening exploration into the world of regex!
Notes
Here is a version of the presentation from WeAreDevelopers https://docs.google.com/presentation/d/1N_u0NvKW6HZlHR4gFGTqtSJ7aB0YxGiT/edit
And recording https://www.wearedevelopers.com/en/videos/1212/how-regex-works-the-secret-sauce-behind-pattern-matching
^ back to index
72. Usage of default Golang templating in complex report generation.
Abstract
Sharing experience in solving a complex problem of generating reports:
- abstract reports from data sources
- prepare MD templates with placeholders pointing to datasource + query + how to present
- execute queries to different sources
- process data: draw graphs, insert raw data
- generate PDF report
Description
At Delivery Hero we have a Reliability Manifesto, and section R-9 states that all services should be load tested, should be able to handle at least a 4x load ramp-up, and should be tested on a regular basis.
During Load Testing we generate a huge amount of monitoring data in different systems: DataDog APM, Prometheus and AWS Cloudwatch.
All this data should be aggregated in one presentable report.
The problem we want to solve:
- How to abstract reports from different metrics sources and different data they provide.
- How to give a flexible and simple instrument to generate templates automatically.
- How to make the lives of our engineers easier.
For instance, in my tribe we have dozens of services, in Logistics we have hundreds of services, and in Delivery Hero overall we have thousands of services.
For each of them we need to run load tests on a regular basis.
And for each of them we need to generate reports.
The problem of running tests automatically is easy to solve. But how to solve the problem with reporting?
We have an answer, because we designed a tool which helps us to abstract reports from metrics data sources and at the same time keep flexibility of calling different sources with different queries.
You provide a template in the form of a Go template:
{{ $ddResult := queryDD .StartTime .EndTime "query" }}
The maximum value of the metric is {{ findMax $ddResult }}
Or
{{ $promResult := queryProm .StartTime .EndTime "query" }}
{{ drawProm $promResult }}
And as the output you receive a generated PDF with either text “The maximum value of the metric is XX” or drawn graph from Prometheus.
All these queryDD, findMax, queryProm and drawProm are predefined functions registered through a template.FuncMap, and they give your report great flexibility.
I will be talking about how to implement these functions, how to call different sources, process data and render outputs.
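For instance, registering such helpers is just a template.FuncMap on top of the standard library; here is a stripped-down sketch of mine, with a fake query function standing in for the real DataDog/Prometheus clients:
    package main

    import (
        "os"
        "text/template"
        "time"
    )

    // queryProm is a fake stand-in for a real Prometheus/DataDog query client.
    func queryProm(start, end time.Time, query string) []float64 {
        return []float64{1.2, 3.4, 2.8}
    }

    func findMax(series []float64) float64 {
        max := series[0]
        for _, v := range series[1:] {
            if v > max {
                max = v
            }
        }
        return max
    }

    func main() {
        const report = `{{ $res := queryProm .StartTime .EndTime "http_requests_total" }}
    The maximum value of the metric is {{ findMax $res }}.
    `
        tmpl := template.Must(template.New("report").Funcs(template.FuncMap{
            "queryProm": queryProm,
            "findMax":   findMax,
        }).Parse(report))

        data := struct{ StartTime, EndTime time.Time }{
            StartTime: time.Now().Add(-time.Hour),
            EndTime:   time.Now(),
        }
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }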
Notes
FYI, the actual presentation for the talk https://docs.google.com/presentation/d/1A_3p1oAbFw02wtd9RywMuKddjA6Baz2xy_1S_cPkFX0/edit
I want to present how easy it is to implement such a tool by using the standard Go "text/template" package and some external tools for querying DataDog, Prometheus, and CloudWatch and for rendering a PDF from a Markdown document.
We did it in Logistics last year, and we are very satisfied with how it has simplified our lives around load testing.
^ back to index
73. Making non-Go tools accessible to Go developers using WebAssembly
Abstract
Running Go developer tools is trivial, using go run. But, there are still many useful tools out there in other languages. This talk will share how we can use WebAssembly to bring tools like protoc, yamllint, and more to go run. Let’s cut the prerequisites section down to one bullet - Install Go.
Description
With Go’s ease of cross-compilation and simplicity of otherwise using go run to automatically compile and run, we are seeing many new developer tools being written in Go. However, there are still plenty of necessary tools written in other languages, such as protoc (C++) or sql-formatter (TypeScript). This is even more so in codebases that are mostly, but not completely, Go, for example when a frontend is written in TypeScript and an ML backend is written in Python. gRPC users need to generate stubs in all of these languages, juggling between OS package managers, NPM, and more. Notably, reliance on OS package managers makes it very difficult for a build to be reproducible across developer machines.
Luckily, we now have a cross-platform bytecode format, WebAssembly. If we compile C++ to WebAssembly, or compile a JS runtime such as QuickJS to WebAssembly, we can use the pure Go WebAssembly runtime, wazero, to bring tools written in these other languages to Go.
This talk will begin by explaining the concept of reproducible builds, why they are helpful (to reduce mysterious debugging), and how they can be achieved when you are able to stick completely to Go tools.
Then it will go through several tools that otherwise are tricky to install, not being in Go, but made trivial by packaging WebAssembly versions.
- protoc / protoc-gen-grpc - C++ binaries, with the latter not even providing downloadable precompiled binaries to try to wire into a build
- protoc-gen-es - a NodeJS binary, which normally requires using NPM to download the packages and dependencies to run; and more protoc plugins written in languages including Python, Rust, Swift, and Zig. When using Buf to orchestrate a protobuf build and using versions of the above packaged as Go binaries via Wasm, we can generate to all of C++, Python, Ruby, TypeScript, Go, and more, all with go run invocations, no developer tool installation required at all.
- sql-formatter - a NodeJS binary, which normally requires using NPM to download the packages and dependencies to run. The NodeJS binary can continue to be used by IDEs, such as with the VSCode plugin, while CI or CLI invocation can use the Go distribution to produce the exact same result - it’s the same code
- yamllint / prettier - a Python binary to lint yaml files and a NodeJS binary to format various files such as yaml and markdown. Go projects will still often have these other file formats and we can lint and format them in a way that matches IDE integrations
- sqlc - a Go binary that normally uses cgo to access the PostgreSQL parser. This can cause slow build times and log spam on MacOS updates. Using the parser compiled to Wasm, we bring sqlc to go run with no gotchas
While go run is mentioned frequently for installation-free invocation, go install naturally works fine too for those who prefer it. And because Go binaries can be cross-compiled trivially, all of these tools are available with precompiled binaries for major platforms, all built from a normal Linux GitHub CI runner. There is no need for the complicated runner setup that would be required if compiling the C++ sources directly for these platforms.
By the end of the presentation, the audience will understand reproducible builds and why they are important. They’ll see many tools that were out of reach behind bespoke package managers ported to Go using WebAssembly and be able to use these tools right away in their builds - after all, they already have Go installed.
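As a rough sketch of the underlying mechanism (not the exact code of any of the ports above), running a WASI-compiled tool from Go with wazero looks approximately like this; the wazero calls are written from memory and worth double-checking against its documentation, and tool.wasm stands in for any CLI compiled to WebAssembly:
    package main

    import (
        "context"
        _ "embed"
        "log"
        "os"

        "github.com/tetratelabs/wazero"
        "github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
    )

    // tool.wasm is a CLI tool compiled to WebAssembly/WASI and shipped inside the Go binary.
    //go:embed tool.wasm
    var toolWasm []byte

    func main() {
        ctx := context.Background()
        r := wazero.NewRuntime(ctx)
        defer r.Close(ctx)

        // Provide the WASI host functions the tool expects (stdout, args, and so on).
        wasi_snapshot_preview1.MustInstantiate(ctx, r)

        cfg := wazero.NewModuleConfig().
            WithStdout(os.Stdout).
            WithStderr(os.Stderr).
            WithArgs("tool", "--help")

        // Compiling and running the module executes the tool's main entrypoint.
        if _, err := r.InstantiateWithConfig(ctx, toolWasm, cfg); err != nil {
            log.Fatal(err)
        }
    }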
Notes
This talk will present topics on improving the quality of the builds / CI of Golang codebases, so familiarity with real-world Go projects would help understand the motivation. Actually using the tools is simple and does not require special technical knowledge, though getting deep with Wasm is required to be able to port another tool using the technique.
As far as I know, I’m the only one using WebAssembly to make using these tools even easier than existing approaches like asdf and am qualified to speak on the topic, having been using WebAssembly with wazero and Go for some time porting various projects.
^ back to index
74. Software Tomorrow
Abstract
Join us for a journey through the past, present and future of software development. We’ll share insights from producing the largest tech web series in Israel about how developers work today, covering everything from testing to teamwork, and then we’ll fast forward to the future. Adventure awaits!
Description
Introduction
Join us for a journey through the present and future of software development 🚀. We’ll share insights from producing the largest tech web series in Israel about how developers work today, covering everything from testing to teamwork. Then, we’ll fast forward to the future. Find out why technical design is becoming more important and discover how AI agents are changing the way we code. We’ll show you how AI can make coding easier, giving engineers more time to focus on solving problems.
Takeaways
- Understand why technical design and architecture are becoming paramount in the evolving software development landscape, driven by advancements in AI Agents.
- Learn strategies to leverage AI Agents and technical specifications effectively.
- Discover the workflow of Natural Language Coding, enabling you and your organization to adapt to the changing dynamics of software engineering.
Relevancy
- Cutting-edge topic: AI Agents are at the forefront of industry discussions. An audience consisting of tech leaders, managers, and senior developers, would be interested in staying updated on the latest trends and advancements in the field.
- Guidance and Strategy: The talk provides insights and strategies that can help leaders and developers adopt new workflows, make informed decisions and create better software systems.
Notes
I delivered this talk at international conferences, and it always raises great interest. I’d love to share it with your community as well. My talks are structured to be engaging and informative, with a splash of humor, and most importantly, practical. I am very passionate about public speaking, and you can see my positive reviews from past conferences here: https://www.linkedin.com/services/page/714a943231b2707a00/ and my personal speaker page: https://www.hajongler.com/
^ back to index
75. The Rise of AI Agents
Abstract
25 years after Agent Smith coined “Never send a human to do a machine’s job”, this futuristic idea seems closer than ever. Join us as we discover how AI agents are becoming the “jack-of-all-trades” in the tech world, revolutionizing the way we work and interact with technology.
Description
Not so long ago, AI was known for its ability to excel at specific tasks (“cat vs dog”), earning it the reputation of a specialist. However, with the advent of Large Language Models like OpenAI’s GPT, AI has transformed into a generalist, capable of writing naturally, answering questions, and even generating code. Despite facing limitations such as hallucinations, knowledge cutoffs, and struggles with basic math, these AI systems have opened up a world of possibilities.
Today, the next frontier in AI development are AI agents. By combining the power of LLMs with advanced capabilities such as planning, reflection, and tool use, AI agents are becoming the “jack-of-all-trades” in the tech world. Drawing inspiration from human cognition, AI Agents can accomplish complex goals, thinking and acting much like us humans. Join us as we explore agentic workflows and discuss the role of developers in guiding AI agents and shaping the future of AI. We’ll get familiar with the world of AI agents, and understand their potential to revolutionize the way we work and interact with technology, ultimately allowing humans (and developers 😉) to focus on the bigger picture.
Notes
I delivered this talk at international conferences, and it always raises great interest. I’d love to share it with your community as well. My talks are structured to be engaging and informative, with a splash of humor, and most importantly, practical. I am very passionate about public speaking, and you can see my positive reviews from past conferences here: https://www.linkedin.com/services/page/714a943231b2707a00/ and my personal speaker page: https://www.hajongler.com/
^ back to index
76. 42! (A Developers Guide to the Future)
Abstract
Embark on a cosmic journey through AI’s impact on software development. Learn how your skills become superpowers in this new era. Discover opportunities, navigate challenges, and find out why “42” is the answer to thriving in AI-driven web development.
Description
In a universe where AI is rapidly evolving, developers might feel like they’re hitchhiking through a bewildering galaxy of new technologies. But fear not! In “42! (A Developers Guide to the Future)” Jorrik takes you on a hilarious and insightful journey through the challenges and opportunities that await developers in an AI-driven world.
Drawing inspiration from the iconic movie “Hitchhiker’s Guide to the Galaxy,” this talk explores the unique skills that make developers the perfect navigators of our AI-infused future. From mastering the art of prompt engineering to leveraging their innate understanding of complex systems, attendees will discover how their current abilities translate into superpowers in the age of AI. Jorrik will delve into potential pitfalls, emerging opportunities, and the evolving relationship between human developers and AI tools.
By the end of this cosmic adventure, you’ll have a roadmap for thriving in the AI era, complete with practical tips, interactive exercises, and enough nerdy humor to make even Marvin the Paranoid Android crack a smile. Whether you’re a seasoned developer or just starting your journey, this talk will equip you with the knowledge (and towel) you need to confidently hitchhike through future tech. Don’t panic, the answer to a dev’s future in AI is 42, and all will be revealed!
Notes
This talk humorously explores how developers can thrive in an AI-driven future, using “Hitchhiker’s Guide to the Galaxy” references. It covers key skills, potential challenges, and emerging opportunities in AI-frontend integration. The session includes interactive elements and practical tips for leveraging AI in web development. Suitable for all developers, it offers a unique, entertaining perspective on adapting to technological changes in the industry.
^ back to index
77. Exploring Domain Driven Design in Go
Abstract
Learn how to apply DDD to Go applications without compromising its unique idioms and language features. This talk covers tactical and strategic patterns with real-world examples and best practices, making it ideal for DDD beginners and those who previously struggled to use it in Go.
Description
Notes
This talk is not going to be about reusing the concepts from the DDD literature applied in other idioms, ending up with an object-oriented application written in Go.
Instead, I’ll show how to apply the most popular tactical patterns to shape idiomatic code around the domain invariants, and how to use strategic patterns to let a domain analysis structure an application.
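To give a sense of what shaping idiomatic code around domain invariants can look like, here is a small illustrative value object of my own (not an example from the talk):
    package money

    import (
        "errors"
        "fmt"
    )

    // Money is a value object: immutable, compared by value, and only
    // constructable through NewMoney, which enforces the domain invariants.
    type Money struct {
        amountCents int64
        currency    string
    }

    var ErrNegativeAmount = errors.New("money: amount must not be negative")

    func NewMoney(amountCents int64, currency string) (Money, error) {
        if amountCents < 0 {
            return Money{}, ErrNegativeAmount
        }
        if len(currency) != 3 {
            return Money{}, fmt.Errorf("money: invalid currency %q", currency)
        }
        return Money{amountCents: amountCents, currency: currency}, nil
    }

    // Add returns a new value instead of mutating the receiver.
    func (m Money) Add(other Money) (Money, error) {
        if m.currency != other.currency {
            return Money{}, fmt.Errorf("money: cannot add %s to %s", other.currency, m.currency)
        }
        return Money{amountCents: m.amountCents + other.amountCents, currency: m.currency}, nil
    }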
This talk is the outcome of years of experience and blogs I wrote:
^ back to index
78. The internals of the context package
Abstract
In this talk, we’ll explore the internals of the context package, covering the implementation of context types, the data structures used, cancellations, timeouts, and deadlines to enable you to use it effectively in your applications and avoid common pitfalls and bad practices.
Description
Notes
This will be an extended version of a talk I already gave a few times internally in companies (https://www.damianopetrungaro.com/talks/2020-context-package-in-go/) later on summarized as a blog post (https://www.damianopetrungaro.com/posts/go-internal-context-package/)
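For reference, the externally visible behavior the talk unpacks is the familiar one; a minimal standard-library example of my own (not material from the talk):
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func main() {
        // The derived context is cancelled either by the timeout or by calling cancel().
        ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
        defer cancel() // always release the timer and any derived resources

        select {
        case <-time.After(time.Second):
            fmt.Println("work finished")
        case <-ctx.Done():
            // ctx.Err() reports why: context.DeadlineExceeded here.
            fmt.Println("gave up:", ctx.Err())
        }
    }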
^ back to index
79. Beyond the Basics: Elevate Your Go Testing Game
Abstract
In this talk, we will live-code our way through a variety of best practices around testing, including some advanced strategies:
- Database integration testing with test-containers.
- The art of HTTP/gRPC with recorders and replayers.
- BDD with Gherkin as your testing language.
Description
Notes
^ back to index
80. Evil Tech: How Devs Became Villains
Abstract
Once seen as heroes, developers now face scrutiny for creating data-gathering apps, facial recognition, and GPS tracking. They grapple with the choice between malevolence and heroism. This talk explores the ethical complexities and fine line between progress and principles with dark humor.
Description
Once upon a time, developers were the unsung heroes of our world. The stereotypical developer, with glasses perched on the nose and an innate talent for science, even inspired the alter egos of superheroes.
However, today, software engineers often find themselves under scrutiny for their roles in creating data-gathering apps, facial recognition software in CCTV systems, and the constant tracking of citizens through GPS, among other issues.
From being heroes to becoming modern-day Dr. Frankensteins, tech creators face an unenviable dilemma: to embrace malevolence or strive for heroism.
During this talk, we will delve deep into the complex relationship between technology and ethics, and explore how developers navigate the fine line between progress and principles.
This is our villain’s origin story, told with a touch of dark humor.
Notes
^ back to index
81. The Art Of Scalable Intelligence: Distributed Machine Learning with Go
Abstract
Have you ever wondered about the unique challenges that arise when scaling machine learning algorithms across distributed systems? Well, in this talk, let’s leverage Go’s concurrency model to design efficient and fault-tolerant architectures that enable us to tackle large-scale ML problems with ease.
Description
In this deep dive talk, we will explore the world of distributed machine learning architectures using Go, a powerful and efficient programming language. Scaling machine learning algorithms across distributed systems presents unique challenges, and Go’s concurrency model and robust ecosystem make it an ideal choice for building scalable ML architectures.
Throughout the session, we will unravel the intricacies of distributed machine learning in Go, uncovering fascinating insights and thought-provoking ideas. Finally, we will examine various distributed ML architectures, including data parallelism, model parallelism, and hybrid approaches, showcasing how Go’s lightweight goroutines and channels enable seamless orchestration of distributed computations.
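As a toy, single-process analogy of data parallelism (my sketch, not code from the talk; a real distributed setup would add networking and model synchronization), goroutines and channels make the fan-out/fan-in shape very direct:
    package main

    import (
        "fmt"
        "sync"
    )

    // partialSum computes this shard's contribution; in real data parallelism each
    // worker would compute gradients on its own slice of the training data.
    func partialSum(shard []float64, out chan<- float64, wg *sync.WaitGroup) {
        defer wg.Done()
        var s float64
        for _, v := range shard {
            s += v
        }
        out <- s
    }

    func main() {
        data := make([]float64, 1000)
        for i := range data {
            data[i] = float64(i)
        }

        const workers = 4
        out := make(chan float64, workers)
        var wg sync.WaitGroup
        chunk := len(data) / workers
        for w := 0; w < workers; w++ {
            lo, hi := w*chunk, (w+1)*chunk
            if w == workers-1 {
                hi = len(data)
            }
            wg.Add(1)
            go partialSum(data[lo:hi], out, &wg)
        }
        wg.Wait()
        close(out)

        var total float64
        for s := range out {
            total += s
        }
        fmt.Println("mean:", total/float64(len(data)))
    }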
Key takeaways from this talk include:
- Insight into the unique challenges and opportunities of distributed machine learning architectures in Go.
- Understanding of key distributed ML techniques such as data parallelism, model parallelism, and hybrid approaches.
- Knowledge of scalable ML architectures, including data sharding, model synchronization, and fault tolerance mechanisms.
- Familiarity with the vibrant Go ecosystem for distributed machine learning, including libraries and frameworks that simplify development and deployment.
- Ability to leverage Go’s concurrency model and robust ecosystem to build scalable and efficient ML systems in distributed environments.
Notes
^ back to index
82. Test like a ninja with Go
Abstract
I aim to present you with the techniques and tools you might use to build reliable tests. We’ll use Go, which provides a great testing experience. I’ll show you overlooked techniques such as benchmarking, fuzzing, etc. Plus, I’ll introduce you to popular libraries and packages used to test Go code.
Description
If you want to test your Go source code like a master, don’t miss this session! We’ll cover a wide variety of topics that may give you a boost in your developer journey.
The session starts with an introduction to testing. Why are tests so relevant? How should you write your source code to make it testable? We also cover the different kinds of tests and which one to choose.
Why Go? We’ll cover the factors behind the choice and what differentiates Go from other programming languages regarding testing. There will also be room to talk about the Go Test Runner.
Then, we’ll move into a more practical part. We’ll see the different third-party packages we can use in our test code. We’ll briefly touch on benefits and see a bunch of use cases.
We’ll also cover fundamental concepts such as test suites and mocks.
We’ll look at other testing techniques, such as benchmarking, fuzzing, and example test functions.
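As a taste of one of those overlooked techniques, native fuzzing (built into the toolchain since Go 1.18) takes only a few lines; a minimal illustrative example of my own:
    package strutil

    import (
        "testing"
        "unicode/utf8"
    )

    // Reverse is the function under test (a deliberately naive byte-wise reverse).
    func Reverse(s string) string {
        b := []byte(s)
        for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
            b[i], b[j] = b[j], b[i]
        }
        return string(b)
    }

    func FuzzReverse(f *testing.F) {
        f.Add("big yellow dog") // seed corpus
        f.Fuzz(func(t *testing.T, s string) {
            rr := Reverse(Reverse(s))
            if rr != s {
                t.Errorf("Reverse(Reverse(%q)) = %q", s, rr)
            }
            if utf8.ValidString(s) && !utf8.ValidString(Reverse(s)) {
                // The fuzzer will find this: byte-wise reverse breaks multi-byte runes.
                t.Errorf("Reverse(%q) produced invalid UTF-8", s)
            }
        })
    }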
Finally, we’ll focus on integration tests: how to write them and use the Testcontainers technology to smoothen the process.
At the end of the session, I hope you’ll be aware of many new concepts for your new, fresh, efficient tests.
Notes
^ back to index
83. Swiss knife for Go debugging with VSCode
Abstract
Being able to debug your code in the IDE should be an easy process. We’ll take a look at how to debug several kinds of projects by only tweaking settings in the “launch.json” file. You’ll discover how many options and customizations you can apply to this file to leverage your debugging experience.
Description
One of the most important things when we’re writing code is the ability to debug it. Many IDEs have an integrated debugger that can smoothen our coding experience. The debugger for the Go source code is called Delve. It’s tightly integrated with VSCode and the Go extension. As you might know, the debugger allows us to step through our code, focus on specific sections that may deserve more attention, inspect variables’ values, stack traces, etc.
Sometimes, debugging turns into a hassle. The process that is supposed to help us becomes an insurmountable obstacle. Sometimes we abandon debugging altogether or fall back to logging directly in the code. Both options end up decreasing our productivity as developers.
Thus, this talk aims to provide a working solution to debug Go code in VSCode. I chose this IDE since it’s free, highly customizable, performant, and my favorite!
Since we can build different projects, I try to provide you with a working solution for each. The scenarios you’re likely to face are (list not exhaustive):
- Debug unit tests
- Debug integration tests
- Debug a package
- Attach to an already running process (both locally and remotely)
- Debug multiple microservices with the compound configuration
To overcome these challenges, you need to tweak settings in the launch.json file within the hidden .vscode folder. This file lists what are known as profiles, which are selectable in the “Run and Debug” view. Within this file, we have the option to set different values, such as:
- environment variables or env file
- whether to show global variables’ values
- which console to use
- and many others
Throughout this talk, I share hints on Delve and how to make the most of it. I also touch on some overlooked aspects of VSCode that can make a huge difference in your debugging experience.
Finally, I’ll give you some mind-blowing tips and tricks on debugging. If you’d like to improve your debugging skills, please don’t miss my session!
Notes
^ back to index
You're all done! ^ back to index