Sam Kroonenburg, A Cloud Guru
Pete Sbarski, A Cloud Guru
Simon Wardley, Leading Edge Forum
Deng Xiaoping once described managing the economy as crossing the river by feeling the stones – in other words, have a direction but be adaptive. But in a world of constant change, how do you determine the right thing to do? In this session we will discuss why serverless is the future, and why now.
Jared Short, Trek10
Sean Mare, Gemological Institute of America
The Gemological Institute of America (GIA), the world’s most trusted independent authority for diamond grading, works with objects as old as dirt. Their infrastructure, however, is anything but. GIA and Trek10 will talk about their real-world experiences taking a multi-region, highly available GraphQL API from conception to production. We’ll talk about cloud-native, event-driven design and the challenges faced along the way.
Mark Nunnikhoven, Trend Micro
Cybersecurity is a topic that comes up regularly as something you *have* to do…or should do…or are forced to look at by your security team. But why? There have been reports of serverless threats. Rumours of traditional security issues that should keep you up at night. Is any of it real?
When new technologies are adopted, cybersecurity implementations follow the same pattern. New innovations allow developers to move faster, operations gets onboard, and then security swoops in armed with a mountain of fear, uncertainty, and doubt.
We can do better.
Serverless is an unprecedented opportunity to modernize our approach to security. In this talk, we’ll look at how we can accomplish this and build strong solutions without the FUD.
Joseph Emison, Branch
Branch is a company that sells its own home, auto, umbrella, and renter's insurance products, and runs on a completely serverless infrastructure. This talk walks through Branch's architecture, how Branch has remained compliant with regulatory and security requirements, the benefits Branch has seen from being serverless, and overall lessons learned along the way.
Peter Sbarski, A Cloud Guru
Nicki Klein, AWS
There are big computational problems that may seem to be a bad fit for Serverless. After all, it’s easy to spin up an EC2 instance and put it to a task like converting video files or crunching millions of numbers. But perhaps we need to think about such problems in a different way. What if you could decompose a problem into lots of tiny parts, process those parts in parallel (using the awesome power of Lambda), and then compose/reduce/aggregate an answer? You could arrive at an answer quicker, or even cheaper, than the traditional way. Better yet, you can stay Serverless and not have to worry about managing complex infrastructure.

In this talk we are going to look at the principles of Map/Reduce and how they apply to Serverless architectures. We’ll show suggested architectures and patterns, showcase examples, and discuss how to approach big problems. In the end, the audience will walk away with a view of how to solve complex problems using the power of decomposition and parallelism available to them with functions. This is an advanced talk that assumes the audience is already familiar with the fundamentals of Serverless technologies.
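The decompose/map/reduce pattern the abstract describes can be sketched in a few lines. This is an illustrative example only: here a thread pool stands in for a fan-out of Lambda invocations, and the `mapper`, `solve`, and `chunk_size` names are hypothetical, not from the talk.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def mapper(chunk):
    # In a serverless deployment, each call here would be one Lambda
    # invocation; locally, each worker just sums its slice of numbers.
    return sum(chunk)

def solve(numbers, chunk_size=4):
    # Decompose: split the big problem into small, independent parts.
    chunks = [numbers[i:i + chunk_size]
              for i in range(0, len(numbers), chunk_size)]
    # Map: process every part in parallel (stand-in for fanned-out Lambdas).
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(mapper, chunks))
    # Reduce: aggregate the partial results into the final answer.
    return reduce(lambda a, b: a + b, partials)

print(solve(list(range(1, 101))))  # → 5050
```

Because the parts are independent, the fan-out width (and therefore the wall-clock time) is limited only by the platform's concurrency limits, which is what makes Lambda attractive for this shape of problem.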
Paul Chin Jr, Cloudreach
Web Socket Revival with Nicolas Cage & API Gateway
Tanusree McCabe, Capital One
4 Serverless Myths to Understand Before Getting Started with AWS
MJ Ramachandran, ForgePoint Capital
Demonstrating ROI: Convincing the VCs / bosses to invest in Serverless
David Roberts, AWS
How I added serverless monitoring to my Electric Car in less time than it takes to charge
Ben Kehoe, iRobot
Serverless is service-full, which means you've got more complicated cloud infrastructure graphs to configure. Everyone knows YAML is awful, and there are a lot of tools and frameworks that purport to make creating cloud infrastructure easier by letting you do it in your favorite (Turing-complete) programming language. Easy, right? In this talk, I'll argue that representing our infrastructure using declarative languages is fundamentally better, and that despite the many and varied flaws of YAML, using it and cloud infrastructure management services (like CloudFormation) directly has the lowest TCO. Finally, I will lay out what the path towards better declarative infrastructure management services looks like.
Subbu Allamaraju, Expedia Group
How did we get here? Our simple capitalistic pursuit of becoming faster, better and cheaper is driving us through a massive reshuffling of complexity leading to innovations like serverless. We’re now able to ship code faster, run our architectures cheaper, and rapidly adopt robust architecture patterns. Yet the end is not in sight. As we go through this change, we’re uncovering yet more abstractions to create, and yet more complexity to shuffle.
As someone who has led multiple transformational tech initiatives, in this session I want to describe three tech adoption penalties and how to prepare for them. These are the comprehension penalty, which is the tax you pay for architectural agility through microservices and serverless adoption; the survival penalty, which grows the longer your company stays the course; and the value penalty, which is the struggle you must go through to instill change.
In the absence of preparedness, these penalties will slow you down and frustrate you. If you’re struggling to drive adoption of tech like serverless, this talk will help you set things in context and prepare you to lead the change.
Erica Windisch, Iopipe
The availability of Lambda and streaming technologies such as Kinesis makes it simple for anyone to build apps that scale to processing billions of API requests per month. We’ll explore some lessons learned, and the pitfalls and challenges you will face on your serverless ingestion journey.
Some questions we’ll answer:
- How does one cost-effectively operate such an API?
- How do you pick data stores to handle this size of a data stream?
- Where are the scaling pain points?
- Why doesn’t everyone do this?
Tory Adams, Sightbox
At SightBox we built a complex application using ColdFusion. Over time this became hard to maintain and extend, which led us to investigate serverless solutions. With AWS AppSync, Lambda, (Aurora Serverless?), and many other serverless services and tooling provided by Stackery, we learned how to rapidly refactor our application into a modern, efficient GraphQL service. We quickly learned how to design what we call “black box GraphQL”: each team working on our application owns and maintains their section of our GraphQL API.
Another lesson was how to refactor our legacy database onto something more modern (DynamoDB and Aurora Serverless). GraphQL was an easy way for us to straddle multiple databases using the strangler pattern.
Murugappan Chetty, Optum
In this session, we will talk about how Optum is using serverless for healthcare and ITOps use cases.
Brian LeRoux, Begin.com
Building serverless applications for AWS with SAM or raw CloudFormation can be complex and time-consuming. Infrastructure as code has brought us determinism, but at the cost of hard-to-read, hard-to-modify YAML files. Worse, deployments can span into the tens of minutes. When iteration is slow, natural feedback cycles grow longer, and so do feature development and bug resolution.
Architect is a mature framework for delivering serverless web apps that can subvert the normal CloudFormation deployment paths. Iteration speeds are measured in seconds, without trading off the determinism and extensibility of AWS best practices. Join us in this talk and learn about getting 'dirty' with SAM!
Satish Malireddi, T-Mobile USA
Nicholas Criss, T-Mobile USA
Satish will talk about T-Mobile's serverless journey, covering common serverless use-cases at the company, a few best practices and challenges faced with serverless in the enterprise. Then he will present Jazz, T-Mobile’s open source serverless platform. Jazz abstracts away the underlying complexity from developers and ensures enterprise governance/compliance for what is being built on the cloud. Serverless is proven and here to stay, but adopting any new technology is often a challenge for large organizations! Satish and Nicholas will share their experiences driving the adoption of serverless at T-Mobile so that others can learn from their success and mistakes. Finally, Nicholas will talk about how open source is driving T-Mobile’s transformation to a product and technology company.
Donna Malayeri, Google Cloud Platform
When serverless first launched, it was an operational model bundled with a programming model. You provide just your function code, and the platform does all the rest.
Today, serverless has evolved beyond this first iteration. There are serverless containers. AWS Lambda provides flexibility with Lambda Layers. Data warehouse products such as BigQuery allow user-defined functions. There are even “serverless” programming models that run on Kubernetes, a decidedly server-full platform. This leads to the key question: what is the core of serverless? Is it possible to split apart the operational model and the developer experience, and still be serverless? What will we see in future platforms?
This talk will explore these questions. It’s time to go beyond just FaaS and see where serverless can take us.
Sam Kroonenburg, A Cloud Guru
A Cloud Guru has been running a completely Serverless platform since 2015. During that time we have gone from one developer to five full dev teams. In this talk, ACG founder Sam Kroonenburg will share why he chose to build a Serverless platform, and tell the story of our transition from a Serverless monolith to microservices and the things we learned along the way.
David Wilson, Mutual of Enumclaw Insurance
If you’re stuck with unsupported legacy systems it can feel like none of your options are good, and building your way out can be an especially tough sell. Our experience in navigating these challenges may provide a model that others can use to move towards a serverless future.
Sander van de Graaf, Ookla / Downdetector
Sometimes the internet is having a bad day, and Facebook, Instagram or WhatsApp doesn't work. You and countless other people rush to downdetector.com to check if that's really the case. Here's our story about the last Facebook outage, and how we made our service scale.
Serverless is so much faster, easier, and more manageable than previous application methodologies that it should be the only way we build apps. And yet most developers still reach for containers or servers, despite the obvious challenges in capacity planning, scaling, and management those choices represent. In this talk I look at some of the missing pieces in the serverless portfolio that make it hard to port existing (and build new) applications "in real life", and what's needed to close those gaps.
Farrah Campbell, Stackery
Danielle Heberling, Stackery
Serverless has opened a whole new playing field, and this very moment in time is an opportunity to enter a new ecosystem, fast-track your career, and deliver business value by working on projects you never imagined were possible. In this session, Farrah and Danielle (who both recently acquired their AWS certs) will share their motivations for learning Serverless, their personal experiences working in the Serverless ecosystem, the surprising ways these new skills have benefited their careers, and why they believe that serverless can be for anyone and everyone who’s willing to learn.
Johann Schleier-Smith, UC Berkeley
The emergence of serverless cloud computing in industry has recently captured the imagination and enthusiasm of the academic research community. By providing an interface that greatly simplifies cloud programming, it represents an evolution that parallels the transition from assembly language to high-level programming languages. We analyze the motivation for serverless computing, contextualizing it in the history of cloud computing, then describe applications that stretch the current limits of serverless, identify obstacles to overcome, and review recent research developments. We claim that the challenges are solvable and that serverless computing will grow to dominate the future of cloud computing.
Linda Nichols, Microsoft
As software developers, we love best practices. We have strong opinions about development environments and tools. We’re constantly improving our processes to write more efficient and maintainable code. We live and die by our formatting and static code analysis tools. All code is always stored in revision control systems and we can’t accept code updates that haven’t been unit tested. The CI/CD pipelines we design will ensure that every time we deploy, we deploy the same way.
Well… except maybe when we’re developing cloud native and serverless applications. For some reason, when we’re given a portal and the ability to create functions in the cloud, we tend to want to throw away all of the guidelines that we adhered to for so long. We write code in a little text box with no code-checking or unit testing. Maybe we push these changes back to our GitHub repository, but maybe not. Sometimes, updates even happen directly to functions in production environments.
Developing serverless applications can remove a lot of the complexity of development environments, languages, and frameworks. It adds layers of abstraction and creates a perfect low-ops environment for developers. But low-ops and no-ops shouldn’t mean no devops or cutting out best practices and processes in software development.
Let’s talk about how we can develop applications in a cloud environment, but using some of the same tools and processes as in classic software development. We’ll also look at new tools and frameworks to create even more robust and reliable deployment pipelines. Serverless application developers are still application developers. It is possible for us to maintain our culture and processes we love, in any ecosystem.
Rasmus Hald, MAERSK
Casper Jensen, MAERSK
At MAERSK we set out to reduce hand-overs by automating our internal processes and inter-team dependencies, and Serverless has proven a valuable tool to enable us to do just that. Join us for a talk about how MAERSK, a Fortune 500 enterprise, has used serverless technologies to automate internal processes within the technology organization, and why these technologies are such a great match for automation.
Alex DeBrie, Serverless, Inc.
In this talk, we'll cover why DynamoDB is such a great fit for serverless architectures on AWS. We'll discuss best practices for DynamoDB data modeling and how to get the most out of your tables.
We'll also cover how and why the recommended best practices with hyper-scale DynamoDB tables don't work for all serverless applications. We'll see when to reject these best practices and how to balance performance and flexibility with DynamoDB.
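As an illustrative sketch of the kind of data modeling the talk covers, a common DynamoDB single-table design encodes entity types and relationships into composite partition and sort keys. The entities, key formats, and function names below are hypothetical examples, not the speaker's schema:

```python
# Hypothetical single-table design: customers and their orders share one
# DynamoDB table, distinguished by composite partition (PK) / sort (SK) keys.

def customer_key(customer_id: str) -> dict:
    # All items for one customer land in the same partition.
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"CUSTOMER#{customer_id}"}

def order_key(customer_id: str, order_id: str) -> dict:
    # Orders sort under their customer, so a single Query on the PK
    # fetches a customer together with all of their orders.
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"ORDER#{order_id}"}

print(order_key("42", "1001"))
# → {'PK': 'CUSTOMER#42', 'SK': 'ORDER#1001'}
```

The trade-off the talk hints at is that keys like these optimize for known access patterns up front; when an application's queries are still evolving, that rigidity is exactly where the "hyper-scale best practices" can work against a small serverless app.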
Ryan Scott Brown, Trek10
Avoiding Work: Architecting for Serverless Efficiency
Rosemary Wang, HashiCorp
Serverless Still Needs Service Networking
Hanna Elszasz, Cloudreach
The limitations of glue and how to overcome them
Shorta Izumi, Advanced Creative Center, Dentsu
A container-based architecture for automatically generating digital ads, using AWS Fargate
Jeff Hollan, Azure Functions
Serverless has redefined how applications are architected and built. That shift is bursting far beyond traditional serverless options, and bleeding into Kubernetes, Containers, and IoT. What does this mean for the serverless developer and team? What options and tradeoffs result?
Watch the journey of a function from the cloud, to a container, and to Kubernetes. Understand the tradeoffs and options along the way, and learn what customers are doing every day to bring the benefits of serverless into every workload and scenario.