2 Full Days

2 Loaded Stages

54 Serverless Experts
Day 1

Tuesday, October 8th, 2019

Stage 1

Stage 2

8:00 - 9:00

Registration & Breakfast

9:00 - 9:15

Welcome to Serverlessconf - Introduction

Sam Kroonenburg, A Cloud Guru

Pete Sbarski, A Cloud Guru

9:15 - 9:45

Keynote - Crossing the river by feeling the stones

Simon Wardley, Leading Edge Forum

Deng Xiaoping once described managing the economy as crossing the river by feeling the stones – in other words, have a direction but be adaptive. But in a world of constant change, how do you determine the right thing to do? In this session we will discuss why serverless is the future, and why now.

9:45 - 10:15

Serverless in the Cloud with Diamonds

Jared Short, Trek10

Sean Mare, Gemological Institute of America

The Gemological Institute of America (GIA), the world’s most trusted independent authority for diamond grading, works with objects as old as dirt. Their infrastructure, however, is anything but. GIA and Trek10 will talk about their real-world experiences, from conception to production, providing a multi-region, highly available GraphQL API. We’ll talk cloud-native, event-driven, and the challenges faced along the way.

10:15 - 10:45

The Sky Is Falling, Run!

Mark Nunnikhoven, Trend Micro

Cybersecurity is a topic that comes up regularly as something you *have* to do…or should do…or are forced to look at by your security team. But why? There have been reports of serverless threats. Rumours of traditional security issues that should keep you up at night. Is any of it real?

When new technologies are adopted, cybersecurity implementations follow the same pattern. New innovations allow developers to move faster, operations gets onboard, and then security swoops in armed with a mountain of fear, uncertainty, and doubt.

We can do better.

Serverless is an unprecedented opportunity to modernize our approach to security. In this talk, we’ll look at how we can accomplish this and build strong solutions without the FUD.

10:45 - 11:15

Break

11:15 - 11:45

Running a Serverless Insurance Company

Joseph Emison, Branch

Branch is a company that sells its own home, auto, umbrella, and renter's insurance products, and runs on a completely serverless infrastructure. This talk walks through Branch's architecture, how Branch has remained compliant with regulatory and security requirements, the benefits Branch has seen from being serverless, and overall lessons learned along the way.

5 Years of Serverless

Chris Munns, AWS

A look back at the first 5 years of Serverless since the original announcement of AWS Lambda back in November of 2014. The talk will cover the big picture of why it first launched, how it's changed, and how customers and their workloads have changed as well. I'll also touch on the wider ecosystem.

11:45 - 12:15

Solving BIG problems the Serverless way

Peter Sbarski, A Cloud Guru

Nicki Klein, AWS

There are big computational problems that may seem to be a bad fit for Serverless. After all, it’s easy to spin up an EC2 instance and put it to a task like converting video files or crunching millions of numbers. However, perhaps we need to think about the problem in a different way. What if you could decompose the problem into a lot of tiny parts, process these parts in parallel (using the awesome power of Lambda) and then compose/reduce/aggregate an answer? You could arrive at an answer quicker, or even cheaper, than the traditional way. Better yet, you can stay Serverless and not have to worry about managing complex infrastructure. In this talk we are going to look at the principles of Map/Reduce and how they apply to Serverless architectures. We’ll show suggested architectures and patterns, showcase examples, and discuss how to approach big problems. In the end, the audience will walk away with a view of how to solve complex problems using the power of decomposition and parallelism available to them with functions. This is an advanced talk that assumes the audience is already familiar with the fundamentals of Serverless technologies.
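
For illustration only, here is a minimal sketch of the decompose-then-aggregate idea described above, assuming a hypothetical worker function named chunk-processor that returns a partial result for each chunk (the names and payload shape are not from the talk):

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical worker Lambda; assumed to return {"value": <partial result>}.
WORKER_FUNCTION = "chunk-processor"


def process_chunk(chunk):
    """Map step: synchronously invoke one worker Lambda per chunk."""
    response = lambda_client.invoke(
        FunctionName=WORKER_FUNCTION,
        Payload=json.dumps({"chunk": chunk}).encode("utf-8"),
    )
    return json.loads(response["Payload"].read())


def map_reduce(data, chunk_size=100):
    """Decompose the input, fan out to Lambda in parallel, then reduce the answers."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        partials = list(pool.map(process_chunk, chunks))
    return sum(p["value"] for p in partials)  # reduce/aggregate step
```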

12:15 - 12:45

Think FaaS Lightning Sessions

Paul Chin Jr, Cloudreach
Web Socket Revival with Nicolas Cage & API Gateway

Tanusree McCabe, Capital One
4 Serverless Myths to Understand Before Getting Started with AWS

MJ Ramachandran, ForgePoint Capital
Demonstrating ROI: Convincing the VCs / bosses to invest in Serverless

David Roberts, AWS
How I added serverless monitoring to my Electric Car in less time than it takes to charge

12:45 - 1:45

Lunch

1:45 - 2:10

YAML is better than your favorite language: fightin' words about infrastructure as code

Ben Kehoe, iRobot

Serverless is service-full, which means you've got more complicated cloud infrastructure graphs to configure. Everyone knows YAML is awful, and there are a lot of tools and frameworks that purport to make creating cloud infrastructure easier by letting you do it in your favorite (Turing-complete) programming language. Easy, right? In this talk, I'll argue that representing our infrastructure using declarative languages is fundamentally better, and that despite the many and varied flaws of YAML, using it and cloud infrastructure management services (like CloudFormation) directly has the lowest TCO. Finally, I will lay out what the path towards better declarative infrastructure management services looks like.
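
As a small, hedged illustration of the "use the managed service directly" argument (not code from the talk), the driver script below hands an unchanged declarative YAML document to CloudFormation via boto3; the stack name and template are made up for the example:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# A minimal declarative template, kept as plain YAML; all resource semantics
# live in this document rather than in imperative code.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  UploadsBucket:
    Type: AWS::S3::Bucket
"""

# The script only hands the document to the managed service (hypothetical stack name).
cloudformation.create_stack(
    StackName="declarative-example",
    TemplateBody=TEMPLATE,
)
```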

The Future is Developerless: Ship Production Software with Fewer People

Keith Horwood, Standard Library

In this talk, we'll discuss the evolution of the Standard Library platform from serverless to developerless, and how we leverage our serverless innovations to minimize developer resource spend for organizations. Joining us will be Albert Lee, Architect at the City of Boston (Boston.gov), who is using our specific flavor of serverless to deliver order-of-magnitude efficiency gains to an underserved sector in technology -- government. We'll talk about both how we think about the evolution of serverless technology and the specific implementations that are helping development teams today.

2:15 - 2:35

Penalties and Purgatory

Subbu Allamaraju, Expedia Group

How did we get here? Our simple capitalistic pursuit of becoming faster, better and cheaper is driving us through a massive reshuffling of complexity leading to innovations like serverless. We’re now able to ship code faster, run our architectures cheaper, and rapidly adopt robust architecture patterns. Yet the end is not in sight. As we go through this change, we’re uncovering yet more abstractions to create, and yet more complexity to shuffle.

As someone who has led multiple transformational tech initiatives, in this session I want to describe three tech adoption penalties and how to prepare for them. These are the comprehension penalty, which is the tax you pay for architectural agility through microservices and serverless adoption; the survival penalty, which you incur as your company stays around longer; and the value penalty, which is the struggle you must go through to instill change.

In the absence of preparedness, these penalties will slow you down, and frustrate you. If you’re struggling to drive adoption of tech like serverless, this talk will help you set things in context and prepare you to lead the change.

What is Cloud Run and how is it different than serverless functions?

Bret McGowen, Google Cloud Platform

Take a deep dive into Google Cloud's brand-new serverless compute platform that enables you to run everything from apps to functions in almost any language or runtime you'd like. It scales invisibly and prices as-you-go, just like the current serverless systems you know and love. Come see live demos and ask questions about this new platform and approach.

2:40 - 3:05

Handling billions of requests without breaking a sweat

Erica Windisch, Iopipe

The availability of Lambda and streaming technologies such as Kinesis makes it simple for anyone to build apps that scale to processing billions of API requests per month. We’ll explore some lessons learned, and the pitfalls and challenges one will face on their serverless ingestion journey.

Some questions we’ll answer:
- How does one cost-effectively operate such an API?
- How do you pick data stores to handle this size of a data stream?
- Where are the scaling pain points?
- Why doesn’t everyone do this?

Serverless from the trenches of the web

Burke Holland, Microsoft

Cecil Phillip, Microsoft

Outside of micro-services, far away from event triggers and message queues, a growing contingent of web developers are on a mission: to make the web serverless.

But we were skeptical. We weren't sure what "Serverless" actually meant, whether anyone else really did either, or whether any of this was even a good idea. But we also know that once the web adopts something, the rest is history. So we did what developers do when they want answers. We built something.

Join us as we dive into an application that we’ve been building over the past year to test out this hypothesis: is serverless really a good idea for web applications? These are the hard lessons we learned from the trenches building a Serverless Web Application.

3:05 - 3:35

From ColdFusion To GraphQL: Rapidly Refactoring A Legacy Application

Tory Adams, Sightbox

At SightBox we built a complex application using ColdFusion. Over time this became hard to maintain and extend. That led us to investigate serverless solutions. With AWS AppSync, Lambda, (Aurora Serverless?) and many other serverless services and tooling provided by Stackery, we learned how to rapidly refactor our application into a modern, efficient GraphQL service. We quickly learned how to design what we call “black box GraphQL”, which means that each team working on our application owns and maintains their section of our GraphQL API.

Another lesson was refactoring our legacy database to use something more modern (DynamoDB and Aurora Serverless). GraphQL was an easy way for us to straddle multiple databases using the strangler pattern.

Real-time as a First-class Serverless Citizen

Nader Dabit, AWS

In the past there have been real challenges to building real-time functionality into our serverless applications. With the introduction of AWS AppSync & GraphQL subscriptions, real-time serverless applications have been made possible without much work.

In this talk, I will take a deep look at how easy it is to add real-time functionality to our serverless applications using AWS AppSync, and at how GraphQL subscriptions work. I’ll then walk through a few demonstrations of real-time in action, first as live demos and then in code. I’ll show some interesting applications of real-time data, including a collaborative drawing application, a video recognition app that displays information about what the camera sees in real time, and a collaborative drum machine.

By the end of the talk, attendees should have a solid grasp of how to build real-world & real-time serverless apps using AWS AppSync.

3:35 - 4:05

Break

4:05 - 4:25

How Optum went serverless

Murugappan Chetty, Optum

In this session, we will talk about how Optum is using serverless for healthcare and ITOps use cases.

Managing Your Serverless Servers

Amy Arambulo Negrette, Cloudreach

The backbone of serverless architecture is using Functions as a Service to handle your logic needs. This is done by spinning up containers to run these functions on demand. For many use cases, this would be enough. Other times, you would need to manage small configurations such as timeouts and memory. More often, a developer needs to use third-party libraries or a library built and maintained by the company.

For small changes, the timeout and memory can be adjusted through configuration settings with the cloud provider such as AWS Lambda or Google Cloud Functions.

To import third-party or other pre-built libraries, cloud providers will accept deployment packages that can be uploaded as a zip file or via another deployment procedure after building the package locally. AWS has also introduced Layers as a standard way of deploying libraries to existing runtimes. In more complicated scenarios, such as an IT department requirement that any runtime be ‘blessed’, or maintaining a golden archive of runtimes, a combination of OpenWhisk and Docker Blackbox can allow you to choose your own runtime.

This talk will go over all of these options, how to choose which is the best for your particular use case, and how to implement them.
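
As one hedged sketch of the Layers option mentioned above (the layer name, function name, and zip path are hypothetical, and the SAM or framework-based equivalents would look different), pre-built libraries can be published as a Layer and attached alongside timeout and memory settings using boto3:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical names used only for illustration.
LAYER_NAME = "shared-deps-layer"
FUNCTION_NAME = "my-api-handler"

# Publish a locally built zip of third-party libraries as a new Layer version.
with open("layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName=LAYER_NAME,
        Description="Shared third-party dependencies",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attach the Layer to an existing function and adjust timeout/memory in one call.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    Layers=[layer["LayerVersionArn"]],
    Timeout=30,      # seconds
    MemorySize=512,  # MB
)
```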

4:30 - 4:50

Faster CloudFormation Delivery with OpenJS Architect

Brian LeRoux, Begin.com

Building serverless applications for AWS with SAM or raw CloudFormation can be complex and time-consuming. Infrastructure as code has brought us determinism, but at the cost of YAML files that are hard to read and modify. Worse, deployments can span into the tens of minutes. When iteration times are slow, natural feedback cycles grow longer, and so do feature development and bug resolution.

Architect is a mature framework for delivering serverless web apps that can subvert the normal CloudFormation deployment paths. Iteration speeds are measured in seconds, without trading off the determinism and extensibility of AWS best practices. Join us in this talk and learn about getting 'dirty' with SAM!

Benjamin Buttoning Serverless: Migrating from S3 + Lambda to WordPress

Corey Quinn, The Duckbill Group

This year I migrated lastweekinaws.com from S3, a few Lambda@Edge functions, CloudFront, CodeBuild, and Pelican to... WordPress. This is the story of why.

4:55 - 5:25

Serverless for Enterprises: A look into the Un-Carrier's serverless journey!

Satish Malireddi, T-Mobile USA

Nicholas Criss, T-Mobile USA

Satish will talk about T-Mobile's serverless journey, covering common serverless use-cases at the company, a few best practices and challenges faced with serverless in the enterprise. Then he will present Jazz, T-Mobile’s open source serverless platform. Jazz abstracts away the underlying complexity from developers and ensures enterprise governance/compliance for what is being built on the cloud. Serverless is proven and here to stay, but adopting any new technology is often a challenge for large organizations! Satish and Nicholas will share their experiences driving the adoption of serverless at T-Mobile so that others can learn from their success and mistakes. Finally, Nicholas will talk about how open source is driving T-Mobile’s transformation to a product and technology company.

What Comes After Serverless? Less "Servers" and More "Less"!

Javier Toledo, The Agile Monkeys

Nick Tchaika, The Agile Monkeys

Serverless is redefining the rules of the software industry on a near-daily basis, but there’s still work to do to achieve massive adoption and have Serverless become the new standard for writing distributed backend systems.

Serverless currently faces many challenges from the software development perspective:
- It’s still harder than we want to admit for newcomers because it requires a new mindset and deep knowledge of the cloud.
- There are no well-established and widely adopted architectures enforced by tools or standards, so developers have to design everything from scratch.
- A lack of high-level abstractions forces developers to write large configuration files, and the low-level APIs that are available effectively lock us into the providers.

How do we overcome these problems? How do we lower the overall learning curve? How can we, as a community, develop tools that make things easier and more repeatable? How do we build abstractions that are closer to business needs and reduce vendor lock-in? What are the next steps for Serverless and backend software development?

In this talk, Javier will explore the current state of Serverless with a critical perspective and try to answer all those questions while picturing how Serverless could be the foundation for the next step in backend software development: writing highly semantic business-logic code without having to worry about infrastructure or even cloud configuration.

5:25 - 5:30

Closing Remarks by A Cloud Guru

5:30 - 7:00

Reception hosted by A Cloud Guru

Day 2

Wednesday, October 9th, 2019

Stage 1

Stage 2

8:00 - 9:00

Registration & Breakfast

9:00 - 9:15

Introduction

9:15 - 9:45

Keynote - Serverless: An ops experience or a programming model?

Donna Malayeri, Google Cloud Platform

When serverless first launched, it was an operational model bundled with a programming model. You provide just your function code, and the platform does all the rest.

Today, serverless has evolved beyond this first iteration. There are serverless containers. AWS Lambda provides flexibility with Lambda Layers. Data warehouse products such as BigQuery allow user-defined functions. There are even “serverless” programming models that run on Kubernetes -- a decidedly server-full platform. This leads to the key question: what is the core of serverless? Is it possible to split apart the operational model and the developer experience, and still be serverless? What will we see in future platforms?

This talk will explore these questions. It’s time to go beyond just FaaS and see where serverless can take us.

9:45 - 10:15

Serverless: From one function to 43 Microservices

Sam Kroonenburg, A Cloud Guru

A Cloud Guru has been running a completely Serverless platform since 2015. During that time we have gone from one developer to 5 full dev teams. In this talk, ACG founder Sam Kroonenburg will share why he chose to build a Serverless platform, and tell the story of our transition from a Serverless Monolith to Microservices and the things we learned along the way.

10:15 - 10:45

AS400/Mainframe to Serverless

David Wilson, Mutual of Enumclaw Insurance

If you’re stuck with unsupported legacy systems it can feel like none of your options are good, and building your way out can be an especially tough sell. Our experience in navigating these challenges may provide a model that others can use to move towards a serverless future.

Serverless Lessons Learned and Our Best Practices

Jonathan Altman, Capital One

In Capital One's journey to move out of our private datacenters and fully onto the public cloud, we have leveraged AWS Lambda where it makes sense. This talk will cover some of our learnings about using Lambda within our unique operating constraints and at scale.

10:45 - 11:15

Break

11:15 - 11:45

Detecting outages at scale

Sander van de Graaf, Ookla / Downdetector

Sometimes the internet is having a bad day, and Facebook, Instagram, or WhatsApp doesn't work. You and countless other people rush to downdetector.com to check if that's really the case. Here's our story about the last Facebook outage, and how we made our service scale.

Selling Serverless to CEOs: Translating Architecture Improvement to Business Value

Matt Lancaster, Accenture

As technologists, we often get bogged down in the amazing technical, operational, and engineering benefits of serverless, particularly as part of an event driven architecture strategy. The ability to tie functions to business domains and deliver real value without meaningless layers of coupled complexity and process heft has enormous business value that is often massively undersold. Serverless initiatives are often ended before they begin or pigeonholed into small efforts near the interface (e.g. replacing some badly done Kubernetes-hosted microservices), rather than taking their proper place at the core of the business. So, I will share some examples of where serverless can be used to solve very significant business problems, quantify the opportunity, and present a reusable business case that can be used to help bring serverless into core business transformation efforts.

11:45 - 12:15

Serverless State

Tim Wagner

Serverless is so much faster, easier, and more manageable than previous application methodologies that it should be the only way we build apps. And yet most developers still reach for containers or servers, despite the obvious challenges in capacity planning, scaling, and management those choices represent. In this talk I look at some of the missing pieces in the serverless portfolio that make it hard to port existing (and build new) applications "in real life" and what's needed to close those gaps.

Killing Kubernetes

Ian Fuller, Freetrade.io

In this talk I look at how Serverless, and pragmatism in general, can help you ship. I discuss the problems Freetrade faced in shipping our initial product, how easy it is to build a lot of tech with little value, and how Serverless helped us start again and deliver something our users needed. I’ll also discuss what worked and what didn’t as we scaled our platform and our engineering team.

12:15 - 12:45

Leveling up in Serverless

Farrah Campbell, Stackery

Danielle Heberling, Stackery

Serverless has opened a whole new playing field, and this very moment in time is an opportunity to enter a new ecosystem, fast-track your career, and deliver business value by working on projects that you never imagined were possible. In this session, Farrah and Danielle (who both recently acquired their AWS certs) will share their motivations for learning Serverless, their personal experiences working in the Serverless ecosystem, the surprising ways that these new skills have benefited their careers, and why they believe that serverless can be for anyone and everyone who’s willing to learn.

Securing The Serverless Journey

Ron Harnik, Palo Alto Networks

With serverless, the cloud provider is responsible for securing the underlying infrastructure, from the data centers all the way up to the container and runtime environment. This relieves much of the security burden from the application owner; however, it also poses many unique challenges when it comes to securing the application layer. In this presentation, we will discuss the most critical challenges related to securing serverless applications, from development to deployment. We will also walk through a live demo of a realistic serverless application that contains several common vulnerabilities, see how they can be exploited by attackers, and see how to secure them. We'll then go over how Prisma Cloud can be used to secure serverless workloads.

12:45 - 1:45

Lunch

1:45 - 2:10

A Berkeley View on Serverless Computing

Johann Schleier-Smith, UC Berkeley

The emergence of serverless cloud computing in industry has recently captured the imagination and enthusiasm of the academic research community. By providing an interface that greatly simplifies cloud programming, it represents an evolution that parallels the transition from assembly language to high-level programming languages. We analyze the motivation for serverless computing, contextualizing it in the history of cloud computing, then describe applications that stretch the current limits of serverless, identify obstacles to overcome, and review recent research development. We claim that the challenges are solvable and that serverless computing will grow to dominate the future of cloud computing.

Scientists to Services: ML Pipelines in a serverless world

Richard H. Boyd, iRobot

The talk covers patterns we have found to be effective for taking ML models from a proof-of-concept to production. Data Scientists often use large unrestricted datasets in their exploratory analysis but the production version of the application needs much more restrictive access. Once the ML application is in production, we want to be able to offer the output data to other services while still maintaining an understanding of how this new data affects the data sensitivity requirements of consuming applications.

2:15 - 2:35

Serverless Developers are Developers

Linda Nichols, Microsoft

As software developers, we love best practices. We have strong opinions about development environments and tools. We’re constantly improving our processes to write more efficient and maintainable code. We live and die by our formatting and static code analysis tools. All code is always stored in revision control systems and we can’t accept code updates that haven’t been unit tested. The CI/CD pipelines we design will ensure that every time we deploy, we deploy the same way.

Well… except maybe when we’re developing cloud native and serverless applications. For some reason, when we’re given a portal and the ability to create functions in the cloud, we tend to want to throw away all of the guidelines that we adhered to for so long. We write code in a little text box with no code-checking or unit testing. Maybe we push these changes back to our GitHub repository, but maybe not. Sometimes, updates even happen directly to functions in production environments.

Developing serverless applications can remove a lot of the complexity of development environments, languages, and frameworks. It adds layers of abstraction and creates a perfect low-ops environment for developers. But low-ops and no-ops shouldn’t mean no devops or cutting out best practices and processes in software development.

Let’s talk about how we can develop applications in a cloud environment, but using some of the same tools and processes as in classic software development. We’ll also look at new tools and frameworks to create even more robust and reliable deployment pipelines. Serverless application developers are still application developers. It is possible for us to maintain our culture and processes we love, in any ecosystem.

Building Resilient Serverless Systems with “Non-Serverless” Components

Jeremy Daly, AlertMe.news

Serverless functions (like AWS Lambda, Google Cloud Functions, and Azure Functions) have the ability to scale almost infinitely to handle massive workload spikes. While this is a great solution for compute, it can be a MAJOR PROBLEM for other downstream resources like RDBMS, third-party APIs, legacy systems, and even most managed services hosted by your cloud provider. Whether you’re maxing out database connections, exceeding API quotas, or simply flooding a system with too many requests at once, serverless functions can DDoS your components and potentially take down your application. In this talk, we’ll discuss strategies and architectural patterns to create highly resilient serverless applications that can mitigate and alleviate pressure on “non-serverless” downstream systems during peak load times.
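
One such pattern, sketched very roughly below (the queue URL and the database call are placeholders for illustration, not details from the talk), is to buffer work through SQS so that a consumer Lambda deployed with capped concurrency is the only thing touching the fragile downstream system:

```python
import json
import os

import boto3

sqs = boto3.client("sqs")

# Hypothetical buffer queue, supplied via environment configuration.
QUEUE_URL = os.environ["BUFFER_QUEUE_URL"]


def api_handler(event, context):
    """Front-line Lambda: accept the request and enqueue the work so traffic
    spikes land on SQS instead of on the downstream database or API."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"statusCode": 202, "body": "accepted"}


def queue_consumer(event, context):
    """Second Lambda, triggered by SQS and deployed with a small reserved
    concurrency, so the downstream system only sees a bounded number of callers."""
    for record in event["Records"]:
        write_downstream(json.loads(record["body"]))


def write_downstream(payload):
    # Placeholder for the constrained call (RDBMS write, third-party API, etc.).
    pass
```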

2:40 - 3:10

Automating the Enterprise with Serverless

Rasmus Hald, MAERSK

Casper Jensen, MAERSK

At MAERSK we have set out to reduce hand-overs by automating our internal processes and inter-team dependencies, and Serverless has proven a valuable tool to enable us to do just that. Join us for a talk about how MAERSK, a Fortune 500 enterprise, has used serverless technologies to automate internal processes within the technology organization, and why these technologies are such a great match for automation.

Computing on the Edge: Bringing Serverless to You, Fast

Kas Perch, Cloudflare

Serverless is seeing explosive growth and is being used in all sorts of places, with many applications. The one thing we can’t always solve is latency: how do we get the result to users faster? Computing on the edge is a fancy term for putting your serverless function (your compute) as geographically close to the user as possible. This doesn’t just help with the latency issue; there are a few tricks up my sleeve for why computing on the edge is a fun idea! Come watch as we de-mystify the arcane vocabulary around computing on the edge and talk about the whens and whys of this style of serverless.

3:10 - 3:40

Break

3:40 - 4:00

Using (and ignoring) DynamoDB best practices with serverless

Alex DeBrie, Serverless, Inc.

In this talk, we'll cover why DynamoDB is such a great fit for serverless architectures on AWS. We'll discuss best practices for DynamoDB data modeling and how to get the most out of your tables.

We'll also cover how and why the recommended best practices with hyper-scale DynamoDB tables don't work for all serverless applications. We'll see when to reject these best practices and how to balance performance and flexibility with DynamoDB.
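
For readers unfamiliar with the modeling style the talk weighs up, here is a minimal, hedged sketch of a single-table query (the table name and the pk/sk key layout are assumptions for illustration, not the speaker's schema):

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Hypothetical single-table design with generic partition ("pk") and sort ("sk") keys.
table = dynamodb.Table("app-table")


def get_customer_with_orders(customer_id):
    """Fetch the customer item and all of its order items in one request by
    storing them under the same partition key."""
    response = table.query(
        KeyConditionExpression=Key("pk").eq(f"CUSTOMER#{customer_id}")
    )
    return response["Items"]
```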

Can containers be serverless?

Donna Malayeri, Google Cloud Platform

Ahmet Alp Balkan, Google Cloud Platform

For the past several years, “serverless compute” has largely been synonymous with functions-as-a-service. But, with new technologies like Google Cloud Run and Knative, there are compute platforms with serverless characteristics that go beyond FaaS.

We’ll describe how containers can be seen as just a packaging format and deployed to a fully managed infrastructure, with a pay-per-use pricing model. This provides a smooth serverless on-ramp for workloads that are already using containers.

Of course, running arbitrary stateless containers comes with tradeoffs: you are now responsible for more things such as building, securing, and maintaining that container. This talk will explore the tradeoffs of containers and functions, in a serverless environment. It's not about containers VS serverless, it's containers AND serverless.

4:05 - 4:35

Think FaaS Lightning Sessions

Ryan Scott Brown, Trek10
Avoiding Work: Architecting for Serverless Efficiency

Rosemary Wang, HashiCorp
Serverless Still Needs Service Networking

Hanna Elszasz, Cloudreach
The limitations of glue and how to overcome them

Shorta Izumi, Advanced Creative Center, Dentsu Digital Inc.
The container-based automatic generation architecture of digital ads, using AWS Fargate

Rethinking Architecture in the Age of Truly Commoditized Compute

James Simmons, Fortium Partners

Much of the conversation around serverless centers on DevOps/TechOps (easier to manage, easier to deploy, easier to scale) or cost (pay for what you use), and there's no doubt these dimensions of product design, delivery, and support are being fundamentally changed by the growing adoption of serverless technologies and designs.

But in the engineering organizations I've led, and now the companies I advise, one of the most overlooked game-changers enabled by this commoditization of compute is the freedom to architect applications without being constrained by compute capacity. Leveraging cheap, practically unlimited compute enables teams to adopt fundamentally different design patterns that allow them to put products in users' hands faster and cheaper than ever before, and more smart people need to be thinking about how to build systems under this new paradigm.

In this talk I'll discuss the constraints I dealt with as Head of Product Engineering at LegalZoom, the early stages of our exploration of serverless and the impact it would have on our business, and the strategy we employed during my time as CTO of voting startup Everyone Counts that let us recover almost a year of schedule time, cut costs, and turn a nearly bankrupt startup into a successful exit that just won M&A Adviser's "Corporate/Strategic Deal of the Year" award. Finally, I will talk about some recent projects where this has been reduced to practice, streamlining DNA sequencing and analysis for a life sciences company and others.

4:40 - 5:10

A Seamless Serverless Spectrum

Jeff Hollan, Azure Functions

Serverless has redefined how applications are architected and built. That shift is bursting far beyond traditional serverless options, and bleeding into Kubernetes, Containers, and IoT. What does this mean for the serverless developer and team? What options and tradeoffs result?

Watch the journey of a function from the cloud, to a container, and to Kubernetes. Understand the tradeoffs and options along the way, and learn what customers are doing every day to bring the benefits of serverless into every workload and scenario.

Firecracker: Secure and Fast microVM for Serverless Computing

Meena Gowdar, AWS

Firecracker is open source and purpose-built for creating and managing secure, multitenant containers and functions-based services. Firecracker runs in user space and uses Linux’s KVM to create microVMs. The fast startup time and low memory overhead of microVMs enable you to pack thousands of them onto one machine. This talk explains Firecracker’s foundation, the minimal device model, and how it interacts with various containers. It will also cover how services are using Firecracker, the latest open-source developments, and how to get involved with the project.

5:15 - 5:45

Serverless Journey of shop.LEGO.com

Sheen Brisals, LEGO

The Shopper Engagement Technology team at LEGO have been busy migrating the legacy monolithic eCommerce platform onto a cloud-based solution on AWS. This employs serverless and managed services at its core, within an agile development process. In this talk I will share the team's experience, going through the reasons for this change, the architectural principles that were laid out, the strategy, the choice of implementation tools, the patterns, the concerns, the lessons, the successes, and a look into the future.

5:45 - 6:00

Closing Remarks

Check out A Cloud Guru Original Series for a recap of talks from Serverlessconf 2018.