You might not know us yet, but we are maniacally focused on solving a big hairy complex problem. Actually, it’s a set of problems.
The Rafay team has built a cool platform on top of Kubernetes and other free open source software. As a start-up, we made rapid progress by building on several open source projects. We are big believers in cloud-native platforms and participatory open source software development. Open source and openness are part of Rafay’s DNA, and have been since the beginning.
We’ve written some interesting, innovative and useful code that we think will be helpful to DevOps, CloudOps, and Ops professionals. Site reliability engineers, developers, architects, and application owners will benefit as well. Along the way, we've learned a lot, and addressed some issues that others may also encounter. Having benefited from the open source community’s efforts in driving the cloud-native paradigm forward, we are now in a position to begin contributing back. We are proud and excited to share our contributions.
What is the Problem?
Paul Bakker, a Netflix software architect, wrote a blog about his team’s lessons learned after using Kubernetes in production for one year. His first hard lesson: “Container clustering, networking, and deployment automation are actually very hard problems to solve.”
Netflix's experience is hardly unique. A survey of 152 attendees at the 2018 Kubecon + CloudNativeCon conference found that operational complexity is a top challenge hindering broader adoption of Kubernetes (second only to “lack of expertise”).
Rafay is helping to reduce the complexity of deploying containerized applications. Many organizations have built their own bespoke solutions, often leveraging Kubernetes, Mesos, or similar resource management and container scheduling platforms. Our goal is to create a service that makes distribution, scaling, operations and life-cycle management easy. As a result, organizations no longer have to work at the infrastructure and scheduler level, and can instead focus on the application.
The Rafay Platform
Rafay’s SaaS platform automates the distribution, intent-based scaling and operations of containerized microservices in any environment, anywhere in the world.
Our platform is built on top of free open source projects including Docker, Kubernetes, gRPC, GNU/Linux, and more, with a focus on simplifying the developer and operations experience. It abstracts away many details of the underlying tools behind an easy-to-use console where you specify application preferences, so we know how, where, and when you want your code to run. On top of Kubernetes, we have incorporated a body of work that makes the system easier to use across any number of environments, along with a number of data and metadata pipelines that deliver the right abstractions. Coupled with the right set of interfaces, these let application owners run their containerized apps anywhere.
This means you don't have to: (1) instantiate server instances; (2) install, monitor, and update systems when they need attention, such as when security patches are released; (3) build the plethora of pipelines needed to operate an application across multiple locations; or (4) determine where to run each application. Our global application orchestration solution places workloads where and when they need to be. Most importantly, the variety of components that together deliver Rafay’s comprehensive, global container operations and management platform have been abstracted such that users don’t need to write Kubernetes scripts on an ongoing basis. Users can focus on business intent, and the Rafay platform will translate that intent into configuration. Delivered as a service, this broad set of features sets the Rafay platform apart from Kubernetes federation and other “Kubernetes as a Service” offerings.
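To make the intent-to-configuration idea concrete, here is a minimal sketch in Go. Everything in it is hypothetical and heavily simplified: the type names, fields, and the translation logic are illustrative stand-ins, not Rafay's actual API or data model. The point is only the shape of the pattern: a user declares a small, business-level intent, and the platform expands it into per-location configuration (in practice, Kubernetes objects).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DeploymentIntent is a hypothetical, simplified "business intent" spec:
// the user states what they want, not how each cluster is configured.
type DeploymentIntent struct {
	App         string   `json:"app"`
	Image       string   `json:"image"`
	Regions     []string `json:"regions"`
	MinReplicas int      `json:"minReplicas"`
}

// ClusterConfig stands in for the per-location configuration (e.g. a
// Kubernetes Deployment) that a platform would generate from the intent.
type ClusterConfig struct {
	Region   string `json:"region"`
	App      string `json:"app"`
	Image    string `json:"image"`
	Replicas int    `json:"replicas"`
}

// Translate expands one intent into one configuration per target region.
// A real platform would apply placement policy here; this sketch just
// fans the intent out verbatim.
func Translate(in DeploymentIntent) []ClusterConfig {
	out := make([]ClusterConfig, 0, len(in.Regions))
	for _, r := range in.Regions {
		out = append(out, ClusterConfig{
			Region:   r,
			App:      in.App,
			Image:    in.Image,
			Replicas: in.MinReplicas,
		})
	}
	return out
}

func main() {
	intent := DeploymentIntent{
		App:         "checkout",
		Image:       "registry.example.com/checkout:1.4.2", // hypothetical image
		Regions:     []string{"us-west", "eu-central"},
		MinReplicas: 3,
	}
	for _, cfg := range Translate(intent) {
		b, _ := json.Marshal(cfg)
		fmt.Println(string(b))
	}
}
```

Running this prints one generated configuration per region; the user only ever wrote the four-field intent at the top.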
We will begin publishing code, and further blog posts, shortly. Our plan is to publish a number of tools we have written to aid our development and testing efforts. We also plan to publish the key Kubernetes controllers, operators, and Custom Resource Definitions (CRDs) that turn a vanilla Kubernetes cluster into one that can easily be managed centrally via the Rafay managed service. This includes utilities that handle container life cycle management, ingress control, runtime configuration, resource isolation (essential for a multi-tenant environment), local storage management, and more.
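The controllers and operators mentioned above follow Kubernetes' reconcile-loop pattern: repeatedly compare the desired state (declared in a custom resource) with the observed state of the cluster, and compute the action that converges them. The following is a dependency-free Go sketch of that core idea; the `State` type and the string-valued actions are hypothetical simplifications for illustration, not code from any real controller.

```go
package main

import "fmt"

// State is a hypothetical stand-in for both the desired state a controller
// reads from a custom resource and the observed state it reads from the
// cluster. Real controllers compare much richer objects.
type State struct {
	Replicas int
}

// Reconcile compares desired vs. observed state and returns the action
// needed to converge them -- the heart of the controller/operator pattern.
func Reconcile(desired, observed State) string {
	switch {
	case observed.Replicas < desired.Replicas:
		return fmt.Sprintf("scale up by %d", desired.Replicas-observed.Replicas)
	case observed.Replicas > desired.Replicas:
		return fmt.Sprintf("scale down by %d", observed.Replicas-desired.Replicas)
	default:
		return "no action"
	}
}

func main() {
	fmt.Println(Reconcile(State{Replicas: 5}, State{Replicas: 3})) // scale up by 2
	fmt.Println(Reconcile(State{Replicas: 5}, State{Replicas: 5})) // no action
}
```

In a real operator this function runs in a loop driven by watch events, and the returned "action" is an API call against the cluster rather than a string; the comparison-then-converge structure is the same.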
With this and some other contributions, it will be easier for application developers and network operators to distribute and operate containerized applications across any number of locations globally.
I’ll publish the next post in a week or so with a sample application and instructions on how to try out the platform we’ve built. You can find it on our blog at https://rafay.co and on LinkedIn. As always, we would love to hear from you. Feel free to email me directly, or follow us on Twitter at @rafaysystemsinc.
We hope you’ll join us on this exciting journey.