Cluster Schedulers: Borg and Kubernetes
Leszek Sliwko has been awarded a Ph.D. in Parallel and Distributed Computing for his work developing a novel AI-driven load balancer for Cloud systems. He currently leads a team of Scala programmers developing and maintaining several internal UK Government systems deployed within Kubernetes/Docker clusters. This presentation analyses deployed and actively used workload-scheduler solutions, and...
Building a CQRS + Event Sourcing microservice with Akka
Yeshwanth Kumar is a Data Engineer in ING’s Wholesale Banking Advanced Analytics team. He is currently working on the development of the Data Analytics Platform – a platform that aims to democratize data, analytics and machine learning across the whole of ING by making data seamlessly accessible to data scientists and analysts.
Simplifying streaming applications with Functors
Streaming libraries such as Akka Streams provide a rich toolset for processing data from disparate systems. As these data pipelines grow in complexity, so do the issues around code readability, boilerplate code and the bleeding of different domain concerns into the pipeline.
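As a flavour of the idea, here is a minimal Functor type class sketch: a transformation written once against Functor can run over any mappable source shape. The `Functor` trait and `parsePrices` are illustrative names for this sketch, not part of the Akka Streams API.

```scala
// Minimal Functor type class: anything with a lawful map.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// Instances for two ordinary container shapes.
implicit val listFunctor: Functor[List] = new Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}

implicit val optionFunctor: Functor[Option] = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

// A pipeline stage written once, independent of the source shape.
def parsePrices[F[_]](raw: F[String])(implicit F: Functor[F]): F[Double] =
  F.map(raw)(_.toDouble)
```

The same `parsePrices` stage then works unchanged over a `List`, an `Option`, or (with a suitable instance) a streaming source, which is one way such libraries keep domain logic out of the plumbing.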
Embrace the implicit
Implicits are often regarded with suspicion by developers, with some companies going as far as banning them from their codebases. Let’s dive into how implicit conversions and parameters work in Scala. We will talk about design patterns where they can come in handy, as well as scenarios where they should definitely be avoided.
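To make the two flavours concrete, here is a small sketch: an implicit parameter the compiler fills in from scope, and an implicit class adding an extension method. The `Discount` type and `squared` method are hypothetical examples, not from any library.

```scala
// Implicit parameter: the compiler supplies the Discount found in scope.
case class Discount(rate: Double)

def price(base: Double)(implicit d: Discount): Double = base * (1 - d.rate)

implicit val standardDiscount: Discount = Discount(0.1)

// Implicit class: adds a `squared` method to Int without modifying Int.
implicit class RichInt(n: Int) {
  def squared: Int = n * n
}
```

With `standardDiscount` in scope, `price(100.0)` resolves to 90.0 without the caller passing the discount explicitly, and `3.squared` compiles via the implicit conversion to `RichInt`.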
Building for Longevity with Scala
With engineers moving from project to project and company to company, how do we ensure the systems we are building can survive constant organisational & personnel change? Why do some systems outlive others? And if we are honest with ourselves, are we guilty of thinking much too short-term about our software systems?
Monad Stacks or: How I Learned to Stop Worrying and Love the Free Monad
In this talk, I will demonstrate various techniques, such as monad transformers, effects libraries and free monads. These techniques can be used to transform Scala “spaghetti” code (that is, nested maps, flatMaps and pattern matching) into cleaner code that reads almost like imperative code.
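As a taste of the monad-transformer technique, here is a hand-rolled, minimal `OptionT` over `Either`; the `findUser`/`findEmail` lookups are hypothetical, and real code would typically use Cats’ `OptionT` instead.

```scala
// Minimal OptionT: lets us flatMap through Either[String, Option[A]]
// as if it were a single monad, instead of nesting maps by hand.
final case class OptionT[A](value: Either[String, Option[A]]) {
  def map[B](f: A => B): OptionT[B] = OptionT(value.map(_.map(f)))
  def flatMap[B](f: A => OptionT[B]): OptionT[B] =
    OptionT(value.flatMap {
      case Some(a) => f(a).value
      case None    => Right(None)
    })
}

// Hypothetical lookups, each returning the stacked type directly.
def findUser(id: Int): Either[String, Option[String]] =
  if (id == 1) Right(Some("alice")) else Right(None)

def findEmail(user: String): Either[String, Option[String]] =
  Right(Some(user + "@example.com"))

// The nested maps/flatMaps collapse into one flat for-comprehension.
def emailFor(id: Int): Either[String, Option[String]] =
  (for {
    u <- OptionT(findUser(id))
    e <- OptionT(findEmail(u))
  } yield e).value
```

The for-comprehension reads almost like imperative code, while the `None` and error cases are still threaded through correctly by `flatMap`.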
Developing Data Analytics Jobs with Spark, Scala and the Spark Notebook
The Spark Notebook is a Scala-centric, interactive web app that implements the ‘notebook’ paradigm: it combines descriptions and code to explore, analyse and learn from massive data sets using Apache Spark. In this presentation, we will see how the Spark Notebook lets us apply our Scala skills to data-oriented problems and enables us to rapidly evolve through the full data application...
Paul is a software developer at the Guardian where he deals mostly with Scala on the backend. Having survived a few skirmishes with the surreal world of Scala Macros, he will be talking to us about what they are, why they are so confusing, and when to use (or not use!) them.