MEET SCALA EXPERTS
ON THE SUNNY SIDE OF THE ALPS
BeeScala 2016 is a conference for software engineers focused on the Scala programming language. Set to take place on the 25th and 26th of November 2016 in Ljubljana, Slovenia, the conference is organized by members of the Scala User Group Slovenia and has three distinct objectives:
- Spread the word about Scala and its benefits in the software industry
- Provide a stage for lectures on the Scala language, tools and libraries
- Provide an opportunity for Scala developers, aficionados and interested parties to network and create a local, organic ecosystem around the language
Enterprise Architect @ Lightbend (formerly Typesafe)
Kamon Core Team Member & Kamino Co-Founder
Full-Stack Team Lead @ OverOps (formerly Takipi)
Principal Software Engineer @ IBM Spark Technology Center
Can you guess[T]?
Scala Developer @ VirtusLab & Nexem
Senior Associate @ VirtusLab, CTO @ Nexelem
Independent Software Consultant, Reactive Systems Specialist
CTO @ Cake Solutions
Independent Apache Spark Consultant
Software Engineer @ NEXTSENSE GmbH
Senior Software Developer @ UniCredit
Principal Researcher @ Oracle Labs
Computer Scientist | Software Anarchitect | Overengineer
Software Developer, Analytics @ Celtra, Inc. AI researcher, Jozef Stefan Institute
25th - 26th of November 2016
Full agenda for both days is ANNOUNCED! Find Apache Spark workshop details HERE!
In the beginning there was Nothing and then he said let there BeeScala. A zoom-in/zoom-out journey on how this project was brought to life.
Today, there exists a gap between high-level distributed computing frameworks and low-level distributed programming models. On one side of the spectrum, we have high-level frameworks such as MapReduce, Spark, distributed file systems and databases, and peer-to-peer networks. On the other side, we have low-level distributed programming models, such as remote procedure calls (RPCs) and actors, which are the basis for building distributed systems. There does not seem to be a strong middle ground — a set of reusable intermediate components is missing. High-level frameworks are complex systems, built from low-level primitives over countless engineering hours, an effort that is repeated every time a new distributed system is created.
Since the appearance of the actor model some 30 years ago, the gap between high-level and low-level distributed computing has not significantly narrowed. While sequential programmers today build their programs from iterators, monads, zippers, generic collection frameworks, parser combinators, I/O libraries, and UI toolkits, distributed systems engineers still think in terms of low-level RPCs and message passing. While sequential programming paradigms realized the importance of structured programming and high-level abstractions long ago, distributed computing has still not moved far from message passing — its own assembly language. The underlying cause of this situation is the following: existing low-level distributed programming models expose primitives that do not compose well.
In this talk, I present the recently proposed reactor programming model. I will focus on its main strengths — modularity and composability — and show how to build reusable message protocols and the distributed computing stack from a handful of simple but powerful programming primitives. I will demonstrate that these primitives serve as a powerful foundation for the next generation of distributed computing.
In this talk I will introduce you to the concept of a finite-state machine. Why is it worth using? It allows the developer to design and code a process manager in a very simple and expressive way. We will see a real-life example of a business process implemented with it. We will also make the process resilient to failures by using persistence. All of it will be done using Akka Persistence.
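The process-manager idea can be sketched as a plain-Scala finite-state machine (the order states and commands below are a hypothetical example, not from the talk; the talk itself builds this on Akka Persistence):

```scala
// A minimal finite-state machine in plain Scala, modelling a
// hypothetical order process: Idle -> AwaitingPayment -> Paid -> Shipped.
sealed trait State
case object Idle            extends State
case object AwaitingPayment extends State
case object Paid            extends State
case object Shipped         extends State

sealed trait Command
case object PlaceOrder extends Command
case object Pay        extends Command
case object Ship       extends Command

// Transition function: (current state, incoming command) => next state.
// Commands that are invalid in the current state leave it unchanged.
def transition(state: State, cmd: Command): State = (state, cmd) match {
  case (Idle, PlaceOrder)     => AwaitingPayment
  case (AwaitingPayment, Pay) => Paid
  case (Paid, Ship)           => Shipped
  case (s, _)                 => s
}

// Replaying a command sequence is just a left fold over the transitions —
// which is also why persisting the inputs makes the process recoverable.
def run(commands: List[Command]): State =
  commands.foldLeft(Idle: State)(transition)
```

Because the whole process is a pure fold, recovering after a crash amounts to replaying the persisted inputs — which is exactly the guarantee Akka Persistence provides.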
This talk will first touch on a few historic bugs, and how various QA techniques might have helped avoid them. Afterwards there will be a short overview of Scala static analysis tools, along with tips on how to configure and include them in your development process, and how you can help improve these tools in the future.
In this talk we will describe the experience of spending several months using Scala.js in a real project: why Scala.js was chosen, what worked well and what obstacles were encountered, what the Scala.js ecosystem already offers today and what is still missing. It is a pragmatic session for people considering Scala.js for their project or people interested in Scala.js in general.
We will start from the very basics and learn how the Akka actor model applies to business logic, software infrastructure and managing UIs. In the end we will take a look at some of the features under development and what we are trying to achieve with them.
In this talk I will show how to use Apache Spark and Scala to implement scalable data processing applications. Concepts will be illustrated with the following use case: analyzing user interactions with 150M mobile ads per day. We will also discuss how object-oriented and functional programming guide developers towards writing software that is easy to maintain and makes it quick to add new features.
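Spark's RDD and Dataset operations deliberately mirror Scala's collection API, so the shape of such a pipeline can be sketched with plain collections (the `Interaction` records below are made up for illustration; a real Spark job would run the same `filter`/`map`/`reduceByKey` chain on an `RDD` or `Dataset` across a cluster):

```scala
// Hypothetical ad-interaction records; in Spark this would be an
// RDD[Interaction] or Dataset[Interaction] loaded from storage.
case class Interaction(adId: String, event: String)

val interactions = Seq(
  Interaction("ad-1", "view"),
  Interaction("ad-1", "click"),
  Interaction("ad-2", "view"),
  Interaction("ad-1", "view"))

// Count clicks per ad. The same pipeline in Spark would be:
//   rdd.filter(_.event == "click").map(i => (i.adId, 1L)).reduceByKey(_ + _)
val clicksPerAd: Map[String, Int] =
  interactions
    .filter(_.event == "click")
    .groupBy(_.adId)
    .map { case (id, is) => id -> is.size }
```

The point of the correspondence is that the local, testable version and the distributed version share one functional vocabulary.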
The Scala language and its environment have evolved quite significantly over the past few years. Adoption of the language is slowly growing, and it can now even be found in use in rather conservative enterprise settings. At the same time there has been quite a bit of criticism of the language, its ecosystem and its practicability in larger teams. Many developers still avoid taking a more serious look at Scala and its ecosystem for a variety of reasons, ranging from concerns about tooling support to apprehension about advanced category theory principles. This talk is a reflection upon six years of working professionally with Scala on projects of various sizes and shapes. It aims to convey some of the lessons and practical insights gained during that time, as well as to debunk some of the many preconceptions that surround the language and its ecosystem.
Being able to monitor your application’s behavior is nice; knowing that everything is being measured and reported somewhere makes you feel like you are doing the right thing. But are you? Simply measuring everything like there is no tomorrow doesn’t do any good unless you actually analyze that data! In this talk we will learn how to interpret the metrics data collected by Kamon and how to apply this knowledge when troubleshooting real-world performance problems.
This talk will start with a quick introduction to the two different building blocks of distributed computing in Apache Spark, along with their relative performance differences. It will cover the performance impact of Datasets, which become the core building block of much of Apache Spark starting with Spark 2.0, as well as considerations for the RDD API. The talk will finish by exploring the new structured streaming API. Prior knowledge of Spark isn’t required, but a background with Spark will make it more exciting.
Almost all web & mobile applications need some kind of *session support*: after logging in, state should be maintained that allows the server to identify the user during subsequent requests in a *secure* way, so that the data cannot be tampered with.
`akka-http` is a great toolkit for building reactive mobile/web backends, using an elegant DSL; `akka-http-session` builds on top of that to provide secure session management.
We’ll discuss how session storage can be implemented, what the security challenges are (with an emphasis on cookies) and what kind of solutions `akka-http-session` provides. We’ll also give a quick introduction to `JWT` (JSON Web Tokens), one of the supported formats for encoding session data.
Finally, no presentation can be complete without a **live demo** showing what using `akka-http-session` looks like in practice.
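The core idea behind tamper-proof client-side sessions can be sketched in plain Scala with the JDK's HMAC support (this illustrates the signing scheme in general, not `akka-http-session`'s actual API; the secret and payload below are made up):

```scala
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Sign the session data with HMAC-SHA256 so that the client, who stores
// the cookie, cannot alter it without the server noticing.
def sign(data: String, secret: String): String = {
  val mac = Mac.getInstance("HmacSHA256")
  mac.init(new SecretKeySpec(secret.getBytes("UTF-8"), "HmacSHA256"))
  Base64.getUrlEncoder.withoutPadding
    .encodeToString(mac.doFinal(data.getBytes("UTF-8")))
}

// The cookie value carries both payload and signature: "<data>.<sig>".
def serialize(data: String, secret: String): String =
  s"$data.${sign(data, secret)}"

// On each request, recompute the signature and compare. (Real libraries
// use a constant-time comparison here to avoid timing attacks.)
def verify(cookie: String, secret: String): Option[String] =
  cookie.split('.') match {
    case Array(data, sig) if sign(data, secret) == sig => Some(data)
    case _                                             => None
  }
```

JWT uses the same signed-payload principle, with a standardized header/claims/signature layout.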
Event Sourcing (and CQRS) has become a hot topic. But what does it really mean, why should we care, and which new possibilities does it open for us? In this session we will introduce you to the main principles of CQRS and Event Sourcing. You will learn how to model your domain in terms of Commands and Events and how to build reactive applications in Scala using Fun.CQRS and its reactive Akka backend.
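The Commands-and-Events modelling can be sketched in plain Scala (a hypothetical bank-account domain; Fun.CQRS and Akka add persistence and reactivity on top of this core idea):

```scala
// Events record what happened; they are the source of truth.
sealed trait Event
case class Deposited(amount: Int) extends Event
case class Withdrawn(amount: Int) extends Event

// Commands express intent; they may be rejected.
sealed trait Command
case class Deposit(amount: Int)  extends Command
case class Withdraw(amount: Int) extends Command

case class Account(balance: Int)

// Command handler: validate a command against the current state and
// decide which events (if any) it produces.
def handle(state: Account, cmd: Command): List[Event] = cmd match {
  case Deposit(a)                        => List(Deposited(a))
  case Withdraw(a) if a <= state.balance => List(Withdrawn(a))
  case _                                 => Nil // reject overdraft
}

// Event handler: apply an event to the state (pure, side-effect free).
def applyEvent(state: Account, ev: Event): Account = ev match {
  case Deposited(a) => state.copy(balance = state.balance + a)
  case Withdrawn(a) => state.copy(balance = state.balance - a)
}

// Current state is just a left fold over the event log, so the state
// can always be rebuilt by replaying the events from scratch.
def replay(events: List[Event]): Account =
  events.foldLeft(Account(0))(applyEvent)
```

The command/event split is also what enables CQRS: the write side validates commands, while any number of read models can be folded independently from the same event log.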
Spark SQL is now the de facto driving force behind Apache Spark 2.0’s success. It comes with enough cool features to keep you busy for a few days and makes Spark MLlib even more pleasant to use. In Spark 2.0, Spark SQL comes with Datasets, encoders, and logical and physical plans. They are the frontends to the lower-level components, the Catalyst optimizer and Tungsten, that are supposed to make your queries faster. During this presentation you will find out how your structured queries end up as Datasets, the difference between Datasets, DataFrames and RDDs, and finally how Spark SQL’s Catalyst optimizer can make your queries faster when they are properly structured.
Evolutionary algorithms open windows onto where machines and biology meet. In this talk we’ll explore how evolutionary algorithms mimic and borrow from the way Mother Nature solves problems – the road from solving puzzles, through the social sciences, to designing new kinds of satellite antennas. We’ll see how we can use plain Scala to code evolutionary algorithms, and look at the existing libraries that can help us save some time.
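A minimal evolutionary loop in plain Scala might look like this (solving the classic OneMax toy problem of maximizing the number of 1-bits; the population size, mutation rate and genome length are arbitrary illustration values, not from the talk):

```scala
import scala.util.Random

val rng       = new Random(42) // fixed seed for reproducibility
val genomeLen = 20

// Fitness: how many 1-bits the genome carries (OneMax).
def fitness(g: Vector[Int]): Int = g.sum

// Mutation: flip each bit with a small probability.
def mutate(g: Vector[Int]): Vector[Int] =
  g.map(bit => if (rng.nextDouble() < 0.05) 1 - bit else bit)

// Evolve: select the fittest, breed mutated children, keep the best
// of parents + children (elitist replacement), repeat.
def evolve(generations: Int): Vector[Int] = {
  var pop = Vector.fill(30)(Vector.fill(genomeLen)(rng.nextInt(2)))
  for (_ <- 1 to generations) {
    val parents  = pop.sortBy(g => -fitness(g)).take(10)          // selection
    val children = parents.flatMap(p => Vector.fill(3)(mutate(p))) // variation
    pop = (parents ++ children).sortBy(g => -fitness(g)).take(30)  // replacement
  }
  pop.maxBy(fitness)
}
```

The same select/vary/replace skeleton carries over to harder problems; only the genome encoding, fitness function and operators change.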
The goal of the presentation is to give a quick introduction to Slick 3.x. Lots of things have changed since version 2.x, so even if you are familiar with the previous version it may still be useful to take a look at how things have changed. The presentation is meant to be pragmatic, so after going through it (together with the code samples) you should be able to start using Slick in your project with no problems. We will focus on how the basics of Slick work and how you can build relevant queries, operations and patterns, rather than on Slick internals.
This talk will be an all-encompassing tour of ScalaCheck. I’ll start with a brief introduction for those who have never used the tool before. I’ll then illustrate some interesting ways to design properties, to make sure you get the most out of the library, showing how it differs from other unit testing frameworks like JUnit, Specs2 or ScalaTest. I’ll also talk about how ScalaCheck integrates with other libraries, specifically some from the Typelevel suite, and I’ll finish by introducing a new library to help ScalaCheck work with dates and times, and show some techniques for working with that. By the end of my talk you’ll definitely have all the ammunition you need to be using ScalaCheck from the outset on your current project!
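The property-based idea at ScalaCheck's core can be sketched by hand in a few lines of plain Scala (real ScalaCheck adds composable `Gen` instances, shrinking of counterexamples and reporting; the `forAll` below is a homemade stand-in, not the library's API):

```scala
import scala.util.Random

// Generate many random inputs and check that a property holds for all
// of them. This is the essence of what ScalaCheck automates.
def forAll[A](gen: Random => A, trials: Int = 100)(prop: A => Boolean): Boolean = {
  val rng = new Random(0) // fixed seed for reproducibility
  (1 to trials).forall(_ => prop(gen(rng)))
}

// A generator for small random lists of ints.
val intList: Random => List[Int] =
  rng => List.fill(rng.nextInt(10))(rng.nextInt(100))

// Property: reversing a list twice yields the original list.
val holds = forAll(intList)(xs => xs.reverse.reverse == xs)

// A deliberately wrong property: reversing preserves order for every
// list. Random inputs quickly find a counterexample.
val fails = forAll(intList)(xs => xs.reverse == xs)
```

Unlike example-based frameworks such as JUnit, the test author states a law once and the tool searches for inputs that break it.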
CHECK OUT WHO MAKES THIS POSSIBLE!
FACULTY OF COMPUTER AND INFORMATION SCIENCE
Večna pot 113, 1000 Ljubljana
FREE SHUTTLE SERVICE
A free shuttle service, offered by our Gold Sponsor, the low-cost shuttle transfer company GoOpti, will be available on both conference days! The service will run from 07:30 to 09:30 towards the conference venue (the pick-up point is at the intersection of Cankarjeva ulica and Beethovnova ulica) and from 18:00 until 20:00 towards the city center.
Train connections from Trieste, Zagreb, Vienna and Klagenfurt (check Slovenske Železnice for details)
Below you will find an affiliate link to the GoOpti low-cost shuttle transfer service – they have generously offered a 10% discount for BeeScala attendees. Make sure you book well in advance for the lowest fare possible!
Best Western Premier Hotel Slon
Grand Hotel Union Business
- (aka Blind Bird)
For those who recognize the value of the moment and know how to seize it.
Only available until 23rd of Sep 2016!
- (aka General Admission)
For those who specu-late.
Only available until 24th of Nov 2016!
- (aka Early Bird)
For those who need to see the sea to paint it.
Only available until 7th of Nov 2016!