Posted in AWS

Dynamo DB Gotchas

A few gotchas when dealing with DynamoDB that a newcomer may not expect. DynamoDB is sold on its simplicity, high throughput and low latency. Its ideal use case is an application that needs fast lookups on a key that returns a single item from a big data set, such as storing web site user preferences. The idea is that you have a hash key that is distributed well across the data set. It’s not particularly suited if the lookup key is not known at the time of schema design, if the lookup key value can change over time, or if multiple different query attributes are needed on the same data set. Even being aware of these limitations, there are some gotchas.

Provisioned Throughput and Hot Partitions

Provisioned throughput is split across the number of partitions for a table. What does this mean in practice? If you have a table that is split across 10 partitions and have provisioned the table for 50 read units of capacity, then each partition will have only 5 read units of capacity. This won’t typically be a problem if your reads are distributed evenly across the table, but if you have a large number of reads on a small subset of the data you can end up with a single partition receiving a disproportionate amount of traffic.
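To make the arithmetic concrete, here is a tiny sketch of that split (the numbers are the hypothetical ones from the paragraph above, not from a real table):

```scala
// Hypothetical figures: 50 provisioned read units over 10 partitions.
val provisionedReadUnits = 50.0
val partitions = 10

// DynamoDB divides a table's provisioned throughput evenly across partitions.
val readUnitsPerPartition = provisionedReadUnits / partitions

println(readUnitsPerPartition) // prints 5.0
```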

There isn’t very good partition-level monitoring in Amazon CloudWatch to detect this. It’s difficult to even know how many partitions a table has. The symptom you will see is throttled requests even though you are well below the provisioned capacity for the table.

There is no way to manage the partitioning and split only the hot partition (like you can with Kinesis), so your easiest option is to implement a caching layer in the application to reduce frequent requests to the same rows.

Partition Splits Are One Way

Increasing read/write throughput or storage may cause DynamoDB to require more partitions for a table, which will trigger a split of partitions. Reducing capacity does not reduce the number of partitions, as partitions are never merged. This is a problem when trying to handle a burst of traffic: if you temporarily increase the provisioned throughput to accommodate something like a bulk load of data, this will cause the partitions in DynamoDB to split, but reducing the capacity afterwards won’t reduce your partitions. So the number of partitions on a table is based on the maximum storage or throughput ever configured. This can be a contributing factor to the hot partition problem above.
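As a rough sketch of the effect (the per-partition limits below come from the AWS guidelines of the time and may since have changed): the partition count is driven by the highest storage and throughput the table has ever been configured for, at roughly one partition per 10 GB of data, 3000 read units or 1000 write units.

```scala
import scala.math.{ceil, max}

// Rough estimate based on the historical DynamoDB sizing guidance:
// partitions never merge, so the count follows the *maximum ever* settings.
def estimatedPartitions(maxStorageGB: Double, maxReadUnits: Double, maxWriteUnits: Double): Int =
  max(ceil(maxStorageGB / 10).toInt,
      ceil(maxReadUnits / 3000 + maxWriteUnits / 1000).toInt)

// Hypothetical bulk load: briefly provisioning 6000 RCU / 1000 WCU leaves the
// table with 3 partitions even after capacity is dialled back down.
println(estimatedPartitions(maxStorageGB = 5, maxReadUnits = 6000, maxWriteUnits = 1000)) // prints 3
```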

See Guidelines for Working with Tables for more information on how this works.


Cost

There have been a few blog posts on the cost of DynamoDB compared to other database layers. The summary is that, for larger data sets, you are effectively paying around 50x the cost of Aurora or RDS for the convenience of DynamoDB. In my recent projects DynamoDB has become a major portion of hosting costs.



Posted in Scala

Multithreading with Shared Data in Scala

Scala has many built-in options that make it extremely easy to parallelize work, and generally I have a preference not to share data between threads. Mostly Scala’s preference for immutability and functional-style programming helps you reason about problems in a scalable way without requiring you to think about concurrency or shared mutable data.

Occasionally I still find the need to share data across multiple threads. So here’s an explanation of some of the options available in Scala.

Consider this example of a pub/sub pattern (or Observer pattern). Below is a trait that provides classes with the ability to have clients listen for changes.

import scala.collection.mutable

trait SynchronizedPublisher[T] {
  private[this] val eventListeners = mutable.Map[AnyRef, (T) => Unit]()

  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = {
    eventListeners += (ref -> fn)
  }

  def unsubscribe(ref: AnyRef): Unit = {
    eventListeners -= ref
  }

  def fireEvent(event: T): Unit = {
    eventListeners.values.foreach(_(event))
  }
}


Classes can use this by extending the trait, which gives them the ability to notify observers of new events.

The intention is for the trait to be used in a singleton pattern, so multiple threads may call these functions concurrently, but the core data structure is a mutable collection that is not designed for multi-threaded use.
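For illustration, here is a sketch of that singleton usage with made-up names (it repeats a condensed, still non-thread-safe copy of the trait so the snippet stands alone):

```scala
import scala.collection.mutable

// Condensed copy of the publisher trait so this sketch is self-contained.
trait SynchronizedPublisher[T] {
  private[this] val eventListeners = mutable.Map[AnyRef, (T) => Unit]()
  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = eventListeners += (ref -> fn)
  def unsubscribe(ref: AnyRef): Unit = eventListeners -= ref
  def fireEvent(event: T): Unit = eventListeners.values.foreach(_(event))
}

// Hypothetical singleton publisher of user-login events.
object LoginEvents extends SynchronizedPublisher[String]

val received = {
  val buffer = mutable.ListBuffer[String]()
  val listener = new AnyRef
  LoginEvents.subscribe(listener, user => buffer += user)
  LoginEvents.fireEvent("jack")     // recorded by the listener
  LoginEvents.unsubscribe(listener)
  LoginEvents.fireEvent("jill")     // nobody is listening any more
  buffer.toList
}

println(received) // prints List(jack)
```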

Let’s look at the options for making this thread safe…


Synchronized Block

Based on the Java synchronized block, this is the simplest method. You can take a mutex lock on any object while executing a block of code. You could update the publisher implementation like below…

trait SynchronizedPublisher[T] {

  private[this] val eventListeners = mutable.Map[AnyRef, (T) => Unit]()
  private[this] val rwLock = new Object

  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = {
    rwLock.synchronized {
      eventListeners += (ref -> fn)
    }
  }

  def unsubscribe(ref: AnyRef): Unit = {
    rwLock.synchronized {
      eventListeners -= ref
    }
  }

  def fireEvent(event: T): Unit = {
    rwLock.synchronized {
      eventListeners.values.foreach(_(event))
    }
  }
}


The advantage of this approach is that it’s fast and simple to use.

The disadvantage is that it’s an exclusive lock, so only one thread can hold it at a time. This has two problems:

  1. Access to these functions is effectively single threaded, so it’s not a very good choice if most threads will be reading rather than changing state, as it still supports only one reader at a time.
  2. You need to be careful not to hold the lock too long, and not to grab other locks inside synchronized blocks, or you risk creating a deadlock.

Atomic Reference

Leverage the existing, powerful Java concurrency options. There are Java collections that support concurrency, like java.util.concurrent.ConcurrentHashMap<K,V>.
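As a sketch of that option (my own illustration, not from the original trait), the listener map could simply be a ConcurrentHashMap, making the individual operations thread-safe without explicit locks:

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.function.BiConsumer

// Sketch: publisher backed by a ConcurrentHashMap instead of a mutable.Map.
trait ConcurrentPublisher[T] {
  private[this] val eventListeners = new ConcurrentHashMap[AnyRef, (T) => Unit]()

  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = eventListeners.put(ref, fn)
  def unsubscribe(ref: AnyRef): Unit = eventListeners.remove(ref)
  def fireEvent(event: T): Unit =
    eventListeners.forEach(new BiConsumer[AnyRef, (T) => Unit] {
      def accept(ref: AnyRef, fn: (T) => Unit): Unit = fn(event)
    })
}

// Hypothetical usage
object NumberEvents extends ConcurrentPublisher[Int]

val total = {
  var sum = 0
  NumberEvents.subscribe(new AnyRef, n => sum += n)
  NumberEvents.fireEvent(21)
  NumberEvents.fireEvent(21)
  sum
}

println(total) // prints 42
```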

A general-purpose approach is to use an AtomicReference, which is built like a wrapper around a volatile variable with the additional functionality of a compareAndSet operation that is guaranteed to be atomic.

A sample implementation could look like this…

import java.util.concurrent.atomic.AtomicReference

trait SynchronizedPublisher[T] {

  private[this] val eventListeners = new AtomicReference(Map[AnyRef, (T) => Unit]())

  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = {
    update(_ + (ref -> fn))
  }

  def unsubscribe(ref: AnyRef): Unit = {
    update(_ - ref)
  }

  protected[this] def fireEvent(event: T): Unit = {
    eventListeners.get().values.foreach(_(event))
  }

  private[this] def update(fn: (Map[AnyRef, (T) => Unit]) => Map[AnyRef, (T) => Unit]): Unit = {
    while (true) {
      val listenerMap = eventListeners.get()
      if (eventListeners.compareAndSet(listenerMap, fn(listenerMap)))
        return // success
    }
  }
}


The advantage of this approach is that it’s lock-free and thread-safe. It performs well, particularly if access is mostly reads and there isn’t much contention on writes.

The disadvantage is that it might not work as well if access is mostly writes and there is high contention. It’s also designed for use with single objects, so it might not be appropriate if you need to change a number of objects in a transactional way.

Reentrant Read Write Lock

Another general-purpose option from the Java concurrency library is java.util.concurrent.locks.ReentrantReadWriteLock. You can add some Scala sugar to let you use it like a synchronized block, so you don’t have to worry about forgetting to release it. (This could probably be implemented as an implicit wrapper on the ReentrantReadWriteLock object for an even lighter API.)

import java.util.concurrent.locks.ReentrantReadWriteLock

object Util {
  def withReadLock[B](rwLock: ReentrantReadWriteLock)(fn: => B): B = {
    rwLock.readLock().lock()
    try {
      fn
    } finally {
      rwLock.readLock().unlock()
    }
  }

  def withWriteLock[B](rwLock: ReentrantReadWriteLock)(fn: => B): B = {
    rwLock.writeLock().lock()
    try {
      fn
    } finally {
      rwLock.writeLock().unlock()
    }
  }
}

Then the updated publisher trait would look like this…

trait SynchronizedPublisher[T] {

  private[this] val eventListeners = mutable.Map[AnyRef, (T) => Unit]()
  private[this] val rwLock = new ReentrantReadWriteLock()

  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = {
    Util.withWriteLock(rwLock) {
      eventListeners += (ref -> fn)
    }
  }

  def unsubscribe(ref: AnyRef): Unit = {
    Util.withWriteLock(rwLock) {
      eventListeners -= ref
    }
  }

  def fireEvent(event: T): Unit = {
    Util.withReadLock(rwLock) {
      eventListeners.values.foreach(_(event))
    }
  }
}


This approach has a big advantage over synchronized blocks, as a ReadWriteLock supports multiple readers and only requires an exclusive lock when a thread needs to write. The semantics are no harder to use than regular synchronized blocks. If the potentially blocking nature of the code is a problem, a variation could be to wrap any potentially blocking call (like acquiring a lock) in a Future to avoid blocking the caller.
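A sketch of that Future variation (the helper name withWriteLockAsync is my own invention, not part of the original trait):

```scala
import java.util.concurrent.locks.ReentrantReadWriteLock
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val rwLock = new ReentrantReadWriteLock()

// The potentially blocking lock acquisition happens on another thread, so the
// caller immediately gets a Future back instead of blocking.
def withWriteLockAsync[B](fn: => B): Future[B] = Future {
  rwLock.writeLock().lock()
  try fn finally rwLock.writeLock().unlock()
}

val result = withWriteLockAsync { 2 + 2 }
println(Await.result(result, 5.seconds)) // prints 4 (Await used here only for the demo)
```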

Akka Actors

Akka Actors take a different approach to the problem by removing the need for developers to think about locks or threading. An actor exists as a self-contained entity that is responsible for its internal data structure. You don’t call functions on the actor; instead, you get a reference to the actor that allows callers to put a message on its queue asking it to do some work. The actor has a single thread running in a continual loop that takes messages off the queue, processes them, and responds if necessary (or sends a message to another actor). There is no blocking from the caller’s perspective, which provides a great model for concurrent programming that is compatible with reactive-style programming.

An example of an actor using the observer pattern might look like this…
(this is a direct translation; the callbacks could be messages to other actors instead)

import akka.actor.{Actor, Props}
import scala.collection.mutable

object PublisherActor {
  // Messages live in the companion object so callers can construct them
  case class NewEvents[T](events: T)
  case class Subscribe[T](ref: AnyRef, fn: (T) => Unit)
  case class UnSubscribe(ref: AnyRef)

  def props[T] = Props(new PublisherActor[T])
}

class PublisherActor[T] extends Actor {
  import PublisherActor._

  val eventListeners = mutable.Map[AnyRef, (T) => Unit]()

  def receive = {
    case NewEvents(newMsg) =>
      for ((_, listener) <- eventListeners) {
        listener(newMsg.asInstanceOf[T]) // erasure means no compile-time check here
      }
    case Subscribe(ref, fn) =>
      eventListeners += (ref -> fn.asInstanceOf[(T) => Unit])
    case UnSubscribe(ref) =>
      eventListeners -= ref
  }
}

To use the actor you need an actor reference (see the Akka documentation). Then you send it a message (? is ask, which expects a response as a Future; ! is tell, which doesn’t expect a response):

myActorRef ! Subscribe(ref, newMsgFn)


This approach has a big advantage over the approaches above in that, from a developer perspective, it can be less error prone and easier to reason about as the interactions become more complex. The big power comes when you have multiple actors communicating with each other, as you don’t need to worry about locks and the potential of causing a deadlock. It’s also compatible with the message-driven approach of reactive programming and doesn’t introduce any blocking code into your application.

A big disadvantage of the actor approach is that it is still effectively a single-threaded event queue, so only one operation can occur at a time, which forces you to be quite disciplined on the actor thread. The standard approach for any big compute or IO is to copy the actor state to local variables and use a Future to free up the actor thread. If the actor can’t process messages faster than it receives new ones, then messages build up in the mailbox, which leads to increased latency, memory issues and failures in the system.

The second big disadvantage is the loss of type safety. There is no compile-time checking that the messages being passed are compatible with the actor, so if you are not disciplined this can cause problems with messages not being acted on. Strong typing is a major advantage of Scala, so it’s not a good sacrifice to have to make.

Scala STM

STM stands for Software Transactional Memory, and the concept is similar to what is seen in other storage layers like databases. Scala STM is an implementation that gives ACID-style protection to in-memory data structures. It is based on optimistic locking, which means it doesn’t hold a lock before executing an atomic block; instead it keeps a snapshot of the data structure before the changes and checks whether changes occurring in an atomic block interleaved with changes from other threads. If they did, it automatically rolls back and retries the atomic block. If most of the time there is no conflict, you can get better throughput as no locking is required.

Scala STM’s general abstraction is Ref, a wrapper on the data structure being protected; any access must be done in an atomic block. Scala STM also provides transactional data structures, which are a better fit for the publisher example.

import scala.concurrent.stm._

trait SynchronizedPublisher[T] {

  private[this] val eventListeners = TMap[Any, (T) => Unit]()

  def subscribe(ref: AnyRef, fn: (T) => Unit): Unit = {
    eventListeners.single += (ref -> fn)
  }

  def unsubscribe(ref: AnyRef): Unit = {
    eventListeners.single -= ref
  }

  protected[this] def fireEvent(event: T): Unit = {
    val listenersToNotify = eventListeners.single.toList
    listenersToNotify.foreach(_._2(event)) // don't use eventListeners directly as there's a small potential for replay
  }
}


The advantage of this approach is that it’s very clean: the developer is insulated from much of the complexity.

The big disadvantage is that STM introduces a performance overhead. This might balance out against the other options in a highly multithreaded scenario with high contention on writes, but it will often perform worse when those criteria are not met; it is particularly slower in environments with little write contention or mostly single-threaded access. The other disadvantage is that you have to be aware the code can be rolled back and replayed at any time, so you shouldn’t make changes to anything outside of Scala STM’s protection inside an atomic block (e.g. database writes or IO that changes state). The code should always be rerunnable.


Conclusion

All of the options here are reasonable choices and have their advantages. The AtomicReference approach would be my preference for this example, as the publisher mostly reads the collection of listeners and AtomicReference performs really well in that scenario. If the scenario were complicated enough that I couldn’t model it with an AtomicReference, then I would look at either reentrant read/write locks or actors, depending on the scenario.

Posted in Kotlin

Kotlin Sequences

Kotlin sequences are an ordered collection of elements that is potentially unbounded in size, with values evaluated lazily. They are great at representing collections where the size isn’t known in advance, like reading lines from a file. Java 8 and Scala both have the concept of streams, which is the same idea; Kotlin has chosen the name sequence to avoid naming conflicts when running on a Java 8 JVM.

The api documentation is here

I haven’t seen a lot of example usage, so here are a couple of examples that I’m keeping for reference.

Simple Arithmetic and Geometric Progressions

val nums = generateSequence(1) {it + 1} // sequence starting at 1 incrementing by 1
val powersOf2 = generateSequence(1) {it * 2} // sequence of powers of 2

// Take creates a new sequence (so values are not yet evaluated)
// toList() causes the 10 elements in the sequence to be evaluated
println(nums.take(10).toList()) // prints [1,2,3,4,5,6,7,8,9,10]

Map and Filters

Map, fold and filter functions can be applied as for any other collection, and are only evaluated when a value is needed.

val squares = generateSequence(1) {it + 1}.map {it * it}
val oddSquares = squares.filter {it % 2 != 0}

println(oddSquares.take(5).toList()) // prints [1, 9, 25, 49, 81]

Mapping Java Readers to a Sequence

val reader: BufferedReader = ...
val lines = generateSequence {reader.readLine()}.takeWhile {it != null}

This gives you a nice little collection that you can run forEach, map or fold operations on, without having to read the whole file into memory upfront.

Advanced Examples

Kotlin’s lazy evaluation of values is a bit limited in that the next element can only be computed from the previous element in the sequence. That’s fine if the next element is a simple computation on the previous one, but it can be quite difficult if you need to know a number of previous elements.

Fibonacci Sequence

The Fibonacci numbers are a sequence where the next value is found by adding together the previous two values. An easy way to do this in Kotlin is to start with a sequence of Pairs that represent the two previous values, so both are available to the next-element calculation, then apply a map so the resulting sequence only has the first element of each Pair.

val fibonacci = generateSequence(1 to 1) {it.second to it.first+it.second}.map {it.first}
println(fibonacci.take(10).toList()) // prints [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]


Prime Numbers

A prime is a number that is only divisible by 1 and itself (1 itself not counting as prime), and every number can be written as a product of primes.

To implement this as a sequence of primes, a clean way is to define the next prime as the next integer that is not divisible by any of the previous primes in the stream. This could be solved using the Pairs approach like the Fibonacci example, but an alternative recursive-style approach is shown as well.

Option 1 – Pairs Approach

val primes = generateSequence(2 to generateSequence(3) {it + 2}) {
  val currSeq = it.second.iterator()
  val nextPrime = currSeq.next()
  nextPrime to currSeq.asSequence().filter { it % nextPrime != 0}
}.map {it.first}
println(primes.take(10).toList()) // prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

Option 2 – Recursive Approach

The recursive approach might seem a bit overkill for this example but it does provide a useful lazy plus operator for other problems.

Define a plus operator on Sequence that allows appending a lazily evaluated sequence generator function. This allows the calculation of the next part of the sequence to be defined recursively without being eagerly evaluated. The function is not specific to the primes example and can be used in any similar case.

public operator fun <T> Sequence<T>.plus(otherGenerator: () -> Sequence<T>) =
  object : Sequence<T> {
    private val thisIterator: Iterator<T> by lazy { this@plus.iterator() }
    private val otherIterator: Iterator<T> by lazy { otherGenerator().iterator() }
    override fun iterator() = object : Iterator<T> {
      override fun next(): T =
        if (thisIterator.hasNext())
          thisIterator.next()
        else
          otherIterator.next()

      override fun hasNext(): Boolean = thisIterator.hasNext() || otherIterator.hasNext()
    }
  }

So now, to get all primes, you can define a recursive sequence where a number is prime if it is not divisible by any previous prime in the sequence.

fun primesFilter(from: Sequence<Int>): Sequence<Int> = from.iterator().let {
  val current = it.next()
  sequenceOf(current) + { primesFilter(it.asSequence().filter { it % current != 0 }) }
}

val primes = primesFilter(generateSequence(2) {it + 1})
println(primes.take(10).toList()) // prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
Posted in Game Development, Kotlin

Game Development with Kotlin and Libgdx

I’ve been using libGDX for a couple of simple games. It’s a great platform as it provides cross-platform game development and is compatible with any JVM language. I’ve been using Kotlin and it works fantastically with libGDX, so you get the power of a modern language that doesn’t add much bulk to an Android app and still have access to a great toolkit. The one caveat with using Kotlin (or any other JVM language) with libGDX is that you can’t target HTML5, as GWT operates on the Java source code (not the bytecode). You can still target desktop, Android and iOS, so it’s not much of a limitation for me.

I’m not going to give a tutorial as there are already great ones out there but I’ll keep a collection of useful links I discover as I go.

Some useful links to get you started:


Example projects written in Kotlin-Libgdx:


Posted in Kotlin, Scala

Scala vs Kotlin

It’s been a long time since I’ve updated this blog. Over the year I’ve moved away from Scala as my preferred language and towards Kotlin. I’ve found Kotlin a refreshing approach, as it’s borrowed a lot of the good things I liked about Scala but kept things simple and practical by avoiding a lot of the gotchas and ambiguity that can exist in Scala.

Here is a collection of things I like about Scala and Kotlin and also a comparison of how these features are accomplished in each language.

Type Declaration and Inference

Something I love about both these languages is that they have static typing with type inference. This gives you the power of compile-time type checking without the declarative boilerplate. It largely works the same in both languages. Both languages also have a preference for immutable declarations, with the optional type annotation placed after the variable name.

Example, the below code is the same in both languages:

Declare an immutable variable named age of type Int:

val age = 1

Declare a mutable variable of type String:

var greeting = "Hello"

Both languages support lambda functions as first class citizens that can be assigned to variables or passed as function parameters.


Scala:

val double = (i: Int) => { i * 2 }


Kotlin:

val double = {i: Int -> i * 2}

Data / Case Class

Both Scala and Kotlin have a similar concept of a data class, which can be used to represent a data model object.

Scala’s Approach

Scala calls this a case class and it can be defined like:

case class Person(name: String, age: Int)

This gives you the following main advantages over a normal class:

  • Has an apply method (you don’t need to use the ‘new’ keyword to construct instances)
  • Accessor methods are defined for each property (if a property is defined as var then a setter is also defined)
  • toString, equals and hashCode are sensibly defined
  • A copy function
  • Has an unapply method (which allows use in match expressions)

Kotlin’s Approach

Kotlin calls this a data class and it’s defined like:

data class Person(val name: String, val age: Int)

Key Features

  • Accessor methods are defined for each property (if a property is defined as var then a setter is also defined). This is not unique to data classes and works on any class in Kotlin.
  • Sensibly defined toString, equals and hashCode
  • A copy function
  • component1..componentN functions, similar in use to unapply
  • JavaBean getters and setters are generated, so native Java frameworks (Hibernate, Jackson) work without change

Kotlin doesn’t need a special apply method as it doesn’t require a ‘new’ keyword to instantiate classes, so this is a standard constructor definition like any other class.


Comparison

Generally, data and case classes are similar.

This example usage works the same in Kotlin or Scala:

val jack = Person("Jack", 1)
val olderJack = jack.copy(age = 2)

Generally I’ve found data and case classes interchangeable in day-to-day use. Kotlin does enforce some restrictions on extending a data class with inheritance, but it’s done for good reasons when you consider the implementations of equals and the componentN functions, and it prevents gotcha moments.

Scala case classes can be more powerful in match statements than Kotlin’s handling of data classes in ‘when’ expressions, which is something I miss.

The Kotlin approach works a lot better when used from existing Java frameworks, as a data class looks like a normal Java bean.

Both languages support supplying parameters by name and allow for default values.
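For example, in Scala (Kotlin reads almost identically), with a hypothetical Person class whose age defaults to 0:

```scala
// Hypothetical data model with a default value for age.
case class Person(name: String, age: Int = 0)

val jack = Person(name = "Jack")          // default age is used
val jill = Person(age = 2, name = "Jill") // named parameters in any order

println(jack.age)  // prints 0
println(jill.name) // prints Jill
```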

Null Safety / Optionality

Scala’s Approach

Scala’s approach to null safety is the Option monad. Simply, an option can be one of two concrete types: Some(x) or None.

val anOptionInt: Option[Int] = Some(1)


val anOptionInt: Option[Int] = None

You can operate on the option using functions on the option class like “isDefined” and “getOrElse” (to provide a default value) but more commonly you would use monad operations like map, foreach or fold which will treat the option as a collection containing 0 or 1 elements.

For example to sum two Optionally defined Ints you could do:

val n1Option: Option[Int] = Some(1)
val n2Option: Option[Int] = Some(2)
val sum = for (n1 <- n1Option; n2 <- n2Option) yield { n1 + n2 }

The variable sum will have the value Some(3). This is leveraging Scala’s for comprehension, which becomes a foreach or a flatMap depending on the use of the yield keyword.

Another example of chaining could be:

case class Person(name: String, age: Option[Int])
val person: Option[Person] = Some(Person("Jack", Some(1)))
for (p <- person; age <- p.age) {
  println(s"The person is aged $age")
}
This will print “The person is aged 1”

Kotlin’s Approach

Kotlin’s approach borrows from Groovy-style syntax and is very practical in everyday use. In Kotlin all types are non-nullable by default and must be explicitly declared nullable using ‘?’ if they can contain null.

The same example could be written

val n1: Int? = 1
val n2: Int? = 2
val sum = if (n1 != null && n2 != null) n1 + n2 else null

This is much closer to Java syntax, except Kotlin enforces compile-time checks: it’s not possible to use a nullable variable without checking that it is not null first, so you won’t fear NullPointerExceptions. It’s also not possible to assign null to a variable declared as non-nullable. The compiler is quite smart about checking branch logic, so you don’t get the over-guarding you see in Java, where the same variable is checked for null multiple times.

An equivalent Kotlin code for the second example of chaining is:

data class Person(val name: String, val age: Int?)
val person:Person? = Person("Jack", 1)
if (person?.age != null) {
  println("The person is aged ${person?.age}")
}

An alternative is also available using “let”, which replaces the if block with:

person?.age?.let {
  println("The person is aged $it")
}


Comparison

I really prefer the Kotlin approach. It’s a lot easier to read and understand what’s going on, and multiple levels of nesting are easy to handle. The Scala approach has a symmetry in that other monads (e.g. Futures) can be acted on the same way as Option, which some people like, but I’ve found it can get complicated really fast once there is a little bit of nesting. There are also a lot of gotchas with for comprehensions: under the covers they are maps or flatMaps, but you don’t get compile-time warnings if you do something wrong like mix monads or pattern match without covering alternative paths, which leads to cryptic runtime exceptions.

Kotlin’s approach also bridges the gap when integrating with Java code, as Java types can default to nullable, whereas Scala still has to support null as a concept without null-safety protection.

Functional Collections

Scala of course supports many functional goodies. Kotlin is a little more restrictive but the basics are covered.

There isn’t much difference in the basic fold and map functions.


Scala:

val numbers = 1 to 10
val doubles = numbers map {_ * 2}
val sumOfDoubles = doubles.fold(0) {_ + _}


Kotlin:

val numbers = 1..10
val doubles = numbers.map {it * 2}
val sumOfDoubles = doubles.fold(0) {x, y -> x + y}

Both support the concept of lazy evaluated sequences. For example printing first 10 even squares.


Scala:

val numbers = Stream.from(1) // all natural numbers
val squares = numbers map {x => x * x}
val evenSquares = squares.filter {_ % 2 == 0}
println(evenSquares.take(10).toList)


Kotlin:

val numbers = generateSequence(1) {it + 1} // all natural numbers
val squares = numbers.map {it * it}
val evenSquares = squares.filter {it % 2 == 0}
println(evenSquares.take(10).toList())

Implicits Conversion vs Extension Functions

This is an area where Scala and Kotlin diverge a little.

Scala’s Approach

Scala has a concept of implicit conversions that lets you add extra behaviour to a class by automatically converting it to another class when needed. An example of this:

object Helpers {
  implicit class IntWithTimes(x: Int) {
    def times[A](f: => A): Unit = {
      for (i <- 1 to x) {
        f
      }
    }
  }
}
Then later in the code you can do:

import Helpers._

5 times { println("Hello") }

This will print “Hello” 5 times. It works because when you try to use the “times” function, which doesn’t exist on Int, the value is automatically boxed into an IntWithTimes object and the times function is executed on that.

Kotlin’s Approach

Kotlin has the concept of extension functions that can be used to accomplish a similar job. In the Kotlin approach you define a normal function but prefix the function name with a type to extend.

fun Int.times(f: () -> Unit) {
  for (i in 1..this) {
    f()
  }
}

5.times {println("Hello")}


Comparison

The Kotlin approach fits the use case that I would generally use this Scala capability for, and has the advantage of being a little simpler to understand.

Scala Features Not Present in Kotlin that I won’t Miss

One of the best parts of the Kotlin language for me is not the features it has, but rather the features from Scala that are not in Kotlin:

  • Call by name – This destroys readability. If a function is being passed, it’s a lot easier when it’s visible that it’s a function in a basic review of the code. I don’t see any advantage this gives over passing explicit lambdas.
  • Implicit parameters – This is something I’ve really hated. It leads to situations where code changes drastically based on a change to an import statement. It makes it really hard to tell what values will be passed to a function without good IDE support.
  • Overloaded FOR comprehension – To me this is a clunky way to get around the problem of dealing with multiple monads.
  • The mess of optional syntax on infix and postfix operators – Kotlin is a little more prescriptive, which means the code is less ambiguous to read and it’s not as easy for a simple typo to become a non-obvious error.
  • Operator overloading to the max – Kotlin allows overloads of the basic operators (+, – etc.) but Scala allows any bunch of characters to be used, and library developers seem to have embraced this. Am I really meant to remember the difference between “~%#>” and “~+#>”?
  • Slow compile times
Posted in Scala

Receiving Mail in Scala

On a recent project I had the need to read emails from my app, so I thought I’d share a sample. This problem fits neatly into the construct of an Akka actor.

import javax.mail.internet.MimeMessage
import javax.mail.{Folder, MessagingException, NoSuchProviderException, Session}

import actor.MailReceiverActor.Check
import akka.actor.{Actor, Props}

class MailReceiverActor(host: String, port: Int, user: String, password: String, inboxName: String) extends Actor {

  override def receive: Receive = {
    case Check =>
      val props = System.getProperties
      props.setProperty("mail.store.protocol", "imaps")
      val session = Session.getDefaultInstance(props, null)
      val store = session.getStore("imaps")
      try {
        store.connect(host, port, user, password)
        val inbox = store.getFolder(inboxName)
        inbox.open(Folder.READ_ONLY)
        inbox.getMessages foreach {
          case message: MimeMessage => processMessage(message)
          case _ => // do nothing or log that you only process MimeMessages
        }
      } catch {
        case e @ (_: NoSuchProviderException | _: MessagingException) => // log the error
      } finally {
        if (store.isConnected) store.close()
      }
  }

  def processMessage(message: MimeMessage): Unit = {
    // define this function to handle the message
  }
}

object MailReceiverActor {
  case object Check

  def props(host: String, port: Int, user: String, password: String, inboxName: String) = {
    Props(new MailReceiverActor(host, port, user, password, inboxName))
  }
}

Then to use this actor you can just create a schedule for how often the mail will be checked. In Play you can place this in the onStart of Global.

import scala.concurrent.duration._

val system = Akka.system(app)
val mailActor = system.actorOf(MailReceiverActor.props(host, port, user, password, inboxName))

implicit val executionContext = // user defined or use the default play context
system.scheduler.schedule(5.seconds, 20.minutes, mailActor, MailReceiverActor.Check)
Posted in Scala

Registration and Login with Play 2.3 Revisited (Silhouette)

My early attempts at Play login/registration used SecureSocial. SecureSocial looked promising: it provided good support for the features I wanted, and getting a simple implementation up and going was relatively easy.

It’s only when I started going past the initial simple implementation that it gave me real problems. The version for Play 2.3 is still immature and there isn’t enough documentation on the interfaces that you need to implement. Customising the user services to get login and registration working with an existing backend takes too long and leads to hard-to-debug errors. In conclusion, it just isn’t worth the headache; the library makes life harder.

So I’ve now moved to Play Silhouette, which is actually a fork of SecureSocial. The author forked it for a lot of the same reasons that I dislike SecureSocial. Silhouette is more modular, simpler to customise and has good documentation. There are also plenty of Activator seed projects that show how it works and how to customise it.

The Silhouette Slick Seed provides a good example project that shows how to integrate with your own backend.