briantheprogrammer

Home of the O2 Programming Language

A Chat Server In O2

Posted by Brian Andersen on June 19, 2012

In my prior post I introduced O2's basic operators for distributed computing using the example of a simple TCP echo server. The echo server received messages from connected clients and spat those same messages right back at them. Now it's time to use those same operators to build something slightly harder. Today's demo is a point-to-point chat server. Like the echo server, the chat server accepts client connections followed by messages from each connected client. But instead of echoing each message back to its sender, the server delivers it to another connected client named in the message. Messages travel from one client to a different client rather than back to the one that sent them.

This example is slightly more complex than the echo server. In the echo demo the client bots used only a single fiber to send requests and receive responses. Today's client will use two fibers: one to send messages (the outbox) and another to receive messages asynchronously from other clients (the inbox). The chat server will still use a single fiber, but it will use the blackboard to keep track of state between requests. So in this demo you will see how the concurrency operators from the prior series and the new messaging operators work together.
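
Before diving in, here is a rough sketch of the three kinds of traffic involved, written as plain Python dictionaries rather than O2 blocks. The field names mirror the O2 code below; the client names are simple strings standing in for O2 symbols like #client,0l.

# Sent by a client's outbox: deliver this text to the named recipient.
chat_request = {"verb": "chat", "to": "client,1", "from": "client,0",
                "text": "hey client,1, it's client,0"}

# Sent by a client's inbox: I am waiting; answer when a chat arrives for me.
wait_request = {"verb": "wait", "user": "client,1"}

# What the waiting inbox eventually receives once the server pairs the two.
delivery = {"to": "client,1", "from": "client,0",
            "text": "hey client,1, it's client,0"}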

The Code

{
  server:
  { deliver:
    { waiter:(#wait #chat + $R) dispatch 1l
      :$waiter.id reply eval {to:$waiter.to from:$waiter.from text:$waiter.text}
    }

Let's start with the definition of the server bot. The server requires a subroutine that will be invoked in two places; we will call this operator "deliver". There are two types of messages added to the blackboard. #wait messages indicate that a client is waiting for the next message to be delivered via its inbox, as discussed above. #chat messages are produced by a client outbox and are waiting to be delivered to a different client. To deliver a message we need two events queued in the blackboard: a #wait message and a #chat message. Every #wait event needs to be paired with exactly one #chat event and vice versa. We use dispatch to pair the two requests and to ensure that neither of them is processed more than once. Then we reply to the id that the waiter recorded when it sent its #wait request. All of this should become more concrete as I continue.
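
For a plainer picture of the pairing, here is a minimal Python sketch of what I understand deliver to accomplish. The two per-recipient queues and the reply callback are stand-ins for the blackboard and the reply operator, not part of the actual implementation.

from collections import deque

# Stand-ins for the blackboard: outstanding #wait request ids and undelivered
# chats, both keyed by recipient name.
wait_queue = {"client,1": deque(["request-42"])}
chat_queue = {"client,1": deque([{"from": "client,0", "text": "hi"}])}

def deliver(user, reply):
    # Consume exactly one waiter and one chat so neither is handled twice,
    # then answer the waiter's still-open request with the chat's contents.
    request_id = wait_queue[user].popleft()
    chat = chat_queue[user].popleft()
    reply(request_id, {"to": user, "from": chat["from"], "text": chat["text"]})

deliver("client,1", lambda request_id, body: print(request_id, body))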

    chat:
    { message:$L accept 1l

This is the main loop in which the server processes both #wait and #chat events. Accept one request at a time from the listening server.

      :$message.body.verb switch
      { chat:
        { : (#chat + $message.body.to) write eval
            {i:++ to:$message.body.to from:$message.body.from text:$message.body.text}
          : ((#wait + $message.body.to) peek 1l) switch {:deliver $message.body.to}
          :$message.id reply {ack=true}
        }

Each message body contains a "verb" which is either #wait or #chat, depending on whether it was sent from the inbox or the outbox respectively. In this section we handle the #chat requests. First, write a line in the blackboard whose symbol indicates that there is a chat message awaiting delivery to a specific user. The "to" name appears both in the body of the data and in the symbol under which it is indexed in the blackboard, so that the message can be easily dispatched when that user comes along to retrieve it. Next, we use the peek operator to check whether the "to" user is already waiting. If they are, we invoke the deliver operator as discussed above. The server also replies directly to the chat request with an ack message. It is good practice for the server to ack all client requests and for the client to wait for those acks.

        wait:
        { : (#wait + $message.body.user) write eval {i:++ id:$message.id}
          : ((#chat + $message.body.user) peek 1l) switch {:deliver $message.body.user}
        }

Now let's consider what happens when the server receives a #wait request from the inbox. Again the fact of the request is recorded in the blackboard, this time under the name of the user who is waiting. The body of the blackboard data contains the id that will later be used to reply to the #wait request. Following a similar pattern to the #chat block, use peek to see if there is already a message pending delivery and invoke the deliver operator if so.

      }
      <-$L chat $R + 1l
    }

Reenter the chat operator using a tail recursion to loop forever.

    :fiber {<-(listen $L) chat 0l}
    <-wait 0l
  }

Now we are back in the main block of the server bot definition. This is where the chat fiber is launched. Use the wait 0l idiom to wait forever in the main fiber. This concludes the definition of the server bot.
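
Pulling the deliver step and the two verb branches together, here is a condensed Python sketch of the whole server loop as I read it. The accept and reply callables are assumptions standing in for O2's accept and reply operators, and the two dictionaries stand in for the blackboard.

from collections import deque

def serve(accept, reply):
    chats = {}   # recipient -> queue of undelivered chat bodies
    waits = {}   # recipient -> queue of outstanding #wait request ids

    def deliver(user):
        request_id = waits[user].popleft()
        chat = chats[user].popleft()
        reply(request_id, {"to": user, "from": chat["from"], "text": chat["text"]})

    while True:
        message = accept()                      # block for the next request
        body = message["body"]
        if body["verb"] == "chat":
            chats.setdefault(body["to"], deque()).append(body)
            if waits.get(body["to"]):           # recipient is already waiting
                deliver(body["to"])
            reply(message["id"], {"ack": True})
        elif body["verb"] == "wait":
            waits.setdefault(body["user"], deque()).append(message["id"])
            if chats.get(body["user"]):         # a chat is already queued
                deliver(body["user"])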

  client:
  { outbox:
    { to:$R.others at randoml 1 0l append count $R.others
      text:"hey " + (string $to) + ", it's " + string $R.me
      :cwrite (string $R.me) + ":" + $text
      :receive $L send eval {verb:#chat to:$to from:$R.me text:$text}
      <-$L outbox $R
    }

This is the client code, beginning with the outbox that I have been promising all along. The client code is configured with a block in its right argument. The block has two variables: $me, the name of the current bot, and $others, the list of possible recipients for chat messages. In the first line of the outbox the client randomly selects another client to direct its message to. Then it sends the message using the verb #chat to indicate to the server how this message should be processed, along with the to and from information. As discussed in the echo demonstration, the send operator yields a symbol which can be used as a token to the receive operator, which will then wait for a response to the given request. The outbox loops forever.
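
As a rough Python analogue (not O2), the outbox boils down to the loop below. The request parameter is an assumed helper that sends one body to the server and blocks until the reply, which in this case is just the server's ack.

import random

def outbox_loop(request, me, others):
    while True:
        to = random.choice(others)              # pick a random recipient
        text = f"hey {to}, it's {me}"
        print(f"{me}:{text}")                   # mirrors the cwrite line
        request({"verb": "chat", "to": to, "from": me, "text": text})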

    inbox:
    { message:receive $L send eval {verb:#wait user:$R.me}
      :cwrite (string $R.me) + ": got a message from " + (string $message.body.from)
      <-$L inbox $R
    }

Finally the inbox. The inbox sends a message to the server with the verb #wait and the name of the user waiting. This request will remain outstanding until someone sends a #chat message directed at that user. The inbox, like the outbox, loops forever.
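
The Python analogue of the inbox is even shorter. Again request is an assumed helper that sends one body and blocks for the reply; the point is that each call parks a single wait request at the server and does not return until a chat addressed to this user has been paired with it.

def inbox_loop(request, me):
    while True:
        reply = request({"verb": "wait", "user": me})
        print(f"{me}: got a message from {reply['from']}")   # mirrors the cwrite line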

    :fiber {<-(open $L) inbox $R}
    :fiber {<-(open $L) outbox $R}
    <-wait 0l
  }

This is where the client bot launches its inbox and outbox fibers. In this case I created separate connections for the two fibers, but they could just as easily share a single connection.

  :bot {<-#tcp,9090l server #server,0l}
  names:#client + symbol 0l to 4l
  :{ config:eval {me:$R others:$names at find $names != $R}
     <-bot {<-#tcp,localhost,9090l client $config}
   } each $names
  <-wait 0l
}

This last section of code configures and starts the client and server bots. The server bot is set to use the TCP protocol on port 9090. Each client bot gets a right argument that contains its name in the $me variable and a vector containing the names of the others in the $others variable. The main program fiber then waits forever while the bots chat with each other.
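
In Python terms (purely to illustrate the selection, not O2's at and find operators), the configuration step amounts to:

names = [f"client,{i}" for i in range(5)]        # stand-ins for #client,0l .. #client,4l

# Each bot is configured with its own name plus the list of everyone else.
configs = [{"me": me, "others": [n for n in names if n != me]} for me in names]

Here is the complete program.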

{
  server:
  { deliver:
    { waiter:(#wait #chat + $R) dispatch 1l
      :$waiter.id reply eval {to:$waiter.to from:$waiter.from text:$waiter.text}
    }
    chat:
    { message:$L accept 1l
      :$message.body.verb switch
      { chat:
        { : (#chat + $message.body.to) write eval
            {i:++ to:$message.body.to from:$message.body.from text:$message.body.text}
          : ((#wait + $message.body.to) peek 1l) switch {:deliver $message.body.to}
          :$message.id reply {ack=true}
        }
        wait:
        { : (#wait + $message.body.user) write eval {i:++ id:$message.id}
          : ((#chat + $message.body.user) peek 1l) switch {:deliver $message.body.user}
        }
      }
      <-$L chat $R + 1l
    }
    :fiber {<-(listen $L) chat 0l}
    <-wait 0l
  }
  client:
  { outbox:
    { to:$R.others at randoml 1 0l append count $R.others
      text:"hey " + (string $to) + ", it's " + string $R.me
      :cwrite (string $R.me) + ":" + $text
      :receive $L send eval {verb:#chat to:$to from:$R.me text:$text}
      <-$L outbox $R
    }
    inbox:
    { message:receive $L send eval {verb:#wait user:$R.me}
      :cwrite (string $R.me) + ": got a message from " + (string $message.body.from)
      <-$L inbox $R
    }
    :fiber {<-(open $L) inbox $R}
    :fiber {<-(open $L) outbox $R}
    <-wait 0l
  }
  :bot {<-#tcp,9090l server #server,0l}
  names:#client + symbol 0l to 4l
  :{ config:eval {me:$R others:$names at find $names != $R}
     <-bot {<-#tcp,localhost,9090l client $config}
   } each $names
  <-wait 0l
}

The Output

Let's look at some example output.

Welcome to O2, ask brian if you need help
O2>eval fload "../../demo/messaging/chat.o2"
#client,0l:hey #client,4l, it's #client,0l
#client,1l:hey #client,3l, it's #client,1l
#client,2l:hey #client,3l, it's #client,2l
#client,3l:hey #client,2l, it's #client,3l
#client,4l:hey #client,0l, it's #client,4l
#client,0l:hey #client,2l, it's #client,0l
#client,1l:hey #client,4l, it's #client,1l
#client,2l:hey #client,0l, it's #client,2l
#client,3l: got a message from #client,1l
#client,2l: got a message from #client,3l
#client,3l:hey #client,2l, it's #client,3l
#client,4l: got a message from #client,0l
#client,0l: got a message from #client,4l
#client,4l:hey #client,0l, it's #client,4l
#client,0l:hey #client,2l, it's #client,0l
#client,1l:hey #client,3l, it's #client,1l
#client,2l:hey #client,1l, it's #client,2l
#client,3l: got a message from #client,2l
#client,2l: got a message from #client,0l
#client,3l:hey #client,1l, it's #client,3l
#client,4l: got a message from #client,1l
#client,0l: got a message from #client,2l
#client,4l:hey #client,1l, it's #client,4l
#client,0l:hey #client,3l, it's #client,0l
#client,1l:hey #client,4l, it's #client,1l
#client,1l: got a message from #client,2l
#client,2l:hey #client,4l, it's #client,2l
#client,3l: got a message from #client,1l
#client,2l: got a message from #client,3l
#client,3l:hey #client,1l, it's #client,3l
#client,0l: got a message from #client,4l
#client,4l:hey #client,1l, it's #client,4l
#client,4l: got a message from #client,1l
#client,0l:hey #client,2l, it's #client,0l
#client,1l:hey #client,0l, it's #client,1l
#client,1l: got a message from #client,3l
#client,2l:hey #client,1l, it's #client,2l
#client,3l: got a message from #client,0l
#client,2l: got a message from #client,0l
#client,3l:hey #client,0l, it's #client,3l
#client,4l:hey #client,3l, it's #client,4l
#client,4l: got a message from #client,2l
#client,0l: got a message from #client,1l
#client,0l:hey #client,3l, it's #client,0l
#client,1l:hey #client,2l, it's #client,1l
#client,1l: got a message from #client,4l
#client,2l:hey #client,0l, it's #client,2l
#client,2l: got a message from #client,0l
#client,3l:hey #client,4l, it's #client,3l

Ok, you get the idea. The server allows clients to send messages directly to each other by name.

Discussion

In this post you got to see how bots can make productive use of multiple fibers and how the blackboard can be used in conjunction with O2's messaging facilities to create distributed data servers and clients. One aspect of this design that merits discussion is the way the client inbox continually requests messages from the server. In the past when I have developed messaging systems this way I have been accused of polling, a strategy which is considered undesirable in these types of systems. It is undesirable because polling is busy: clients keep checking for results and consuming resources repeatedly. Polling is also slow, because a message may already be waiting but you won't see it until the next polling cycle. In this application I'm using more of a "long-polling" design, where a client makes a request once and that request can remain outstanding for any length of time. I dislike the term "long-polling" because it contains the word "polling", which to me means busy, and this is not a busy-waiting system.
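
To make the distinction concrete, here is a small Python sketch of the two styles. The check_for_message and wait_for_message parameters are hypothetical helpers: the first returns immediately (possibly with nothing), the second blocks until there is something to return.

import time

def handle(message):
    print(message)                        # placeholder for real message handling

def busy_polling(check_for_message, me):
    # Classic polling: keep asking, burning round trips, and waiting up to a
    # full poll interval after a message has actually arrived.
    while True:
        message = check_for_message(me)   # returns None when nothing is queued
        if message is not None:
            handle(message)
        time.sleep(1.0)                   # the poll interval is the worst-case delay

def long_polling(wait_for_message, me):
    # What the inbox does: one outstanding request per message, answered by the
    # server the moment a chat for this user shows up. No busy loop, no added delay.
    while True:
        handle(wait_for_message(me))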

The alternative to this approach is one where the inbox would send a single request and that request could get multiple responses back. The server implements subscriptions that persist across multiple responses. I have implemented this a number of times and I do not like it at all. It is deceptively hard to ensure that your server's subscription state remains coherent in the face of dropped connections, firewalls that time connections out, and hung clients. A common remedy for these problems is to have the client send regular "keep-alive" messages which indicate to the server that the client still cares about the subscription. But a solution where the client sends a new request for each update is just as easy to implement and just as robust as a polling solution, and just as real-time as a stateful subscription. Nor does it have the busy-waiting problems of a classic polling architecture. The only case where this does not work is for extremely low-latency systems, where you cannot afford to wait for the next request to travel from client to server before the next message can be delivered. But in those situations it is normally better to move your client closer to the source of the data anyway.

So that about covers the chat server. In my next post we will increase the complexity one more notch and do a pubsub server!
