Keechma author Mihael Konjević

Breaking changes in Keechma from v0.3.0

Since version 0.3.0, Keechma introduces some breaking changes to its core API. Previously we used protocols and records to implement controllers, but this stopped working in newer ClojureScript versions. The previous implementation worked "accidentally", so we had to find an alternative.

The change

Controllers in Keechma have default behavior implemented, and previously we used extend-type to implement this behavior. We also used partially implemented protocols to override the default behavior:

(defrecord Controller []
  controller/IController ;; the partially implemented protocol (name illustrative)
  (params [this route-params]
    (get-in route-params [:data :page])))

It turns out that partially implemented protocols are not supported by ClojureScript - you have to implement all of the listed functions. Since version 0.3.0, controller behavior is implemented with multimethods:

(defrecord Controller [])

(defmethod controller/params Controller [controller route-params]
  (get-in route-params [:data :page]))
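
The same mechanical change applies to the other controller functions. As a sketch (the exact multimethod names and signatures come from the keechma.controller namespace, so check the API docs for your version), overriding the start behavior looks like this:

```clojure
(defrecord Controller [])

;; start receives the value returned by controller/params and the current
;; app-db, and must return the (possibly updated) app-db
(defmethod controller/start Controller [controller params app-db]
  (assoc-in app-db [:kv :current-page] params))
```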

Function signatures and behavior are the same, so updating your code requires just a mechanical change. Currently some of our demo apps still use the old API, but these will be fixed in the coming days.


Route driven apps rock!

Last weekend I gave a talk about Keechma at the great ClojuTRE conference. The title of the talk was "Developing (with) Keechma" - which was intentionally left open-ended because I wanted to be able to take the content in any direction. While I was preparing the talk, it became clear that there is one part of Keechma that is different from the other frameworks - you could say it's Keechma's secret sauce - the combination of the router and the controller manager.

Here's the video, watch it if you haven't seen it yet:

Going deeper

The ClojuTRE talk format is 20 minutes, which is a very short time to cover the subtler aspects of the router and the controller manager. In this blog post I want to expand on that theme. If you watched the video, you've seen this slide:

Keechma Architecture

This slide lays out the Keechma architecture. The flow is simple:

  1. The router converts the URL to data
  2. The controller manager receives the route data and orchestrates the controllers
  3. Controllers react to the route data change and mutate the AppDB
  4. AppDB changes are reflected in the UI

(there is another part to the story - the yellow arrow going up - controllers can also receive commands from the UI, but that is out of the scope of this article)

Here is another way to render this architecture:

Keechma Architecture - inside out

Although it's similar to the previous illustration, this one shows another property of the Keechma parts - each one is a superset of the previous one.

This point is important. If we ignore the async nature of frontend apps, we could encode this image in the following way (function names are illustrative):

(-> url
    route->data   ;; router: URL -> data
    data->app-db  ;; controller manager + controllers: data -> AppDB
    app-db->ui)   ;; UI layer: AppDB -> UI

If you look closely at this code, you can see that each of the layers has only one responsibility - do some task based on what was returned from the previous step.

If the only thing we care about is the what, then what is missing? We're missing the why, the how and the when.


Why

We can't really remove the why from the equation, but Keechma allows you to isolate that part in the topmost layer - the router.

Let's try to apply the 5 whys to Keechma apps:

  • Why (are we rendering this UI)? - because the data is present in the AppDB
  • Why (is this data present in the AppDB)? - because controllers placed it there
  • Why (did the controllers place this data in the AppDB)? - because they were started
  • Why (were these controllers started)? - because their params functions returned a non-nil value
  • Why (did the controllers' params functions return a non-nil value)? - because the route contained the data these controllers were interested in

There you have it, five steps to enlightenment :). Jokes aside, this is Keechma's secret sauce. You can follow the flow from the outside in, and from the inside out. There is only one why in Keechma apps.


How

Like the why, the how can't really be removed from the equation, but - you guessed it - we can isolate it. What do I mean?

  • The router cares only about one how - how to convert the URL to data
  • Controllers care only about one how - how to load the application data into the AppDB, based on the route data
  • The UI cares only about one how - how to render the data in the AppDB

As you can see, each of the layers has very isolated responsibilities - controllers don't care about the route patterns, and the UI doesn't care how the data got into the AppDB, it only cares about what is in there.

As a bonus, let's look at how the UI layer generates URLs. When you want to generate an application URL from the UI layer, you'll use the keechma.ui-component/url function:

[:a {:href (keechma.ui-component/url ctx {:param "value"})} "This is my link"]

This function only cares about the what - what is the data that you want to represent in the URL - what the URL will look like is deferred to the router.


When

Like in the previous points, the when is not really removed, but it's seriously simplified. The when is handled by Keechma itself.

This point is the subtlest one on the list. We'll need an example to proceed, so let's imagine a UI that everyone has used at least once (here I'm counting on the possibility that you've used Outlook at some point in your life).

We have a master-detail view. There is an email list, and when you click on an email, the detail view (thread) is rendered on the side.

Controller Manager

Let's define the routes:

  • /emails - renders the list of emails
  • /emails/:id - renders the list of emails and in the detail view the email thread (based on the :id param)

We will also enable the users to use pagination, which means that all of these URLs are valid too:

  • /emails?limit=10
  • /emails?offset=20
  • /emails?limit=10&offset=20
  • /emails/this-is-the-email-id?limit=10
  • /emails/this-is-the-email-id?offset=20
  • /emails/this-is-the-email-id?limit=10&offset=20

So far, so good - we know when we need to load what. Let's also pretend that we're using a server-side-like router to render the screens:

(defurl "/emails"

(defurl "/emails/:id" [id]
    (load-and-render-email-by-id id))

(this is pseudo code)

If we're living on the server side, this makes sense because we always need to load all of the data used in the rendering. We match the route pattern, load the data, and render something based on it. But on the frontend, we don't want to reload everything if we have the data already loaded in the memory. This complicates the behavior, let's go through some of the possible situations:

  1. When the user lands on /emails - we want to load the list of emails, using the default limit
  2. When the user lands on /emails/:id - we want to load the list of emails (using the default limit), and load the email by its id
  3. When the user lands on /emails/:id?offset=10 - we want to load the list of emails, skipping the first 10 (and using the default limit), and load the email by its id

I could list more cases, but I guess you get the picture. But let's complicate it some more. On the frontend, we need to handle these distinct ways in which a user loads the URL:

  1. A full refresh - user has refreshed the page and we need to load all of the data - for /emails we load the emails list and for /emails/this-is-an-email-id we load the emails list and the email by id
  2. Incremental route change - user was on /emails and transitioned to /emails/this-is-an-email-id
  3. Incremental route change - user was on /emails/this-is-an-email-id and transitioned to /emails/this-is-an-email-id?offset=20

The first case is clear, we know what to load. The other two are tricky - this is where the when part comes in. Let's map out the best behavior for these:

  1. Incremental change /emails -> /emails/this-is-an-email-id
    • We already have the emails list loaded on the frontend
    • We check if the email with the id this-is-an-email-id exists in the list of the loaded emails
      • If it exists we render it immediately
      • If it doesn't, we load it from the server and render
  2. Incremental change /emails/this-is-an-email-id -> /emails/this-is-an-email-id?offset=20
    • We want to keep the email with the id this-is-an-email-id in memory
    • We want to load a new list of emails (using the offset param)

There are a lot of implicit whens here because we rely on the result of the previous route (when we go from state A to state B).

Keechma solves all of these cases for you. Let's take a look at another slide from the talk:

Controller Manager decision table

Using the controller manager's decision table, we can easily encode this behavior with two controllers:

(defrecord EmailList [])

(defmethod controller/params EmailList [_ route-params]
  (when (= "emails" (get-in route-params [:data :page]))
    {:offset (or (get-in route-params [:data :offset]) 0)
     :limit (or (get-in route-params [:data :limit]) 10)})) ;; 10 is the assumed default limit

(defrecord EmailById [])

(defmethod controller/params EmailById [_ route-params]
  (when (= "emails" (get-in route-params [:data :page]))
    (get-in route-params [:data :id])))

These controllers are completely independent, and will be started by the controller manager when their controller/params function returns a non-nil value.
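
To make the decision table concrete, here's a sketch of the values the controller manager would see for a couple of routes (assuming route-params shaped like {:data {:page "emails" ...}}, as in the examples above):

```clojure
;; On /emails?offset=20 - EmailList's params return a map, so the
;; controller manager starts it (and restarts it whenever the returned
;; params change):
(controller/params (->EmailList) {:data {:page "emails" :offset "20"}})
;; => a non-nil map of params -> EmailList is started

;; On a route that isn't the emails page, both params functions return
;; nil, so these controllers are stopped (or never started):
(controller/params (->EmailById) {:data {:page "settings"}})
;; => nil -> EmailById is stopped
```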

I've promised that we will be removing the when from the equation, so here it is:

  • AppDB doesn't care when the controller that loads the email list was started - and when it will load the data
  • AppDB doesn't care when the controller that loads the email by id was started - and when it will load the data

Controllers only care about the what - what are the route params that I care about; the when part is taken care of by Keechma.

This behavior is formalized in the Dataloader library, which is an optional library for Keechma. You get all the right behavior - route-driven data loading - out of the box, without the boilerplate code. Check out the blog post about the dataloader here.


Keechma is a framework that allows you to care about the what. On each of its layers, it isolates you from the previous layer and gives you a chance to build a deterministic and predictable app.

Even if you don't use Keechma, route-driven apps rock!

Keechma Workshop

There will be a Keechma Workshop in Zagreb on Oct 5th, where we'll cover the whole Keechma architecture. This is a great opportunity to get started with Keechma. The workshop is held as a part of the WebCamp conference which has a great talk lineup this year. See you there!


Hello world app in Keechma (with routing)

In this blog post we'll implement a simple - "hello world" level - app with Keechma that will introduce you to the Keechma router. Instead of printing the static "Hello World!" string, we'll write an app that can greet you by name.

App functionality can be defined like this:

  1. User can enter their name in the input box
  2. As the user enters their name, it's stored in the URL (like ?name=user-name)
  3. Application reads the value from the URL and displays the message ("Hello user-name")

Routes in Keechma are used as the minimal representation of the application state, and route data in Keechma is reactive - whenever the URL changes the router converts the URL into a Clojure map and stores it in the app DB.

Since we haven't defined any route patterns, the router will serialize the route params into the URL query params - that's why the URL will look like ?name=user-name.


Here's the complete component code:

(defn hello-world-routing-render [ctx]
  (let [current-name (or (get-in @(ui/current-route ctx) [:data :name]) "")]
    [:div
     [:label {:for "name"} "Enter your name"]
     [:input
      {:id "name"
       :on-change (fn [e] (ui/redirect ctx {:name (.-value (.-target e))}))
       :value current-name}]
     (when (seq current-name)
       [:h1 (str "Hello " current-name)])]))

The first thing that you may notice is that the renderer function accepts an argument called ctx. This argument is passed to each component (unless it's a pure component, but we'll get to that later) and its purpose is to connect the component with the rest of the app.

Here we can see one of the core Keechma principles in action - no globals. Instead of depending on a (shared) global variable to communicate with the rest of the app, Keechma provides each component with its own view of the world. When the application is started, each component gets the ctx argument partially applied. Whenever your component does something that affects the rest of the app, it does so with the help of the ctx argument.

There are many different things that a component can do with its context, but for now we'll focus on two functions: ui/current-route and ui/redirect.

Reading the current route data

On the 2nd line you can see code that looks like this: (get-in @(ui/current-route ctx) [:data :name]). Let's decompose it and go through it part by part:

@(ui/current-route ctx) does a few things:

  1. It gets the current route subscription from the component context
  2. It dereferences the current route subscription and reads its value

Subscription performance and caching

If you are an experienced Reagent user, you might notice that we're using a "Form-1" component here, which should create a new subscription on each component re-render. Fortunately, Keechma caches its subscriptions so this doesn't happen - every time the component is re-rendered it gets the same current-route subscription.

After we have read the current route value, we extract the name from its :data map. Whenever you read the route you will get a map that looks like this:

{:route "pattern-that-was-used-to-match-the-url"
 :data {:key "value"}}

Most of the time, you'll only want to read the values in the :data attribute.

If our URL looked like this: ?name=Mihael, the route map returned by the ui/current-route function would look like this:

{:data {:name "Mihael"}}

In this case the :route attribute is missing, since we haven't defined any route patterns (yet!).

Storing the name in the route

The next part that interests us is the input field. Here's where the action happens:

 {:id "name"
  :on-change (fn [e] (ui/redirect ctx {:name (.-value (.-target e))}))
  :value current-name}]]

This input field is a standard controlled React component. This means that both the value and the on-change props are defined and they control the current value of the input field.

The input's :value is set to the current route's :name, which will be an empty string when we start the app (remember the 2nd line of our component). Whenever the input value changes, the on-change handler is called. Let's figure out what's going on in there:

(fn [e] (ui/redirect ctx {:name (.-value (.-target e))}))

  1. First we read the value of the event target (the input field in this case)
  2. Then we call the ui/redirect function and pass it the params that we want to represent in the route

This means that on each input change the URL will change to reflect it. The URL will be converted to a Clojure map and stored in the app DB. Since we're dereferencing the ui/current-route subscription in the component, Reagent will re-render the component with the new route value.

Route Circle

Adding pretty routes

Right now the app is using query params to serialize the route params, but let's say that we want the route to look like this: name/Mihael. We want to have the same functionality, but we want to change the route so the URLs look nicer.

It turns out it's trivial to do so. Remember, in Keechma route params are just data, and how they are serialized into the URL is of no concern to the rest of the app.

(def app-definition
  {:components   {:main hello-world-routing-component}
   :html-element (.getElementById js/document "app")})

(def pretty-route-app-definition
  {:components   {:main hello-world-routing-component}
   :html-element (.getElementById js/document "app")
   :routes       ["name/:name"]})

If you compare these two app definitions, you'll notice that the only difference is the :routes attribute. Keechma uses the patterns defined in this attribute to serialize and deserialize the route params. The rest of the app stays the same!
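
A quick sketch of what this buys us (hypothetical REPL-style session; ui/url is the href-building function we used in the link example earlier):

```clojure
;; The component code doesn't change - only the serialization does.
(ui/url ctx {:name "Mihael"})
;; without :routes patterns  -> "?name=Mihael"
;; with ["name/:name"]       -> "name/Mihael"
```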

Route defaults

If we want to go a step further we can add a default value to the route param. Let's change the route definition so it looks like this:

[["" {:name "Student"}]
 ["name/:name" {:name "Student"}]]

Now our application is greeting people even if they don't enter their name. In this case we're defining multiple route patterns:

  1. ["" {:name "Student"}] - This pattern will match an empty route (for instance when you load the app for the first time), and it will set the :name param to "Student"
  2. ["name/:name" {:name "Student"}] - This pattern will match any route that starts with name/ even if the :name param is not set - in that case it will set the :name to "Student"

Keechma principles

In this post we wrote a pretty small app, but it nonetheless helped us uncover two important Keechma principles:

  1. In Keechma, routes are just data - your components never care how the URL looks, they only care about the data that the URL contains
  2. Routes are reactive - they get stored in the app DB, and you can treat them like any other part of your state


Writing a RealWorld app with Keechma

RealWorld is a Medium clone example app written in various frontend and backend technologies. Think of it as a TodoMVC on steroids. I've recently written a Keechma version of the app, and in this blog post I'll walk you through the architecture and implementation.

Every RealWorld implementation adheres to the same API contract, which means that you can mix and match frontends and backends. Since the app is considerably more complex than TodoMVC, it gives a better overview of the patterns used in various frameworks. Today, I want to focus on dataloader and pipelines, and how I used them to build the Keechma version.

This article assumes that you're familiar with dataloader and pipelines. If you're not, now is a good time to check these blog posts.

The architecture


Let's start with a review of the datasources that the app consumes:

  1. Articles - being a Medium clone, articles are front and center in the app; there are multiple params we can apply to the articles service, and each variant can be paginated
    1. General Feed - chronological list of articles in the system
    2. User Feed - articles posted by the authors followed by the currently logged-in user
    3. Articles filtered by tag
    4. User's posted articles
    5. User's favorited articles
  2. Current Article - Article detail view
  3. Current Article Comments
  4. Tags - list of popular tags
  5. Current User Profile - the currently logged-in user's profile data
  6. User Profile - (any) User detail view

This is how it looks in the app:

Datasources Architecture

There are more places where some of the datasources are used, but this is the general layout.

Although all of the implementations should work in the same way, I've used some artistic freedom to make the implementation more in line with Keechma best practices. Practically, this means that I've pushed more state into the route. If you compare the Keechma implementation to the default one, you'll notice that (unlike the default one) the Keechma version changes the route when you click on a page number or a tag. It is not necessary to use the route to trigger the dataloader, but it made more sense.

Now that we have the datasources defined, let's pair them with the routes:

  1. Articles
    1. General Feed - /, /home
    2. User Feed - /home/personal
    3. Articles filtered by tag - /home/tag/:tag
    4. User's posted articles - /profile/:username
    5. User's favorited articles - /profile/:username/favorites
  2. Current Article - /article/:slug
  3. Current Article Comments - /article/:slug
  4. Tags - /, /home, /home/personal, /home/tag/:tag
  5. Current User Profile - any page
  6. User Profile - /profile/:username, /profile/:username/favorites

My favorite part about the dataloader is how easily the UI needs are translated to code, and how obvious the result is. Each datasource checks the route and returns the loading params when the route is right (in this app, the loader function will not load the datasource if the datasource's params function returns nil). If a route needs multiple datasources loaded, they will be loaded in parallel (like the General Feed articles and Tags - both are needed on the homepage).

If you take a look at the articles datasource, you'll notice that it's actually loading five different sets of articles. But since they are all handled by a single datasource, the component that renders articles can subscribe to only one subscription - articles. This makes the UI layer super simple: the component doesn't care why and how the articles are loaded, it only cares about the rendering. Another advantage of this approach is that you have a truly unidirectional data flow - articles are "pushed" from the app-db to the articles component, instead of being "pulled" by the component. The route is the main source of truth.

Unidirectional data flow

The code

Let's start with the loader function, which takes in the requests from each datasource and makes the HTTP requests to get the data.

(def api-loader
  (map-loader
   (fn [req]
     (when-let [params (:params req)]
       (let [app-db (:app-db req)
             get-from-app-db (or (:get-from-app-db params) (fn [_] nil))]
         (or (get-from-app-db app-db)
             (api/dataloader-req params)))))))

The loader function is wrapped with the map-loader helper because the loader gets a vector of all the datasource requests it can resolve at once. Then, for each datasource request, we check if the params contain the :get-from-app-db function. The loader function has full access to the current app-db value, which we can use to check if the requested data is already in the app-db. If it's not, we make the actual HTTP request. This api-loader function is used by all the listed datasources. If the params don't exist, the loader will return nil, which will cause the dataloader to remove the previously loaded data (for that datasource) from the app-db.
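
Here's a sketch of how a datasource could opt into this cache check (hypothetical params map; get-named-item comes from the app's EntityDB helpers used later in the post, and slug stands for the route's slug param):

```clojure
;; If the currently loaded article already matches the slug in the
;; route, api-loader returns it instead of hitting the server:
{:url (str "/articles/" slug)
 :get-from-app-db (fn [app-db]
                    (let [current (get-named-item app-db :article :current)]
                      (when (= slug (:slug current))
                        current)))}
```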

The simplest datasource is tags - it's loaded only on the homepage, and it always loads the same data:

(def tags-datasource
  {:target [:edb/collection :tag/list]
   :params (fn [_ {:keys [page]} _]
             (when (= "home" page)
               {:url "/tags"}))
   :processor api/process-tags
   :loader api-loader})

The :target attribute says that the returned data should be stored as an EntityDB collection, under the entity :tag, in a collection named :list. The second argument to the :params function is the route (which is destructured here - we only need the :page attribute). It is used to check if we're on the homepage, and if we are, it returns the params which are passed to the loader function.

The datasource that hasn't been mentioned yet, because it's different from the others, is the jwt datasource. The RealWorld app allows the user to be logged in, and it requires the user's JWT token to be stored in the browser's local storage. This datasource is a bit specific because it can be placed in the app-db by non-dataloader mechanisms. For instance, when the user registers or logs in, the JWT token will be put into the local storage and the app-db by the code that handles registration or login. This is one of the advantages of dataloader - it doesn't require exclusive management of the data. You can mix and match dataloader with your own logic.

(def ignore-datasource-check :keechma.toolbox.dataloader.core/ignore)

(def jwt-datasource
  {:target [:kv :jwt]
   :loader (map-loader #(get-item local-storage "conduit-jwt-token"))
   :params (fn [prev _ _]
             (when (:data prev) ignore-datasource-check))})

Let's take a look at the :params function. The first argument to the params function is the value that is currently present in the app-db. In this case, we check if that value exists, and if it does, we return :keechma.toolbox.dataloader.core/ignore. This tells the dataloader that whatever is in the app-db is good enough and that it shouldn't do anything about this datasource - the loader function will not be called. If the previous value is missing, the params function will return nil and the :loader function will be called. The loader will then try to load the JWT from the local storage.

Now that we've covered the jwt datasource, we can move on to the most complex datasource in the system - articles. To reiterate, the articles datasource loads one of five variants (each of which can be paginated):

  1. General Feed - /, /home
  2. User Feed - /home/personal
  3. Articles filtered by tag - /home/tag/:tag
  4. User's posted articles - /profile/:username
  5. User's favorited articles - /profile/:username/favorites

One of these variants is different from the others. Can you guess which one? If your answer is "User Feed", you're right - it requires the user to be logged in, and it's loaded from a different API endpoint with the Authorization header present. This means that the articles datasource needs a way to get the JWT token from the app-db. Dataloader supports the :deps attribute for cases like this. Dataloader will automatically reload a datasource whenever the route or any of the datasource's dependencies change.

Let's take a look at the code:

(defn add-articles-tag-param [params {:keys [subpage detail]}]
  (let [tag (when (= "tag" subpage) detail)]
    (if tag
      (assoc params :tag tag)
      params)))

(defn add-articles-pagination-param [params {:keys [p]}]
  (if p
    (let [offset (* (dec (js/parseInt p 10)) settings/articles-per-page)]
      (assoc params :offset offset))
    params))

(defn add-articles-author-param [params {:keys [page subpage detail]}]
  (if (and (= "profile" page) subpage)
    (if (= "favorites" detail)
      (assoc params :favorited subpage)
      (assoc params :author subpage))
    params))

(defn auth-header
  ([jwt] (auth-header {} jwt))
  ([headers jwt]
   (if jwt
     (assoc headers :authorization (str "Token " jwt))
     headers)))

(def articles-datasource
  {:target [:edb/collection :article/list]
   :deps [:jwt]
   :params (fn [_ route {:keys [jwt]}]
             (let [page (:page route)
                   subpage (:subpage route)
                   personal-feed? (and (= "home" page) (= "personal" subpage))]
               (when (or (= "home" page)
                         (= "profile" page))
                 (-> {:url (if personal-feed? "/articles/feed" "/articles")}
                     (assoc :headers (auth-header jwt))
                     (add-articles-author-param route)
                     (add-articles-pagination-param route)
                     (add-articles-tag-param route)))))
   :processor api/process-articles
   :loader api-loader})

As you can see, :jwt is listed as a dependency, and the :params function receives a map with all of its dependencies as the third argument. The :params function first checks if we're on the home or the profile page - which is where the articles are rendered. After that, it checks if we're rendering the general or the user feed (based on the route's :subpage attribute). This determines which endpoint will be used to retrieve the articles. The rest of the code in the :params function adds the optional params based on the route - pagination, tag, favorited and author filters - and the Authorization header if the JWT is present.
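
As a worked example (a sketch; it assumes route data shaped like the maps we've seen, and settings/articles-per-page set to 10), the route /profile/jane/favorites?p=2 would flow through the helpers like this:

```clojure
;; route data: {:page "profile" :subpage "jane" :detail "favorites" :p "2"}
;; - the home/profile check passes (page is "profile")
;; - personal-feed? is false -> base url is "/articles"
;; - author param: detail is "favorites" -> {:favorited "jane"}
;; - pagination: p is "2" -> offset is (* (dec 2) 10) = 10
;; resulting params (with a logged-in user):
{:url "/articles"
 :headers {:authorization "Token <jwt>"}
 :favorited "jane"
 :offset 10}
```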

This is all that's needed to implement a pretty complex datasource - all the logic is in one place, and you can easily determine what will be loaded based on the route and the presence of the JWT token.

There are a few important points here that I want to make:

  1. Most applications are read-heavy (instead of write-heavy), and it's important to be able to reason about the data that is loaded for each screen. Dataloader gives you this ability by grouping all of the loading logic in one place.
  2. Dataloader allows you to think about the business concepts at the UI level - instead of the concrete implementations. The UI component that renders articles doesn't care about how and when they are loaded (and with which params) - it only cares about the rendering
  3. Dataloader is not locking you into one approach - if you need more flexibility, you can always combine it with your own code and logic.
  4. Dataloader is not coupled with the storage mechanism (like Relay and GraphQL are) - you can load data from anywhere - it took under 20 lines of code to integrate with the existing (RealWorld) API
  5. Dataloader introduces a level of indirection between the what and the how - the :params function is a synchronous, pure function, which makes it easily testable

User actions

With the dataloader in place, we can move on to the user actions. In the RealWorld app, each user can do the following:

  1. Login
  2. Logout
  3. Register
  4. Create an article
  5. Edit an article
  6. Delete an article
  7. Favorite/unfavorite an article
  8. Follow/unfollow a user

Login, logout, register, creating an article and editing an article are implemented with the new forms library in the Keechma toolbox. I will write about the need for a new form library - different from Keechma Forms - in the next blog post. For now, I'll just say that the new library has better integration with Keechma, while the original version is a better fit for non-Keechma apps based on Reagent. Their philosophy is the same, and the new library uses some of the features implemented by the Keechma Forms library.

In this post, I'll focus on the favorite/unfavorite article feature (follow/unfollow user is almost the same in its implementation). Let's write down how the feature should work:

  1. If the user is not logged in - the button should be shown, but instead of changing the favorited status of an article, it should take the user to the registration page
  2. If the user is logged in - the button should change the favorited status of an article.
  3. The button should work both on each article in the list and on the article detail view - when only one article is shown on the page.

This is an interesting problem because it requires a combination of global and local state. The component gets the current user from the app-db (by declaring a subscription dependency) and the article through its arguments.

(ns realworld.ui.components.favorite-button
  (:require [keechma.ui-component :as ui]
            [keechma.toolbox.ui :refer [sub> <cmd]]
            [keechma.toolbox.util :refer [class-names]]))

(defn render
  ([ctx article] (render ctx article :small))
  ([ctx article size]
   (let [favorited? (:favorited article)
         fav-count (:favoritesCount article)
         current-user (sub> ctx :current-user)
         action (if current-user
                  #(<cmd ctx :toggle-favorite article)
                  #(ui/redirect ctx {:page "register"}))]
     [:button
      {:on-click action
       :class (class-names {:btn-outline-primary (not favorited?)
                            :btn-primary favorited?
                            :pull-xs-right (= :small size)})}
      [:i.ion-heart] " "
      (if (= :small size)
        fav-count
        (str (if favorited? "Unfavorite" "Favorite") " Post (" fav-count ")"))])))

(def component
  (ui/constructor {:renderer render
                   :subscription-deps [:current-user]
                   :topic :user-actions}))

The component checks if the current user exists, and based on that determines how to handle the click. If the user is present, it will send the :toggle-favorite command to the :user-actions controller; if not, it will redirect the user to the registration page. Notice how this component doesn't care if the user has already favorited the article - that logic is in the controller.

(ns realworld.controllers.user-actions
  (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]
            [realworld.edb :refer [insert-item get-named-item remove-item]]
            [promesa.core :as p]
            [realworld.api :as api]))

;; some code is omitted in this example

(defn toggle-favorite [article app-db]
  (let [jwt (get-in app-db [:kv :jwt])
        slug (:slug article)]
    (when jwt
      (if (:favorited article)
        (api/favorite-delete jwt slug)
        (api/favorite-create jwt slug)))))

(def controller
  (pp-controller/constructor
   ;; params function - this controller is always running
   (fn [_] true)
   {:toggle-favorite (pipeline! [value app-db]
                       (toggle-favorite value app-db)
                       (pp/commit! (insert-item app-db :article value)))}))

The controller checks if the article was favorited by the user, and based on that creates or deletes the favorite. The :favorited status is present in the article data, which means that you'll get a different result if you load the article with or without the authorization header. Dataloader takes care of that because the article datasource depends on the :jwt datasource, so you'll always get the right data.

When the toggle favorite promise is resolved, the article is placed back in the app-db. This app is using EntityDB to store its data, which means that when we insert the item into the app-db, the changes will automatically propagate to all places where the article is rendered.

Redirecting from unavailable pages

There are some pages in the app which are available or unavailable based on the presence of the current user. For instance, if the user is logged in, they shouldn't be able to go to the registration page. If the user is not logged in, they shouldn't be able to access the settings or the editor. This kind of feature is tricky to implement because user loading is asynchronous, and you want to avoid loading the user twice just because you need it in two places. Also, this shouldn't be a responsibility of the component, because it makes your component side-effectful.

The current user is loaded by the dataloader, so in an ideal world we should be able to wait until the dataloader is done before making a decision. You probably guessed it - the dataloader provides you with the ability to do exactly that. Let's take a look at the controller code:

(ns realworld.controllers.redirect
  (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]
            [keechma.toolbox.dataloader.controller :as dataloader-controller]
            [realworld.edb :refer [get-named-item]]))

(defn get-redirect [route app-db]
  (let [page                   (:page route)
        subpage                (:subpage route)
        current-user           (get-named-item app-db :user :current)
        current-article        (get-named-item app-db :article :current)
        current-article-author (if current-article ((:author current-article)) nil)
        personal-page          {:page "home" :subpage "personal"}
        home-page              {:page "home"}]
    (cond
      (and (= "login" page) current-user)                                        personal-page
      (and (= "register" page) current-user)                                     personal-page
      (and (= "home" page) (= "personal" subpage) (not current-user))            home-page
      (and (= "editor" page) (not current-user))                                 home-page
      (and (= "settings" page) (not current-user))                               home-page
      (and (= "article" page) (not current-article))                             home-page
      (and (= "editor" page) (not current-article))                              home-page
      (and (= "editor" page) subpage (not= current-user current-article-author)) home-page
      :else                                                                      nil)))

(defn redirect! [route app-db]
  (let [redirect-to (get-redirect route app-db)]
    (when redirect-to
      (pp/redirect! redirect-to))))

(def controller
  (pp-controller/constructor
   ;; return the whole route map so the controller restarts on each route change
   (fn [route] route)
   {:start (pipeline! [value app-db]
             (dataloader-controller/wait-dataloader-pipeline!)
             (redirect! value app-db))}))

This controller returns the whole route map from its params function. This means that this controller will be restarted on each route change. In the :start pipeline we can see the (dataloader-controller/wait-dataloader-pipeline!) function call. This function returns a promise which will be resolved when the dataloader is finished. This greatly simplifies the logic. In the get-redirect function we have access to the whole app-db, and we can make the right decision. Again, it is great to have this kind of logic in one place - you always know what will happen based on the route and the loaded data.


The RealWorld app is a great example of how Keechma works. It's implemented in a "modern" way - with the dataloader and pipelines. Dataloader completely changed the way in which I architect and reason about the apps that I'm building. Using the route as the main source of truth makes your app more deterministic, and simpler.

Keechma author Mihael Konjević on

Why Keechma Controllers

Every frontend framework tries to answer the same question - what is the best way to manage the application state? There are many approaches - MVC, MVVM, Flux, Redux... - and Keechma also has its own.

Each of these approaches answers the following questions:

  • How to communicate with the rest of the world (calling an API)?
  • How to respond to user interactions (mouse clicks, key presses...)?
  • How to mutate the application state?

Keechma apps implement this kind of code in controllers. Controllers are a place for all the dirty, impure parts of your app code, and they act as a bridge between your (pure) domain code and code that has side effects (storing a user on the server).

How are controllers different?

While philosophically close to Redux actions and reducers, Keechma controllers differ significantly in the implementation.

  • Keechma controllers have an enforced lifecycle
  • Keechma controllers are route driven
  • Keechma controllers can implement a long-running process that can react to commands

Drivers of change

Changes in the application state happen for a few different reasons:

  1. Page reload
  2. Route change
  3. User action

Keechma treats page reload and route change as tectonic changes - a lot (or all) of the data in the application state will probably change when one of these happen. User actions are more of an incremental change, they will probably affect a small amount of data (for instance, the user might favorite a post - this is a small, incremental change to the application state).

Keechma controllers have their lifecycles controlled and enforced by the URL. Each controller implements a params function which tells the controller manager if that controller should be running or not. The Controller Manager (an internal part of Keechma) has a set of rules which determine what should happen when the route changes. Whenever the URL changes, the Controller Manager will do the following:

  1. It will call the params function of all registered controllers
  2. It will compare the returned value to the previous value (returned on the previous URL change)
  3. Based on the returned value it will do one of the following:
    1. If the previous value was nil and the current value is nil, it won't do a thing
    2. If the previous value was nil and the current value is not nil, it will start the controller
    3. If the previous value was not nil and the current value is nil, it will stop the controller
    4. If the previous value was not nil and the current value is not nil, and those values are the same, it won't do a thing
    5. If the previous value was not nil and the current value is not nil, and those values are different, it will restart the controller

Controller manager ensures that the same controllers will always run for the same URL - it doesn't matter if it's a route change or a full page reload. This makes reasoning about the application state easier, you can treat every route change as if it was a full page reload. This was inspired by the React's way of reasoning where you don't care how the DOM is changed, you can mentally treat it as a full re-render.
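The start/stop/restart decision can be condensed into a small decision function. Here is an illustrative sketch - the function name and return keywords are made up for this post, this is not Keechma's actual implementation:

```clojure
;; Hypothetical sketch of the Controller Manager's decision rules.
;; prev-params / current-params are the values returned by a controller's
;; params function on the previous and the current route.
(defn controller-action [prev-params current-params]
  (cond
    (and (nil? prev-params) (nil? current-params)) :do-nothing
    (nil? prev-params)                             :start
    (nil? current-params)                          :stop
    (= prev-params current-params)                 :do-nothing
    :else                                          :restart))
```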

Minimal layer of abstraction

Redux and similar frameworks model state changes as a combination of actions and reducers.

The only way to change the state tree is to emit an action, an object describing what happened. To specify how the actions transform the state tree, you write pure reducers.

Redux documentation

While I like the simplicity of this approach, I feel that it's pushing the state management complexity to the application layer. If you model every state change as a (synchronous) action, every interaction that talks to the outer world will require multiple actions. This makes the flow hard to follow.

Instead of abstracting that kind of code, Keechma gives you complete control of how and when you change the application state. Controllers get full (read / write) access to the application state, and you can use any approach that fits your application.

Here's an example of a non-standard controller:

(defrecord Controller []
  (params [_ route]
    ;; This controller is active only on the order-history page
    (when (= (get-in route [:data :page]) "order-history")
      true))
  (start [this params app-db]
    ;; When the controller is started, load the order history
    (controller/execute this :load-order-history)
    app-db)
  (handler [this app-db-atom in-chan out-chan]
    ;; When the controller is started connect to the websocket.
    ;; This way we can receive the messages when something changes
    ;; and update the application state accordingly.
    ;; connect-socketio function returns the function that can be
    ;; used to disconnect from the websocket.
    (let [disconnect (connect-socketio in-chan)]
      (go (loop []
            (let [[command args] (<! in-chan)]
              (case command
                ;; When the controller is started, load the order-history
                :load-order-history (load-order-history app-db-atom)
                ;; When we get the order-created command from the websocket,
                ;; create a new order
                :order-created (order-created app-db-atom args)
                ;; When we get the order-updated command from the websocket,
                ;; update the order in the entity-db
                :order-updated (order-updated app-db-atom args)
                ;; When we get the order-removed command from the websocket,
                ;; remove the item from the entity-db. This will automatically
                ;; remove it from any list that references it
                :order-removed (order-removed app-db-atom args)
                ;; Disconnect from the websocket
                :disconnect (disconnect)
                nil)
              (when command (recur)))))))
  (stop [this params app-db]
    ;; When the controller is stopped, send the command to disconnect from
    ;; the websocket and remove any data this controller has loaded.
    (controller/execute this :disconnect)
    (edb/remove-collection app-db :orders :history)))

Source Code

This controller contains logic for a data source that receives updates over a websocket. Here's what's going on:

  1. Controller will be started when the route's page attribute is order-history
  2. On controller start, it will load the order history from the server
  3. On controller start, it will connect to a websocket and listen to events (connect-socketio function returns a function that disconnects a websocket connection).
  4. On controller stop, it will disconnect itself from the websocket and remove any loaded data from the application state

The important thing is that all of this functionality lives in the same place, and you can easily figure out how it works. There is no need to jump around and play event ping pong.

Abstractions on top of controllers

A low level of abstraction is great because it doesn't force you to fit your problem into the approach implemented by the framework. The downside is that you end up writing a lot of boilerplate code to solve the simple stuff.

This is why the pipelines were introduced. You can read the full blog post about them here, but in a nutshell - they exist to make the simple problems easy to solve.

Pipelines embrace the asynchronous nature of frontend development while allowing you to keep the related code grouped together. Let's take a look at a familiar example: you want to load some data from the server, and you want to let the user know the status of the request. You also want to handle any errors that might happen:

(pipeline! [value app-db]
    (pp/commit! (assoc app-db :articles-status :loading))
    ;; load-articles is assumed to make the API request and return a promise
    (load-articles)
    (pp/commit! (-> app-db
                    (assoc :articles-status :loaded)
                    (assoc :articles value)))
    (rescue! [error]
        (pp/commit! (assoc app-db :articles-status :error))))

This approach was inspired by the Railway Oriented Programming talk, and the nice thing about it is that a system like this was possible to implement because controllers give you full access to the application state. Pipelines are not a core Keechma feature - they are implemented in a separate library.


Controllers give you full control over your application. They don't presume that you can fit your problem into any pattern or way of thinking. Their abstraction level is intentionally low, and you have complete access to the application state. This makes it possible to solve non-standard, specific problems with them. When you need an easy way to handle standard problems (like data loading, or user interaction) - use pipelines.

Keechma author Mihael Konjević on

Introducing Keechma Toolbox (Part 2) - Dataloader

Loading data from the back-end is a problem that every front-end app has to solve. Complex apps have to load data from multiple (sometimes even more than the magical number seven) endpoints for some screens.

In such situations, several problematic questions arise:

  1. How to define dependencies between data sources?
  2. When to invalidate loaded data?
  3. How to keep the data loading performant?

Why are these questions important at all? Why not do it all manually?

The right way to load the right data at the right time (with very little ceremony)

The foundational idea of Keechma is that state should be route driven. The route can be thought of as the minimal representation of state. This makes the process of loading components, controllers and data predictable and deterministic.

1) Data source definitions

Most Keechma controllers (in our experience) end up having a lot of boilerplate code for loading data. It would be better if there was a specialized construct for defining a way to load data. That way you could define all your data requirements upfront and keep your controllers nice and clean.

2) Dependency management

A screen can depend on multiple sources of data to render itself. Sometimes you need to wait for a piece of data before you can load the next one. The asynchronous nature of these requests lends itself to explicit dependency management. Requests can be managed manually, but why not introduce some structure into this process?

3) Data invalidation

Sometimes, an event that doesn't change the route can create "tectonic" changes in the app. For instance, the user logs in. Now you have to fetch all user related data anew and replace old data with current user's data. You want to load the relevant data along with all dependencies and you also want the old data to die.

4) Request boxing

The fact that a screen depends on multiple data sources (ideally) shouldn't require sending multiple requests. Before sending out the requests, we could box them into one request and unpack the response. Wanting to make as few requests as possible is a no-brainer proposition.
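To make the boxing idea concrete, here is a hypothetical sketch; `batch!` is an invented stand-in for a backend endpoint that accepts a vector of request params and returns one promise resolving to a vector of results in the same order:

```clojure
(ns example.boxing
  (:require [promesa.core :as p]))

;; Invented stand-in for a batching backend endpoint - it answers all
;; request params with one promise resolving to a vector of results.
(defn batch! [params-vec]
  (p/resolved (mapv (fn [params] {:echo params}) params-vec)))

;; Box all datasource requests into one call, then hand each datasource
;; a promise for its own slice of the combined response.
(defn boxed-loader [reqs]
  (let [result-promise (batch! (mapv :params reqs))]
    (map-indexed (fn [idx _req]
                   (p/map #(nth % idx) result-promise))
                 reqs)))
```

This is the same shape the GraphQL loader shown later in this archive takes - the boxing happens in the loader, so individual datasources never know their requests were combined.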

Keechma Dataloader vs. other approaches to data-loading

Relay and similar frameworks approach this problem on the component level:

Relay couples React with GraphQL and develops the idea of encapsulation further. It allows components to specify what data they need and the Relay framework provides the data. This makes the data needs of inner components opaque and allows composition of those needs. Thinking about what data an app needs becomes localized to the component making it easier to reason about what fields are needed or no longer needed.

There are other approaches which don't couple to a back-end technology (GraphQL), but follow a similar philosophy where components load the data they need.

I decided to take a different approach with Dataloader. Instead of components being responsible for the declaration of the data they need, this information is derived from the route.

Choosing an approach implies that there is a trade-off (as there always is):

  • In the component/query collocation case (Relay) you can easily determine the data that each component needs
  • In the route case (Keechma) you can easily determine the data that the application needs as a whole

Dataloader's goal is to give you the best of the component based thinking - declarative approach to data loading - while being able to easily reason about the whole app.

The way of Dataloader

At its heart, Dataloader is route driven - it will automatically run on each route change. You can also run it manually if there is a need to do so. It is an optional addition to Keechma, and you can combine it with an imperative approach.

Dataloader requires you to define your data-sources and when and how they should be loaded.

Here's a simple datasource example:

(def datasources
  {:restaurants {:target [:edb/collection :restaurants/list]
                 :loader (map-loader
                           (fn [req]
                             (when (:params req)
                               (GET "/restaurants"))))
                 :params (fn [prev {:keys [page]} deps]
                           (when (= "restaurants" page) true))}})

On each route change (or manual run), Dataloader will call the :params function for each datasource and check if the returned params are different from the previously returned value. If they are, Dataloader will call the :loader function which makes the actual request for the data, and store the result wherever the :target attribute points to.

From the shown example we can determine the following:

  • :params function will return true only if the route's :page param equals to "restaurants"
  • :loader checks if the params function returned a truthy value, and if it did it makes an AJAX request to the /restaurants endpoint
  • returned data will be stored as an EntityDB collection named :list, under the :restaurants entity

:loader function will be called even if the :params function returns nil - it is up to the loader to give meaning to the value returned from the :params function. If the :loader function returns nil for a datasource - the currently loaded data will be removed from the app state.

Although this is a super simple example, it still has all of the important elements of data loading - when and how the data should be loaded.

Another thing that is important to point out is that the loader function is wrapped with the map-loader helper. map-loader is a function that calls the loader function for each data source request.
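In other words, map-loader can be thought of as roughly the following wrapper - an assumed sketch, not the library's exact source:

```clojure
;; Assumed sketch of a map-loader style helper: it lifts a loader that
;; handles one request into a loader that handles a vector of requests.
(defn map-loader [loader-fn]
  (fn [reqs]
    (map loader-fn reqs)))
```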

When Dataloader loads the data it will try to load as much data as possible at once. First it will collect all the requests (which are determined from the :params function return value), and then it will call the :loader function with a vector which contains all the requests that can be made at once. This allows you to combine multiple data requests into one HTTP call if your backend supports it - as is the case with GraphQL. We'll talk more about this feature later in the article.

Data dependencies

Depending on your app and its business logic, you might have dependencies between the data sources. JWT or some other token-based auth system is one example where you have dependencies between datasources.

In the following example we can see how datasources look when a datasource needs a token to make the request:

(def access-token-loader
  (map-loader
    (fn [_]
      (get-item local-storage "whenhub-access-token"))))

(def ignore-datasource
  :keechma.toolbox.dataloader.core/ignore)

(def datasources
  {:access-token {:target [:kv :access-token]
                  :loader access-token-loader
                  :params (fn [prev _ _]
                            (when prev ignore-datasource))}

   :schedules {:target [:edb/collection :schedule/list]
               :deps [:access-token]
               :params (fn [prev route deps]
                         (when (and (= "edit" (:page route))
                                    (nil? (:id route)))
                           deps))
               :loader (map-loader
                         (fn [req]
                           (when-let [access-token (get-in req [:params :access-token])]
                             (load-schedules access-token))))}})

In the previous example, we can see that the :schedules datasource has a :deps attribute. :deps allows you to list dependencies which need to be resolved before the datasource is loaded. In the :schedules datasource's case, its :params function checks if the :page route param equals "edit" and if the :id route param is nil. In that case it just returns the dependencies object. The :loader function checks if the :access-token is present in the params map, and if it is, it makes the request to load the schedules.

There is more interesting stuff to see in this example. For instance, the :access-token datasource loads its data from the local storage - Dataloader doesn't care where you load the data from.

Another interesting thing to notice is that the :access-token datasource's :params function returns :keechma.toolbox.dataloader.core/ignore if the previous value exists. This tells Dataloader that it shouldn't do anything with that datasource - whatever is stored in the app state is good enough.

Optimizing the loader

GraphQL and Dataloader are a match made in heaven. Since GraphQL allows you to request multiple data sources in one HTTP request, you can easily write a loader function which is very efficient.

Here's an example of that behaviour:

(ns graphql-starwars.datasources
  (:require [graphql-builder.parser :refer-macros [defgraphql]]
            [graphql-builder.core :as gql-core]
            [promesa.core :as p]
            [keechma.toolbox.ajax :refer [POST]]
            [clojure.string :as str]))

(defgraphql graphql "resources/graphql/queries.graphql")

(def gql-endpoint "")

(defn gql-results-handler [unpack]
  (fn [{:keys [data errors]}]
    (if errors
      (throw (ex-info "GraphQLError" errors))
      (unpack data))))

(defn gql-req [params]
  (->> (POST gql-endpoint
             {:format :json
              :params (:graphql params)
              :response-format :json
              :keywords? true})
       (p/map (gql-results-handler (:unpack params)))))

(defn graphql-loader [reqs]
  (let [params (map (fn [req] (when (:params req) (assoc (:params req) :id (keyword (gensym "req"))))) reqs)
        clean-params (remove nil? params)]
    (if (seq clean-params)
      (let [queries (reduce (fn [acc p] (assoc acc (:id p) (:query p))) {} clean-params)
            variables (reduce (fn [acc p] (assoc acc (:id p) (:variables p))) {} clean-params)
            composed-fn (gql-core/composed-query graphql queries)
            req-promise (gql-req (composed-fn variables))]
        (map (fn [param]
               (when param
                 (p/map #(get % (:id param)) req-promise)))
             params))
      params)))

(defn result-extract [resource]
  (let [query-name (str "all" (str/capitalize resource))]
    (fn [res]
      {:meta {:count (get-in res [query-name :totalCount])}
       :data (get-in res [query-name (keyword resource)])})))

(defn make-params [resource]
  (fn [_ {:keys [columns]} _]
    (when (contains? (set columns) resource)
      {:query (str "Load" (str/capitalize resource))
       :variables {}})))

(def datasources
  {:films {:target    [:edb/collection :film/list]
           :params    (make-params "films")
           :loader    graphql-loader
           :processor (result-extract "films")}

   :species {:target    [:edb/collection :species/list]
             :params    (make-params "species")
             :loader    graphql-loader
             :processor (result-extract "species")}

   :starships {:target    [:edb/collection :starship/list]
               :params    (make-params "starships")
               :loader    graphql-loader
               :processor (result-extract "starships")}

   :people {:target    [:edb/collection :person/list]
            :params    (make-params "people")
            :loader    graphql-loader
            :processor (result-extract "people")}

   :planets {:target    [:edb/collection :planet/list]
             :params    (make-params "planets")
             :loader    graphql-loader
             :processor (result-extract "planets")}

   :vehicles {:target    [:edb/collection :vehicle/list]
              :params    (make-params "vehicles")
              :loader    graphql-loader
              :processor (result-extract "vehicles")}})

Watch the video if you’re interested in the thorough explanation of the code, but the important thing is that you get the request optimization for free - datasources don’t have to know that their data request will be combined with others.

Dataloader and Keechma architecture

Dataloader is a pretty new addition to the Keechma Toolbox library, but it had a very profound effect on the architecture of the apps that we build.

Instead of the manual management of data loading, which required a lot of boilerplate code and a lot of controllers, Dataloader allows us to extract this code and contain it in one place. It also gives us a high-level overview of the application's data needs, which becomes extremely important as the application grows in size.

Dataloader is probably the most important library I've released after Keechma itself, and in my opinion it gives you the best of both worlds - Relay and Redux like architectures.

Keechma author Mihael Konjević on

Introducing Keechma Toolbox (Part 1) - Pipelines

In this blog post I want to introduce the Keechma Toolbox Library - a set of tools I've been using while developing Keechma apps. While Keechma the framework is pretty agnostic when it comes to implementation, the Toolbox lib is heavily opinionated and contains code that I'm using every day.

There is a possibility that some of this stuff will end up in separate packages, but right now it's all together because it's easier to develop and manage.

Today I'll talk about one of the sub-libraries - controller pipelines.

The Problem

When I was developing Keechma, I was very careful to leave the controllers open ended. Controllers are the connection point between your domain code and the UI so I didn't want to build too much structure or restrictions in them, I wanted you to be able to implement any features you want.

When you implement the handler function, you get access to the app-db atom, and two channels - in-chan, which is used to receive the messages, and out-chan, which is used for communication with other controllers. This flexibility is great, but the resulting code is not something I would call pretty:

(defn update! [app-db-atom updater]
  (reset! app-db-atom (updater @app-db-atom)))

(defn load-restaurant [app-db-atom slug]
  ;; Before making the request for the restaurant save the empty item with
  ;; the meta defined - {:is-loading? true}
  (update! app-db-atom
           #(edb/insert-named-item % :restaurants :current {} {:is-loading? true}))
  ;; Load the restaurant and save it in the entity-db
  (go
    (let [req (<! (http/get (str "/restaurants/" slug)))
          meta {:is-loading? false}
          [success data] (unpack-req req)]
      (update! app-db-atom
               #(edb/insert-named-item % :restaurants :current data meta)))))

example command handler from the "Place My Order" app

There is a lot of stuff happening in this function:

  1. We mark the current restaurant as loading (by storing {:is-loading? true} as its metadata)
  2. We make an AJAX request to load the restaurant data
  3. We extract the data from the response (unpack-req function)
  4. We finally store the data in the app-db

* There is one thing missing in this function - error handling.

This function is pretty short, but it suffers from a serious problem: it complects three different types of functions into one opaque blob:

  1. Side-effect functions - functions that mutate the app-db
  2. Pure functions - like unpack-req
  3. Async functions that communicate with the API

I would also add that the code is not exactly clear in its intent. A lot of things happen at once, some of them implicitly. That makes it hard to understand.

Let's rewrite this code to use pipelines (and pretend that the http/get function is returning a promise instead of a channel):

(pipeline! [value app-db]
    (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:is-loading? true}))
    (http/get (str "/restaurants/" value))
    (unpack-req value)
    (pp/commit! (edb/insert-named-item app-db :restaurants :current (last value) {:is-loading? false}))
    (rescue! [error]
        (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:error error}))))

While this code still combines all of the steps, it looks much nicer:

  1. You can easily follow the flow between steps
  2. You don't have to care about the async functions, it's handled for you automatically
  3. Side-effect functions are clearly marked (pp/commit!)
  4. Error handling is easy to implement
  5. Each of the functions in the pipeline is doing only one thing, which makes the intent clear

Now that you've seen the final result, let me explain how pipelines work.


I implemented pipelines out of frustration. I was writing the same boilerplate code over and over again. Most of the controller actions have the same form:

  1. Mark some item or collection as loading
  2. Make a request to the API to load the data
  3. Extract the data from the response
  4. Put the data into the app-db
  5. Handle any errors that might have happened

So, just to load an item from the server we have to update the app-db two times, handle an async operation and handle any potential errors. Over and over again. This kind of code is hard to extract and generalize so I had a bunch of similar functions littered in the code base.

Pipelines allow me to write this code in a clear, declarative fashion and they are very easy to use when you understand how they work.

The implementation

  1. Pipelines are built from a list of functions
  2. Each function can be either a side-effect or a processor function
  3. value is bound to the command arguments or the return value of the previous processor function
  4. app-db value is always bound to the current state of the app-db atom
  5. Side-effects can't affect the value - their return value is ignored
  6. If a processing function returns a promise, pipeline will wait until that promise is resolved or rejected before proceeding to the next function
  7. If a processing function returns nil the value argument will be bound to the previously returned value
  8. Any exception or promise rejection will cause the pipeline to jump to the rescue! block
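The value-threading rules above (3, 5 and 7 in particular) can be illustrated with a simplified, synchronous runner - a hypothetical sketch that ignores promises and side-effect marking, not the toolbox's actual implementation:

```clojure
;; Hypothetical, synchronous sketch of how a pipeline threads `value`
;; through its processor functions.
(defn run-pipeline [fns initial-value app-db]
  (reduce (fn [value f]
            (let [ret (f value app-db)]
              ;; a nil return keeps the previous value (rule 7)
              (if (nil? ret) value ret)))
          initial-value
          fns))
```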

I'm sure you're interested in how all of this works. Let's take a look at the pipeline code again:

(pipeline! [value app-db]
    (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:is-loading? true}))
    (http/get (str "/restaurants/" value))
    (unpack-req value)
    (pp/commit! (edb/insert-named-item app-db :restaurants :current (last value) {:is-loading? false}))
    (rescue! [error]
        (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:error error}))))

pipeline! is a macro, and it transforms its body into something that looks like this:

 {:begin  [(fn [value app-db]
             (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:is-loading? true})))
           (fn [value app-db]
             (http/get (str "/restaurants/" value)))
           (fn [value app-db]
             (unpack-req value))
           (fn [value app-db]
             (pp/commit! (edb/insert-named-item app-db :restaurants :current (last value) {:is-loading? false})))]
  :rescue [(fn [value app-db error]
             (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:error error})))]}

This code will be passed to the pipeline runner which knows how to handle this structure (if you're interested in how it works - check out the code).

Pipelines use the great Promesa library to handle promises, and each pipeline returns a promise.


If you have a pipeline that needs to implement some kind of branching (maybe you want to load the item only if it's not loaded yet) you can nest the pipelines:

;; on the pipeline start `value` will hold whatever was passed to the command as the argument
(pipeline! [value app-db]
    (when (nil? (edb/get-named-item app-db :restaurants :current))
        (pipeline! [value app-db]
            (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:is-loading? true}))
            (http/get (str "/restaurants/" value))
            (unpack-req value)
            (pp/commit! (edb/insert-named-item app-db :restaurants :current (last value) {:is-loading? false}))))
    (rescue! [error]
        (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:error error}))))

Pipelines allow you to implement features that require a series of steps to run in succession, without forcing you to play event ping-pong.

Pipeline Controller

To actually run the pipelines, you must use the pipeline controller, which is also part of the Keechma Toolbox library. The full example looks like this:

(ns pipelines.example
    (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
              [keechma.toolbox.pipeline.controller :as controller]))

(def controller
    (controller/constructor
        (fn [_] true) ;; this is controller's `params` function
        {:load-restaurant ;; pipeline key is the command it responds to
            (pipeline! [value app-db]
                (when (nil? (edb/get-named-item app-db :restaurants :current))
                    (pipeline! [value app-db]
                        (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:is-loading? true}))
                        (http/get (str "/restaurants/" value))
                        (unpack-req value)
                        (pp/commit! (edb/insert-named-item app-db :restaurants :current (last value) {:is-loading? false}))))
                (rescue! [error]
                    (pp/commit! (edb/insert-named-item app-db :restaurants :current {} {:error error}))))}))


Async Notifications

While I was doing research for pipelines, I wanted to see what other approaches exist. During that research I encountered a thread on Stack Overflow that explains how to implement a notification system with Redux.

The idea is to have the notification appear and then automatically disappear after five seconds.

This is one of my favorite pipeline examples, because it's super simple but it still demonstrates the elegance of pipelines:

(ns pipelines.example
    (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
              [keechma.toolbox.pipeline.controller :as controller]
              [promesa.core :as p]))

(defn delay-pipeline [msec]
    (p/promise (fn [resolve _] (js/setTimeout resolve msec))))

(def controller
    (controller/constructor
        (fn [_] true) ;; this is controller's `params` function
        {:show-notice ;; pipeline key is the command it responds to
            (pipeline! [value app-db]
                (pp/commit! (assoc app-db :notice value)) ;; store the notice in the app-db
                (delay-pipeline 5000) ;; wait 5 seconds
                (pp/commit! (dissoc app-db :notice)))}))

That's it - clear, simple and obvious.

Live search

As I've mentioned before, pipelines return a promise. This allows them to be cancelled at any time (Promesa is using the Bluebird library which implements promise cancellation). In the next example, we'll take advantage of this property.
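For illustration, here's roughly what cancelling a pending promise looks like. This sketch assumes the Bluebird-backed Promesa of that era (1.x), where cancel! was available; it is not part of the Keechma pipeline code itself:

```clojure
;; Sketch: cancelling a pending promise before it resolves.
(ns pipelines.cancel-sketch
  (:require [promesa.core :as p]))

(def pending-search
  (p/promise (fn [resolve _]
               (js/setTimeout #(resolve :results) 300))))

;; If the user types again before the timeout fires, cancel the
;; pending promise — its `then` handlers will never run:
(p/cancel! pending-search)
```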

The task is to implement a live search: on each keypress, call the command that performs the search; the command waits 300 milliseconds and then kicks off the search request, unless it was called again in the meantime. You've probably used a feature like this many times.

The problem with this feature is that you have to make sure you don't have race conditions. If the search request has started and the user then enters another letter, you want to cancel that request and kick off a new one. Otherwise, the first request could finish after the second one, in which case you would show the wrong results to the user.

Pipelines come with the exclusive function to help with cases like this. You wrap the pipeline with it, and it ensures that only one instance of the pipeline is running at a time; if it's called again while the pipeline is running, it cancels the current pipeline, which also cancels the AJAX request. Keechma Toolbox comes with a thin wrapper around the cljs-ajax library that wraps the AJAX request functions with promises and implements request cancellation.

The final code looks like this:

(ns pipelines.example
    (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
              [keechma.toolbox.pipeline.controller :as controller]
              [keechma.toolbox.ajax :refer [GET]]
              [promesa.core :as p]))

(defn delay-pipeline [msec]
    (p/promise (fn [resolve _] (js/setTimeout resolve msec))))

(defn movie-search [value]
    (GET (str "api/url?search=" value)))

(def search-controller
    (controller/constructor
        (fn [_] true)
        {:search (pp/exclusive
                  (pipeline! [value app-db]
                    (when-not (empty? value)
                      (pipeline! [value app-db]
                        (delay-pipeline 300)
                        (movie-search value)
                        (println "SEARCH RESULTS:" value)))))}))

This is the workflow:

  1. Make sure that the value is not empty
  2. Wait 300 milliseconds
  3. Make the request
  4. Print the results

There you have it, a live search implementation in ~20 lines of code.


Pipelines are one of my favorite parts of the Keechma toolbox library. I've been using them for months and I think that 90% - 95% of my controller code is in pipelines. They make my code clearer and easier to understand, and I hope you'll find them as useful as I do. Please let me know if you have any feedback.

Keechma author Mihael Konjević on

Keechma Developer Tools Preview

Keechma is getting close to the v1 release. Most of the work is done; what's left is the documentation update and the release of the developer tools.

Here's a short screencast that shows off some of the features built into the dev tools. If you're missing some context around the example application, make sure to check the walkthrough.

Keechma author Mihael Konjević on

Announcing Keechma Forms Library

Today, I’m excited to release the Keechma Forms library. Although it’s released under the Keechma brand, you don’t have to use it with the rest of the Keechma ecosystem; it can be used with any Reagent-based application.

What is Keechma Forms and why does it exist?

Keechma Forms is a library that will help you build forms with delightful UX. It is UI agnostic (as long as you use Reagent), but it still gives you a way to display validation errors at the right time.

If you want to learn more about the validations and when they should be rendered I can recommend these links:

Although some of these articles are pretty old, I still encounter these kinds of problems on a daily basis. Implementing forms is hard.

Keechma Forms solves two of the hardest problems when dealing with forms:

  • Validation of arbitrarily nested data
  • Keeping track of dirty key paths in arbitrarily nested data

Data Validation

Most form libraries push the data validation to the component level, which is great for simple use cases, but it starts to fall apart as soon as you have anything remotely complex.

Keechma Forms takes a different approach: validation is always performed on the data. Validators take in the data and return a nested map with the errors.


;; Define a value validator - first element is validator name, second is the
;; validator function
(def not-empty [:not-empty (fn [v] (not (empty? v)))])

;; Define a form validator. Each attribute takes a vector of value validators
(def user-validator (forms.validator/validator {:username [not-empty]
                                                :password [not-empty]}))

;; Define a form validator with nested fields
(def article-validator
  (forms.validator/validator {:title [not-empty]
                              :user.username [not-empty]}))

;; Define a form validator which validates a list of objects
(def team-validator
  (forms.validator/validator {:name [not-empty]
                              :players.*.name [not-empty]
                              :players.*.number [not-empty]}))
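To illustrate, here's how the validators defined above might be called on some data. This is a usage sketch; consult the Keechma Forms API documentation for the exact shape of the returned errors map:

```clojure
;; Usage sketch — the exact shape of the errors map is documented
;; in the Keechma Forms API docs.
(user-validator {:username "" :password "secret"})
;; returns a nested map describing which validators failed
;; for the :username attribute

(team-validator {:name "Dream Team"
                 :players [{:name "" :number 1}]})
;; the `players.*.name` validator fails for the first player,
;; while :name and `players.*.number` pass
```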

Go to the API documentation to see more examples. The validator API can handle any kind of document structure you throw at it.

Dirty key paths tracking

This feature is crucial if you want to build forms with great UX. Badly implemented live validation is worse than validating the whole form on submit.

The “before and while” method not only caused longer completion times, but also produced higher error rates and worse satisfaction ratings than the other inline validation variations we tested.

from the Inline Validation in Web Forms article

Keechma Forms allows you to avoid these problems by keeping track of any changes the user has made to the form. That way you can show the right error messages at the right moment.


I've built a demo to showcase the features of Keechma Forms. Here's what it implements:

  • Form should have input fields for: username, password, name and email
  • Form should allow user to add any number of social network accounts
    • User should add at least one account
    • Each account should have a select dropdown for the network and input for the username
  • Form should allow user to add any number of phone numbers
  • If the field is in a valid state, validation should be performed on blur
  • If the field is in an invalid state, validation should be performed on change (keypress) so the error message is removed as soon as possible
  • When the user submits the form, all fields should be validated
  • If the user adds a social network account or phone number after they tried to submit the form, the new fields should be in a valid state

Check out the demo to see how it looks. I think this is the form with the best UX I've ever implemented. You may feel differently, but the important thing is that this kind of heavily customized behavior is possible with Keechma Forms. Check out the annotated source code here.


Building forms with delightful UX is possible and easy with Keechma Forms, and I hope you'll find it as useful as I do. As always, if you have any questions, you can ping me on Twitter or send me an email.

Keechma author Mihael Konjević on

Road to v1.0.0

Today I’m releasing the new Keechma site and I wanted to use this opportunity to share my plans for v1.0.0.

Keechma is now slightly older than three months, and in that time I've talked a lot with people who were trying it out, done a lot of experimentation and built a number of smaller apps with Keechma. This gave me some ideas about things that need to be improved before a solid v1 release.

One of the first things I did with Keechma was extracting EntityDB and Router from the main project. Although this allows usage of these libraries in non-Keechma projects, it created some logistical problems. The biggest one was that the build system (and design) were created with a single repo in mind. That's why the new site was the first and most important step toward v1.0.0.

The new site is built with a bunch of tools (Marginalia, Codox, Make, NodeJS), but the final build is performed by the Lektor CMS, which allows me to both build the site from the content generated by the documentation tools and add custom content (like this news) in one system. Expect more articles, news and content around Keechma in the future.

Future of Keechma

Convenience layer

When designing and building the first version, I was very careful to avoid adding stuff to Keechma just for the sake of convenience. I believe that adding a convenience layer too early makes it easy to design a bloated system. This resulted in a clean, tight core, but also in an API that is more verbose than necessary.

In the future, I want to add a convenience layer on top of Keechma that will remove a lot of boilerplate code from the typical app. This convenience layer will be completely optional and contained in a separate package. I want to keep the amount of code in the main Keechma project minimal. I will share more news about this project when I start working on it, but please let me know if you have any ideas or feedback.


Another area I want to focus on is documentation. Keechma is a new project and the community around it is starting to form. I believe that the best thing I can do to kickstart the growth of the community and Keechma adoption is to create more documentation, more tutorials and more content around Keechma. Right now, writing documentation will have a bigger impact than writing code, and that's something I'm ready to embrace.

To be able to write great docs and tutorials, I need your help. I have some ideas about the improvements that can be made, but more feedback is always welcome.

How you can help me:

  • Try Keechma - you will have questions, and by answering these questions I'll be able to see which parts need better docs
  • Ask questions - you can find me on Clojurians Slack (in #keechma channel), on Twitter or you can send me an email.
  • Show me examples of interfaces that were hard to architect - If you have an example of an interface that is hard to architect, send it to me and I'll try to replicate it with Keechma and write a blog post about it. It doesn't matter if it's implemented in a different framework, or if you found it somewhere in the wild.

I want to make Keechma really easy to use, and it all starts with good documentation. Your feedback and questions will allow me to write it.

Tangential projects

Keechma solves a small subset of the problems we encounter while building apps. There is a lot of stuff that could be solved in a nicer way, and I'll work on it as the need arises.

The first project that will be released in the near future (after I write the documentation) is Keechma Forms. It will allow you to model and validate complex forms with ease. Until the documentation is written you can check out the tests to see how the API will look.

Like EntityDB and Router, the Forms library is in no way coupled with Keechma. It can be used with any Reagent project but also with any ClojureScript project (with a little effort).


I believe that Keechma has sound fundamentals, and in the future, I want to make it the easiest to use and the best-documented framework out there (at least in the ClojureScript world). To do that, I'll need your help, so please let me know if you have any feedback. You can ping me on Twitter or send me an email.