Dap 0.1.5 Language Quick Tour

Dap syntax was developed with two main concerns:

  1. to be easily parsable
  2. to be concise, yet human-readable

Dap syntax is quite different from what web developers are used to. It may seem complicated and cumbersome while you are reading the syntax specification, but in actual coding it turns out to be quite easy and handy.

I'd recommend reading the 'Hello, world' tutorial before getting into the details of dap syntax. That will help you keep track of what it is all about.

Rules and Phases

All dap rules consist of steps, steps consist of tokens, tokens consist of parts. Very simple:

rule ::= step {; step}
step ::= token { token}

That is, steps within a rule are separated by a semicolon plus a space; the head and tokens within a step — by a single space; multiple aliases and converters — by commas. All token parts are grammatically optional. Empty strings are valid names, but in many cases they are reserved and have a special meaning.
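The grammar above can be sketched as two string splits. This is an illustrative parser only (`parseRule` and the mapper names in the sample rule are made up, not part of the dap API):

```javascript
// Illustrative sketch: split a dap rule into steps, and each step
// into a head plus data tokens, per the grammar above.
function parseRule(rule) {
  // steps are separated by a semicolon plus a space
  return rule.split('; ').map(step => {
    const tokens = step.split(' ');   // tokens within a step: single space
    return {
      head: tokens[0],                // first token is the step head
      tokens: tokens.slice(1),        // the rest are data tokens
    };
  });
}
```

For example, `parseRule('.attr href=url; .text title')` yields two steps, the first with head `.attr` and one data token `href=url` (the mapper names here are hypothetical).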

A dap-enabled node may have none, any or both of the following rules, corresponding to node life phases:

  • Data or Down phase: d-rule

    When a dap node is populated, its content is generated by its d-rule, according to the data provided to it and the current status of its scope. Execution of a node's d-rule may change its scope's status, and these changes will be observable by the node's descendants.

  • User or Up phase: u-rule

    When a dap node is activated (usually by a user-related event), its own u-rule and the u-rules of its ancestors are executed, starting from the node and bubbling up. Changes in a node's scope status are automatically reflected in all dap nodes observing that scope (dependent nodes). If a dependent node detects changes that influence its content, it is updated automatically to reflect them.

Steps and Tokens

Each rule consists of one or more steps, which are executed sequentially when the rule runs. A step consists of tokens. The first token of a step is 'the head'; the rest are data tokens (or simply 'tokens'). Data tokens define the data involved in the step execution; the step head specifies the actions to perform on that data.

Zero or more data tokens may participate in the execution of each step. A token is a structure that defines the datapiece passed to a mapper. Data tokens are not just identifiers, but execution units which provide simple data manipulations, such as data binding, assignments and conversions. A token may reference a field in a datarow received from a data source, reference a node's scope status variable, or specify a literal value. A token also specifies converters for the datapiece, and aliases under which the datapiece is fed to the mapper.

Each token may contain:

  • datafield reference
  • status variable reference
  • aliases
  • converters
  • literal value

Token parts are separated by their respective prefixes: $ (dollar sign) — for a status variable name, @ (at sign) — for an alias chain, : (colon) — for a conversion chain, = (equal sign) — for a value. The datafield name, if present, is not prefixed, but is placed as the first part of the token. All token parts are optional, but their order is mandatory. A full token looks like:

datafield$status@aliasOne,aliasTwo:convertOne,convertTwo=value

and its meaning is:

  1. read the value from the node's associated datarow's datafield, or take the literal value if the datafield is omitted or not present in the datarow,
  2. store it in the status variable,
  3. sequentially apply all the converters,
  4. tag the executed token with the aliases aliasOne,aliasTwo.

Omitting any token parts modifies token behavior in a straightforward manner:

  • if no converters are specified, the data is fed as is
  • if neither a datafield nor a literal value is specified, the status variable's own value is taken
  • if no status variable is specified, none is changed
  • if no alias is provided, the result is named after the status variable or the datafield (or gets an empty name, if neither is specified).
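Because the part order is mandatory and each part has a distinct prefix, a token can be split with a single regular expression. The sketch below is illustrative only (`parseToken` is not part of dap):

```javascript
// Illustrative tokenizer for the part order:
// datafield, $status, @aliases, :converters, =value
function parseToken(token) {
  const m = token.match(
    /^([^$@:=]*)(?:\$([^@:=]*))?(?:@([^:=]*))?(?::([^=]*))?(?:=(.*))?$/
  );
  return {
    datafield:  m[1] || null,
    status:     m[2] ?? null,                      // '' is a valid (empty) name
    aliases:    m[3] ? m[3].split(',') : [],
    converters: m[4] ? m[4].split(',') : [],
    value:      m[5] ?? null,
  };
}
```

For instance, `parseToken('field$st@a,b:up,trim=42')` yields datafield `field`, status `st`, aliases `a,b`, converters `up,trim` and value `42`, while `parseToken('=abra')` is an anonymous literal.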

Status variables as sources and as targets

As can be seen from the token structure description above, a status variable's value can be set (by specifying a literal value or a datafield) and can be read (by omitting literal value and datafield).

Setting a status variable's value has different semantics for up and down phases:

  • at the down phase, a new status variable is created in the current scope; should one with the same name already exist in an outer scope, it is hidden from the current scope (yet remains visible to other scopes)
  • at the up phase, the addressed status variable is first searched for in outer scopes and then, if not found, created anew.

Reading a variable always assumes that it has already been defined for the scope; otherwise the rule fails.
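The down/up difference can be modeled with two small functions over a chain of scopes. This is an assumed model paraphrasing the rules above (the `{ vars, outer }` scope shape and function names are made up):

```javascript
// Down phase: always (re)define in the current scope, shadowing
// any same-named variable from an outer scope.
function setDown(scope, name, value) {
  scope.vars[name] = value;
}

// Up phase: walk outward looking for an existing variable first;
// only create it locally if it is found nowhere.
function setUp(scope, name, value) {
  for (let s = scope; s; s = s.outer) {
    if (name in s.vars) { s.vars[name] = value; return; }
  }
  scope.vars[name] = value;
}

// Reading fails if the variable is not defined anywhere in the chain.
function read(scope, name) {
  for (let s = scope; s; s = s.outer) {
    if (name in s.vars) return s.vars[name];
  }
  throw new Error(`status variable not defined: ${name}`);
}
```

With an outer scope holding `x`, `setDown` on an inner scope shadows the outer `x`, while `setUp` updates the outer one in place.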


Converters

Converters transform the result of token execution. IMPORTANT: converters only change the result, not the source of the value. The converted value is fed to the flattener or mapper, but is not written back to the variable or datafield it was obtained from.

Dap core provides a set of built-in converters.

Custom converters may be defined in extension libraries.

Step head

The step head is the first token of a step. The step head itself doesn't fetch datapieces; it defines what to do with the rest of the token list. It has the same grammatical structure as an ordinary token, except that its datafield specifies a dap mapper, and its value specifies a token list flattener:

mapper$status@aliases:converters=flattener

Flatteners

A flattener may be applied to the token list to convert the whole list into a single datum, which is fed to the mapper instead of the individual tokens of the step. Aliases for the flattened datum are specified in the step head's @aliases part.

The most basic dap core flatteners are:

  • concat

    Concatenates values of all tokens into a single string

  • space

    Concatenates values by a single space

  • url

    Builds a URL-encoded query string. Anonymous tokens are appended unencoded.

  • ?, !

    ANY (returns the first non-empty value from the token list) / LACK (returns true if at least one token is false/empty)

    In conjunction with !, these flatteners become NONE / ALL respectively

  • eq, asc, dsc

    Check, respectively, for equality, monotonic ascent or monotonic descent of the token list.
    The equality check may be performed on both numbers and strings; ascent and descent — only on numbers.

  • Examples

    !=concat =abra @foo=cada @bar=bra
    abracadabra

    !=space =abra @foo=cada @bar=bra
    abra cada bra

    !=url =abra @foo=cada @bar=bra =hmaputra

    !=? @empty= @foo=cada @bar=bra

    ?=eq =abra @foo=cad @bar=abra

    ?=eq =abra @bar=abra

    ?=asc =-1 =7 =12 =13

    ?=asc =-1 =7 =15 =13

    ?=dsc =-1 =-7 =-12 =-13

Custom flatteners can be defined in external libraries.
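As a rough mental model, several of the core flatteners can be sketched as plain functions over the list of token values. Behavior here is inferred from the descriptions above; this is not dap's implementation, and real flatteners also see aliases, not just values:

```javascript
// Illustrative sketches of core flatteners over an array of token values.
const flatteners = {
  concat: vals => vals.join(''),                   // single string
  space:  vals => vals.join(' '),                  // joined by a single space
  '?':    vals => vals.find(v => v) ?? '',         // ANY: first non-empty value
  '!':    vals => vals.some(v => !v),              // LACK: any false/empty token?
  eq:     vals => vals.every(v => v === vals[0]),  // all values equal
  asc:    vals => vals.every((v, i) => i === 0 || vals[i - 1] < v),
  dsc:    vals => vals.every((v, i) => i === 0 || vals[i - 1] > v),
};
```

This reproduces the examples above: `space(['abra','cada','bra'])` gives `'abra cada bra'`, `asc([-1, 7, 12, 13])` holds while `asc([-1, 7, 15, 13])` does not.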


Mappers

Mappers are to dap what functions are to JavaScript: they take arguments and perform actions. Since version 0.1.5, all dap mappers are unary — they deal with only one token at a time. Multiple tokens in a step are executed as a 'for each token' sequence.

Since all mappers are unary, the following signature is used for their description:

mapper @aliases=value

Each token comes to a mapper as an @alias1,aliasN=value datapiece. The aliases are specified by the token's @aliases part (which may be a single alias or multiple comma-separated aliases), and the value is the result of token execution — a datafield value, a variable value, or a literal; the value is also subject to conversions, if any are specified. If the token's alias is empty (as in =value, datafield@, datafield$status@:convert=value, etc.), the token is said to be 'anonymous'.
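The 'for each token' execution model can be sketched as below. `logText` is a made-up mapper for illustration, not part of dap core; the `{ aliases, value }` datapiece shape mirrors the description above:

```javascript
// A unary mapper: receives one { aliases, value } datapiece at a time.
const seen = [];
function logText({ aliases, value }) {
  seen.push(`${aliases.join(',')}=${value}`);
}

// A step with several data tokens runs the mapper once per token.
const tokens = [
  { aliases: ['foo'], value: 'cada' },
  { aliases: [],      value: 'abra' },   // anonymous token
];
tokens.forEach(logText);
```

After the run, `seen` holds one entry per token, with the anonymous token carrying an empty alias.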

The basic dap core mappers are:

Field-modifying mappers

Field-modifying mappers are used to modify a target's fields. The target may be a status variable or the node's datarow.

Custom mappers can be defined in external libraries.
