Breeze DataServer

API Docs - How It works

This resource page details how the Breeze Dataserver can be used to share information with client applications and the inventory management system. The guide is arranged with each available endpoint linked on the sidebar; each endpoint page provides related Reference Documentation and a set of tabs detailing:

  • A sample of the XML/JSON dataset typically returned.
  • Parameter properties that can be used in the filter criteria.
  • Response structure and property details, including datatypes.
  • A "Try It" option to manually test requests to the endpoint.

HTTP Methods and Request Structure

The Breeze API is a REST/SOAP server that responds to HTTP requests with XML or JSON formatted responses. The Breeze server supports the HTTP GET and POST methods, and each endpoint responds according to its specification. Typically GET is used to retrieve a dataset from a resource, while POST is used to push (INSERT/UPDATE) a dataset to the server.

The HTTP request has THREE basic components as below:

HTTP part     Description                                                       Used by Breeze Server
Header        Contains details regarding the format of the message              Optional Credentials
URL           The destination resource or endpoint and additional parameters    Required for endpoint identification
Message Body  An optional message body, e.g. file, query data, or query result  Optional POST details

GET Method

To request information from an endpoint, use the GET method; the Breeze server returns information in the message body as JSON or XML depending on the URL. It is also possible to push information to some endpoint resources using the GET method, if data is accepted via the parameters passed in the URL. Check the specific endpoint reference to see whether this is supported. An example URL is shown below:

http://api.datapel.com:8080/xml/ENDPOINT?filter~myfilterstring&authorization~code&auth_token=token 
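As a minimal sketch, a GET URL of this shape can be composed with Python's standard library. Note the server's `key~value` convention for filter and authorization versus `=` for auth_token; `build_get_url` is an illustrative helper, not part of the API.

```python
import urllib.parse
import urllib.request

def build_get_url(base, fmt, endpoint, filter_str, auth_key, token):
    """Compose a Breeze-style GET URL. The filter and authorization
    parameters use the server's '~' separator; auth_token uses '='."""
    return (f"{base}/{fmt}/{endpoint}"
            f"?filter~{urllib.parse.quote(filter_str, safe='')}"
            f"&authorization~{urllib.parse.quote(auth_key, safe='')}"
            f"&auth_token={urllib.parse.quote(token, safe='')}")

url = build_get_url("http://api.datapel.com:8080", "json", "ENDPOINT",
                    "myfilterstring", "code", "token")

# To actually issue the request (network access required):
# with urllib.request.urlopen(url) as resp:
#     body = resp.read().decode("utf-8")
```

Percent-encoding each value keeps operators such as `>` and `=` inside the filter string from being misread as URL syntax.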

POST Method

To insert or update information at an endpoint, use the POST method; the Breeze server returns status information in the message body. Typically the information being sent to the endpoint is carried in the message body; however, most endpoints also support passing small datasets via the URL as a single parameter. Specific restrictions depend on the endpoint functionality; unless otherwise specified, the raw dataset can generally be passed via the URL.

The Breeze server has a predefined structure for the POST REQUEST message body as below:

"<?xml version="1.0" encoding="utf-16"?>  
<PostRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">    
<updatetype>Insert</updatetype>    
<filter />    
<updateObject xsi:type="xsd:string">
{"JsonOrXML":true}
</updateObject>  
</PostRequest>" 

The PostRequest properties are defined as below:

Property      Description                                             Example
UpdateType    One of INSERT/UPDATE/DELETE/QUERY                       INSERT
Filter        A valid filter criteria using properties and operators  filter~basestockid=5
UpdateObject  JSON or XML encoded Dataset                             <see endpoint POST samples>*

The SIMPLIFIED POST option allows the message body to be JUST THE UPDATE OBJECT, with the update type assumed to be INSERT. This also applies to a POST with the UpdateObject as a URL parameter, in the format filter~UpdateObject; in that case the message body must be an empty string.
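A PostRequest envelope matching the structure above could be assembled as follows; `build_post_request` is an illustrative helper, not part of the API.

```python
def build_post_request(update_object, update_type="Insert", filter_str=""):
    """Wrap a JSON or XML dataset in the Breeze PostRequest envelope.
    update_type is one of Insert/Update/Delete/Query."""
    filter_xml = f"<filter>{filter_str}</filter>" if filter_str else "<filter />"
    return (
        '<?xml version="1.0" encoding="utf-16"?>\n'
        '<PostRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
        'xmlns:xsd="http://www.w3.org/2001/XMLSchema">\n'
        f"  <updatetype>{update_type}</updatetype>\n"
        f"  {filter_xml}\n"
        '  <updateObject xsi:type="xsd:string">\n'
        f"{update_object}\n"
        "  </updateObject>\n"
        "</PostRequest>"
    )

body = build_post_request('{"JsonOrXML":true}')
```

The resulting string is sent as the POST message body; under the SIMPLIFIED POST option the bare `update_object` string would be sent instead.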

Authentication and User Access Control

To manage access to your workspace all information requests must present credentials to be authenticated by the Breeze Data Server.

Authentication Workflow

In order to access the Data Server you must have an API Username and Password. The master API Username and Password will be configured by your system administrator within your Breeze Data Server config.xml file. The Authentication process works as follows:

  1. GET Request is made to the /token endpoint passing AUTHORIZATION = base64_UTF8_encoded(Username:Password)
  2. If Username and Password are valid a Session Token is returned.
  3. Session Tokens are valid for 24 hours only and must be considered as part of the overall integration design pattern.
  4. To read a resource BOTH the AUTHORIZATION code and TOKEN must be passed to the Breeze Data Server.

Providing security credentials can be done in one of two ways:

  • Encoding Authorisation information in the HTTP header (or)
  • Passing Authorisation token keys in the URL
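The authentication workflow above can be sketched in Python; `make_auth_key` is an illustrative helper that performs step 1.

```python
import base64

def make_auth_key(username, password):
    """Step 1: base64-encode the UTF-8 string 'username:password'."""
    return base64.b64encode(
        f"{username}:{password}".encode("utf-8")).decode("ascii")

auth_key = make_auth_key("superuser", "password")
token_url = f"http://api.datapel.com:8080/json/token?authorization~{auth_key}"
# A GET to token_url returns [{"token": "..."}] (step 2). Pass BOTH the
# authorization key and the returned token on every subsequent request
# (step 4), and refresh the token within 24 hours (step 3).
```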

SSL Certificates

The Breeze Data Server can be configured to operate over HTTP or HTTPS (SSL).

With HTTPS, the information transmitted from sender to receiver is encrypted; this is the most secure method of ensuring transmitted data cannot be read or tampered with. To use HTTPS you will require an SSL Certificate, which usually relates to the domain name from which the server is referenced, for example: api.datapel.com. You will need to install the certificate on the physical operating system hosting the Breeze Data Server.

To specify HTTP or HTTPS you must set this up in the Breeze Data Server config.xml, and ensure the service is restarted for changes to take effect.

Specifying the Request Header

Every request for Breeze Data Server resources must contain the following attributes in the header.

Field            Description                                                Example
Content-Type     MIME type of the request body.                             application/xml
Host             The host handling the request; the Data Server host name.  api.datapel.com
Accept-Encoding  Enables encoding of the request.                           gzip,deflate

Optional additional Header Security attributes:

Field          Description                                                Example
Authorization  Required parameter for authorising the request.            See below - Authorization
Auth_Token     Required parameter issued for your session by the server.  See below - Authorization

Specifying Credentials as URL Level Parameters

In addition to (or instead of) the above Request Header encoding, the session token and authorisation key can be passed directly via the URL as parameters.

  • The URL must take on the standard form of /type(json/xml)/endpoint?filter~filterstring

An example detailing URL security encoding is shown below.

URL encoding Authorisation Example

The following example describes the HTTP GET query for obtaining a SESSION TOKEN and then making an endpoint request.

First you are required to create the Authorization key from the base64 encoded master API Username and Password, in the following form:

  • Assuming master api username:password pair of "superuser:password" generates the Authorization Key = c3VwZXJ1c2VyOnBhc3N3b3Jk
  • To get the session token:
    
    http://api.datapel.com:8080/json/token?authorization~c3VwZXJ1c2VyOnBhc3N3b3Jk
    
    Returns a session token as below: 
    [
      {
        "token": "oUMf4ifG0UgAAAAAAAAAAAAAAAAAAAAA" 
      }
    ]
    
  • You may now use the credentials in the URL or HEADER to perform GET/POST requests to service endpoints.
    
    Example: GET current api timestamp with URL authorization
    
    http://api.datapel.com:8080/json/timestamp?filter~&authorization~c3VwZXJ1c2VyOnBhc3N3b3Jk=&auth_token=oUMf4ifG0UgAAAAAAAAAAAAAAAAAAAAA
    
    Returns a response as below: 
    {
      "ServerTimeStamp": "2014-11-04 20:23:51" 
    }
    
  • For all future ENDPOINT requests your URL must be in the following form....
    
    http://api.datapel.com:8080/json/ENDPOINT?filter~myfilterstring&authorization~url_encoded_base64(user:pass)&auth_token=url_encoded_server_access_token
    
    
  • Tokens must be refreshed within 24 hours or they will be rejected so it is recommended that a daily task of refreshing the session token be added to your system design.
  • Always make sure the token and authorization key are URL ENCODED before the full URL is assembled; base64 output contains characters such as + that are otherwise decoded on the server side and cause authentication to fail.
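This encoding requirement can be demonstrated with Python's `urllib.parse.quote`; the key value below is a hypothetical example, not a real credential.

```python
from urllib.parse import quote, unquote

# Base64 output may contain '+', '/' and '=', all of which have special
# meaning in a URL, so percent-encode credential values before composing
# the request URL.
raw_key = "c3VwZXJ1c2Vy+aGVsbG8="   # hypothetical key containing '+' and '='
encoded = quote(raw_key, safe="")
# '+' becomes %2B and '=' becomes %3D, so the server decodes the exact
# original key; an unencoded '+' would arrive as a space and fail auth.
```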

Endpoint Parameters and Operators

When requesting a dataset from an endpoint, a criteria filter can be set to limit the number of results or find a specific record subset. Generally the parameters are limited to the properties of the dataset. This reference shows the valid parameters for each endpoint resource under the Parameters tab.

Multiple parameters can be combined to produce a logical criteria however there are restrictions on the syntax and only a limited set of operators are valid as below:

Operator  Description                     Example
=         Equal to                        itemname = 'Jacket'
<>        Not Equal to                    quantity <> 0
>, <      Greater than/Less than          accountbalance > 0
>=, <=    Greater or Equal/Less or Equal  basestockid >= 20
in ()     parameter in set                basestockid in (5,6,7,10,50)
AND       required criteria               colour='blue' and quantity>0
OR        optional criteria               udf1='dealeronly' or udf='promo'

The endpoint URL must always begin with filter~. Brackets are not supported and may cause incorrect results - for example:

filter~(itemname='Jacket' AND colour='blue') OR udf1="pants" 

Will NOT work under the current criteria parser and returns: _500 Query filter could not be parsed._

If you require complex criteria contact Datapel technical support for suggestions on how best to manage these via the Breeze API.

OnPremise or CloudMirror Configuration Considerations

The Breeze Dataserver API has two very distinct configuration options as described below:

  • OnPremise : The Breeze API server is installed locally on the customer server, or on the same local network as the SQL Server instance and Datapel application software - this configuration is termed "On Premise". In this configuration any changes made by users are immediately accessible to the Breeze Dataserver, and any posted sales can be reviewed in real-time.
  • CloudMirror : The Datapel Public Cloud is a high performance internet server running the latest Breeze Dataserver API. You will be allocated a public URL that references your company's Cloud Services, for example http://OurCompany.datapel.net. In this configuration information from your on premise Datapel application is "mirrored" to the cloud server on a periodic basis, typically every 5 - 15 minutes. This configuration is preferable as your "On Premise" server is protected from public access via the internet. Generally the Datapel Cloud Mirror Service communicates with your On Premise Datapel application via an SSL connection that can be IP Address authenticated - securing your connection, server and master database. Because the CloudMirror is just a copy of your server, public attacks on it will not impact local on premise operations or system data integrity. As the CloudMirror requires relaying of information, there is a 5 - 15 minute lag between the real-time system and the cloud data view. In most cases this is easily managed by establishing stock buffers or isolating "online" vs "onpremise" operations using system defined Locations.

Best Practice Read Workflow Example

To ensure a RESPONSIVE design, the REST interface should not introduce long DELAYS while data is being served from an endpoint query. It is generally bad practice to select an excessive number of records in a single query, as a large volume of data will cause your web page and the Breeze server to appear "locked up" while the raw data is processed and returned.

Where possible data derived from long lists should be "paged" and retrieved as datasets with each block representing a number of records. The number of records to read will depend on the number and size of properties being returned from the endpoint. In general you determine the page size based on the expected BANDWIDTH of the connection you have between the client browser / application and the Breeze Dataserver.

For example, a typical internet connection (ADSL) of 1 Mb/s is equivalent to approximately 128 KB/s, so it takes approximately 1 second to download 128 KB of information. Using the ITEMS endpoint, a typical JSON record requires about 2 KB per item master record. If your dataset has 5000 items, then reading ALL RECORDS would take around 78 seconds (1 minute 18 seconds) to download, so presenting a new item list in real-time direct from the server is not practical. In reality a remote "copy" of the item master list would be held at the web server/remote application - and ideally only changes would be updated directly from the data server, on demand or as they happen.
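The estimate above works out as follows (a back-of-envelope sketch, not a benchmark):

```python
records = 5000        # item master records
record_kb = 2         # approximate size of one JSON record, in KB
link_kb_per_s = 128   # ~1 Mb/s ADSL link, in KB/s

seconds = records * record_kb / link_kb_per_s
minutes, remainder = divmod(round(seconds), 60)
# 5000 * 2 / 128 = 78.125 seconds, i.e. about 1 minute 18 seconds.
```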

If your application and Dataserver are hosted on the same physical machine or local network then transfer rates would be much higher and result sets transferred in under 1 second - making live access to data feasible. So the use of paged datasets needs to be part of your design planning.

Using TimeStamp Endpoint and Property as Changed Record Trigger

To assist in minimising the amount of redundant information being transferred between the client application and server the TIMESTAMP endpoint and a TIMESTAMP property is available for most ENDPOINT query resources. Typically the workflow for obtaining ONLY changed records is as follows:

  1. Start list update
  2. Authenticate Connection
  3. Get Server TIMESTAMP as NEW-TIMESTAMP
  4. Read Endpoint with ?FILTER~TIMESTAMP > 'LASTACCESSED-TIMESTAMP'
  5. Update local client / app records
  6. LASTACCESSED-TIMESTAMP = NEW-TIMESTAMP
  7. Wait a fixed period of time or on demand for resource loop to start

By following this general endpoint query pattern, the resulting dataset will always return just the records that have changed since the last endpoint read, minimising the amount of data transferred from the Dataserver to the browser/client application. This approach assumes you have an existing list in place - if not, the first pass will return ALL records from the endpoint resource, as the TimeStamp would be NULL.
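A minimal sketch of this loop, assuming hypothetical callables standing in for the actual Breeze API reads (none of these function names are part of the API):

```python
import time

def changed_filter(last_accessed):
    """Build the step-4 filter. An empty filter on the first pass returns
    ALL records, matching the NULL-timestamp behaviour described above."""
    if last_accessed is None:
        return ""
    return f"TIMESTAMP > '{last_accessed}'"

def sync_loop(get_server_timestamp, read_endpoint, apply_changes,
              poll_seconds=300):
    """Steps 1-7 of the changed-record workflow."""
    last_accessed = None
    while True:
        new_ts = get_server_timestamp()                      # step 3
        rows = read_endpoint(changed_filter(last_accessed))  # step 4
        apply_changes(rows)                                  # step 5
        last_accessed = new_ts                               # step 6
        time.sleep(poll_seconds)                             # step 7
```

Capturing NEW-TIMESTAMP before the read (step 3 before step 4) ensures records changed during the read are picked up on the next pass rather than skipped.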

Paging Hints (PAGING)

When working with large data sets it is useful to return partial result sets. This technique is commonly referred to as response PAGING. Endpoints that support PAGING will have the PAGING tag on the endpoint verbs bar within the "Try It" pages. The default pageSize is 500 records and can be adjusted in the DataServer configuration management panel. This example shows a pageSize = 100 records.


Use the WITH PAGING hint to enable page referencing of partial data sets as shown below:

http://api.datapel.com:8080/JSON/sql?filter~SELECT * FROM BST_BaseStock WITH PAGING

Response result set will now contain a transaction cacheid reference and always returns the first page of results:

...
        "Status": "OK",
        "cacheid": "ca79e867-36dd-47fc-8113-2c70f7c73653",
        "RowChanges": "100",
        "PageNumber": "1",
        "RowTotal": "198" 
...

By specifying the WITH PAGING hint further pages can be requested using the cacheid reference as shown below.


Use the WITH PAGING ON cacheid PAGE XX to return the relevant page number from the initial data set.

http://api.datapel.com:8080/JSON/itemslist?filter~* WITH PAGING ON ca79e867-36dd-47fc-8113-2c70f7c73653 PAGE 2

Response result set will contain the cacheid reference:

...
        "Status": "OK",
        "cacheid": "ca79e867-36dd-47fc-8113-2c70f7c73653",
        "RowChanges": "98",
        "PageNumber": "2",
        "RowTotal": "198" 
...

If the page number contains no records an EMPTY response will result.
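A sketch of consuming a paged result set; `run_query` stands in for the HTTP call and returns the parsed response envelope, or an empty value for a page with no records.

```python
def page_count(row_total, page_size):
    """Pages needed for row_total rows at page_size per page (ceiling)."""
    return -(-row_total // page_size)

def fetch_remaining_pages(run_query, cacheid, row_total, page_size):
    """Request pages 2..N via the WITH PAGING ON hint, using RowTotal
    from the first response; stop early on an EMPTY response."""
    pages = []
    for page in range(2, page_count(row_total, page_size) + 1):
        resp = run_query(f"* WITH PAGING ON {cacheid} PAGE {page}")
        if not resp:
            break
        pages.append(resp)
    return pages
```

For the example above (RowTotal 198, pageSize 100) this requests only page 2, which returns the remaining 98 rows.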

Paging via Queries and SQL Endpoint

Paging is not supported on all endpoints of the Breeze Dataserver and may need to be implemented client side via queries. Where this is required it should be implemented as per the example shown below. In this case you require an indexing column against which the dataset will be sorted.

To complete the query you should maintain @PAGE_SIZE, the number of records per page, and @PAGE_NUMBER, the offset into the dataset that you require. In the example below we show the main Item Master table and use the Item Master ID as the ASCENDING index.

SELECT * FROM BST_BaseStock WHERE BST_ID IN
 (SELECT TOP (@PAGE_SIZE) BST_ID FROM BST_BaseStock WHERE BST_ID NOT IN
 (SELECT TOP (@PAGE_SIZE*(@PAGE_NUMBER-1)) BST_ID FROM BST_BaseStock ORDER BY BST_ID)
ORDER BY BST_ID) ORDER BY BST_ID

In the below example we retrieve all Item Details with the PAGE_SIZE = 20 and are requesting PAGE_NUMBER=3. Note Security URL parameters are omitted for clarity.

http://api.datapel.com:8080/JSON/sql?filter~SELECT * FROM BST_BaseStock WHERE BST_ID IN 
                                            (SELECT TOP 20 BST_ID FROM BST_BaseStock WHERE BST_ID NOT IN 
                                            (SELECT TOP 40 BST_ID FROM BST_BaseStock ORDER BY BST_ID) ORDER BY BST_ID) ORDER BY BST_ID
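The query can be generated for any page; `paged_sql` is an illustrative helper, assuming the key column is the ascending index.

```python
def paged_sql(table, key, page_size, page_number):
    """Build the nested TOP / NOT IN paging query shown above."""
    skip = page_size * (page_number - 1)   # rows in all earlier pages
    return (f"SELECT * FROM {table} WHERE {key} IN "
            f"(SELECT TOP {page_size} {key} FROM {table} WHERE {key} NOT IN "
            f"(SELECT TOP {skip} {key} FROM {table} ORDER BY {key}) "
            f"ORDER BY {key}) ORDER BY {key}")

query = paged_sql("BST_BaseStock", "BST_ID", 20, 3)
# Produces the TOP 20 / TOP 40 form used in the URL example above.
```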

When possible, please use the built-in Paging Hints, as these hints remove dependencies on primary key index fields and will improve backward compatibility of your client application with future DataServer releases.

Server Side Cache and Diff Hints (DIFFX)

In order to maximise performance of the Breeze Dataserver a number of endpoints support server side caching and the Datapel DIFFX engine. The DIFFX engine allows the caller to reference a previous transaction and request ONLY THE DIFFERENCES. This can dramatically improve the response times of your client application. Endpoints that support the DIFFX engine will have the DIFFX tag on the endpoint verbs bar within the "Try It" pages.


Use the WITH CACHE hint to create a server-side response copy as shown below:

http://api.datapel.com:8080/JSON/itemslist?filter~* WITH CACHE

Response result set will now contain a transaction cacheid reference:

...
"cacheid": "8af515f5-c786-49cd-a179-edd250cf0207" 
...

By specifying the WITH CACHE hint the Breeze DataServer will keep a copy of the response server-side. To request ONLY changed data specify the WITH DIFF ON cacheid hint and reference the previously cached response.


Use the WITH DIFF ON cacheid hint to return only the data changes since the referenced cached response.

http://api.datapel.com:8080/JSON/itemslist?filter~* WITH DIFF ON 8af515f5-c786-49cd-a179-edd250cf0207

Response result set will contain a NEW transaction cacheid reference:

...
"cacheid": "0aee6203-d0e1-4b5f-8eca-80775de787e1" 
...

If no changes have occurred then an EMPTY response will result - continue to use the existing cacheid and WITH DIFF hint.

DIFFX Cache Persistence

The DIFFX response cache typically remains for 3-5 days after the initial cache event. Any DIFF ON hints that return a difference will create a new cache entry. Generally you should always use the latest cacheid from the most recent response. There is no limit to the number or size of cache requests you can make.

Keep in mind that using DIFFX adds load to the DataServer, as it persists more response information and adds overhead while computing the differences between the current and previous response messages. Generally the response size is reduced to 10-20% of a standard response message when using DIFFX hints. Furthermore, the more frequent the requests, the smaller the responses tend to be.

If the cacheid DOES NOT EXIST the server will just return the full response, automatically cache the data and return a new valid cacheid. Generally there is little risk of "missing" data changes due to server side cache expiration.
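One polling step of the DIFFX workflow can be sketched as below; `run_query` stands in for the HTTP call and returns the parsed response, or None for an EMPTY response.

```python
def poll_with_diffx(run_query, cacheid=None):
    """Return (cacheid_to_use_next, changes_or_None)."""
    if cacheid is None:
        resp = run_query("* WITH CACHE")             # first call: seed cache
    else:
        resp = run_query(f"* WITH DIFF ON {cacheid}")
    if resp is None:
        return cacheid, None                         # no changes: keep cacheid
    return resp["cacheid"], resp                     # changes: adopt NEW cacheid
```

Because an expired or unknown cacheid simply yields a full response with a fresh cacheid, the same step also recovers transparently from server-side cache expiry.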

Endpoint Versions

Generally as the Breeze Dataserver is upgraded the endpoint resources will return properties that are backward compatible with previous versions. This allows client applications to remain unchanged as the inventory management system and Breeze Dataserver improve and are upgraded. In some cases however functionality of a specific endpoint may need to change dramatically and potentially cause existing applications to break as returned information or filters are no longer valid.

To minimise the work required by third parties to maintain compatibility with Breeze Dataserver the endpoint interface supports a VERSION path. This allows an external application to bind directly to a specific version of an ENDPOINT so the QUERY response complies with return properties consistent with the API version that the client application was developed against. Endpoint versions will be published direct to developers as and when specification and requirement change dictates endpoint versioning.

At all times the default endpoint will support the latest functionality for the Breeze Dataserver.

  • To access a specific ENDPOINT version the request URL must be in the following form....
    
    http://api.datapel.com:8080/json/vX/ENDPOINT?filter~myfilterstring&authorization~base64(user:pass)&auth_token=server_access_token
    
    WHERE vX is v1, v2, v3 etc. - the version provided by Datapel required to support your client application interface.
    
    The older the application the lower the version number; v1 would be the oldest deprecated endpoint version.
    
    

Best Practice Order Write Workflow

POST, like the READ/GET process, needs to ensure a RESPONSIVE design and not incur DELAYS while data is processed or consumed by an endpoint service. For pushing transaction information via the Breeze API there are currently two methods available:

  • The "in process" ORDER POST (sales endpoint) or
  • QUEUED POST of an ORDER (salesqueue endpoint)

The "in process" option will attempt to commit the transaction immediately, and any exceptions will be sent back via the API, requiring the client to handle the response. The QUEUED POST pushes the order into a list of orders that will be processed by the server as part of a batch, or initiated by a local administrator or administrator process. The QUEUE can also be cleared or FLUSHED via the API, forcing the server to consume the posted orders.

Currently we recommend that Sale Orders be POSTED via the SalesQueue endpoint and triggered for processing on the server side using a SalesQueueCount query with the appended FLUSH parameter, as shown.

  • POST sale xml to the salesqueue endpoint as:
    http://api.datapel.com:8080/xml/salesqueue?filter~&authorization~c3VwZXJ1c2VyOnBhc3N3b3Jk=&auth_token=oUMf4ifG0UgAAAAAAAAAAAAAAAAAAAAA
    
  • Valid Response will be a QID code: e.g. QID-1234-5678
  • GET sale queue count with Flush to commit orders as:
    http://api.datapel.com:8080/xml/salesqueuecount?filter~FLUSH&authorization~c3VwZXJ1c2VyOnBhc3N3b3Jk=&auth_token=oUMf4ifG0UgAAAAAAAAAAAAAAAAAAAAA
    
    NOTE: Flush is not required when working with CLOUD MIRROR configurations - flush will occur automatically based on cloud mirror cycle time. 
    
  • Check order status using GET status endpoint.
    If you supplied the invoice number you can use this as search field:
    
    http://api.datapel.com:8080/xml/status?filter~invoicenum='remoteinvoicenum'&authorization~c3VwZXJ1c2VyOnBhc3N3b3Jk=&auth_token=oUMf4ifG0UgAAAAAAAAAAAAAAAAAAAAA
    
    It is recommended the QID is used to reference the transaction as shown below:
    
    http://api.datapel.com:8080/xml/status?filter~qid='QID-1234-5678'&authorization~c3VwZXJ1c2VyOnBhc3N3b3Jk=&auth_token=oUMf4ifG0UgAAAAAAAAAAAAAAAAAAAAA
    
    

Note that there may be some delay between the sale queue flush and the ability to check status - generally via the API each sale order takes 5 - 10 seconds to process and become visible to the status endpoint. If you need to query the status immediately, use the FLUSH_WAIT parameter instead, which causes the API to wait for the orders to complete importing before returning a response.
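The three request URLs of this workflow could be composed as below; `sales_queue_urls` is an illustrative helper, and `auth_qs` is the credential query-string suffix from the examples above.

```python
def sales_queue_urls(base, auth_qs, qid):
    """URLs for: POST an order, FLUSH the queue, then check status by QID."""
    return {
        "post":   f"{base}/xml/salesqueue?filter~&{auth_qs}",
        "flush":  f"{base}/xml/salesqueuecount?filter~FLUSH&{auth_qs}",
        "status": f"{base}/xml/status?filter~qid='{qid}'&{auth_qs}",
    }

urls = sales_queue_urls("http://api.datapel.com:8080",
                        "authorization~KEY&auth_token=TOKEN", "QID-1234-5678")
# POST the XML order message to urls["post"]; a valid response is a QID
# code. GET urls["flush"] to commit, then GET urls["status"] to check.
```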

POST Order Sample

The Order POST supports XML encoding only at the time of release.

Sample XML Order Message


<NewDataSet>
   <tREMOTETransHeader>
      <Company_ID>DATAPEL</Company_ID>
      <StoreCode>PRESTON</StoreCode>
      <PostingDate>2015-08-31T02:17:08Z</PostingDate>
      <TransID>#2689009</TransID>
      <TransDate/>
      <Salesperson>lyndenmcdonald@yahoo.com.au</Salesperson>
      <CardIdentification>THELNSW</CardIdentification>
      <SaleType>S</SaleType>
      <Special2>ROAD</Special2>
      <ShippingMethod>Star Track</ShippingMethod>
      <ClosedYN>Y</ClosedYN>
      <OriginalSaleType>O</OriginalSaleType>
   </tREMOTETransHeader>
   <tREMOTETransCustomer>
      <ShipToName/>
      <ShipToAddress1>273a Peel Street</ShipToAddress1>
      <ShipToAddress2/>
      <ShipToSuburb>Tamworth</ShipToSuburb>
      <ShipToPostCode>2340</ShipToPostCode>
      <ShipToState>NSW</ShipToState>
      <Address1>273a Peel Street</Address1>
      <Suburb>Tamworth</Suburb>
      <PostCode>2340</PostCode>
      <PhoneNo1>02 6766 9288</PhoneNo1>
      <EmailAddress>shop@lemonhouse.com.au</EmailAddress>
   </tREMOTETransCustomer>
   <tREMOTETransSaleTenders>
      <TenderAmount>0</TenderAmount>
   </tREMOTETransSaleTenders>
   <tREMOTETransSaleLines>
      <SKU>G0751ESS15</SKU>
      <SaleQty>1</SaleQty>
      <SaleUnitAmountIncTax>95.00</SaleUnitAmountIncTax>
      <SaleTaxByHost>Y</SaleTaxByHost>
      <SaleTaxRate>10</SaleTaxRate>
      <SKUDescription>Blue/tan SYKE SMALL BAG (ETA 15-08)</SKUDescription>
      <SalePriceByHost>N</SalePriceByHost>
   </tREMOTETransSaleLines>
</NewDataSet>

XML Order Property Reference

The following table describes each property, type and example value.

Message Text/String Encoding

Preferred string encoding is UTF-8. Fields in the XML file should NOT use CDATA and should not contain any of the following characters without encoding: ~ < > & TAB CR LF (ASCII 9 / 10 / 13).

Special Character encoding is required as shown below.

Value    Encoding
~        ~tl~
<        ~lt~
>        ~gt~
&        ~am~
Chr(9)   ~09~
Chr(10)  ~10~
Chr(13)  ~13~
'        Single quotes do not require any encoding
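The substitution table above can be applied directly; note that the literal '~' must be escaped first, so the '~'-delimited tokens introduced by later rules are not corrupted. `breeze_encode` is an illustrative helper, not part of the API.

```python
# Order matters: escape the literal '~' before any rule that introduces
# '~'-delimited tokens into the output.
BREEZE_ESCAPES = [
    ("~", "~tl~"),
    ("<", "~lt~"),
    (">", "~gt~"),
    ("&", "~am~"),
    ("\t", "~09~"),   # Chr(9)
    ("\n", "~10~"),   # Chr(10)
    ("\r", "~13~"),   # Chr(13)
]

def breeze_encode(text):
    """Escape characters the Breeze XML message body cannot carry raw.
    Single quotes need no encoding, so they are left untouched."""
    for raw, escaped in BREEZE_ESCAPES:
        text = text.replace(raw, escaped)
    return text
```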

Data Dictionary

For access to the WMS data dictionary please contact Datapel Developer Support

For authorised users please download or review using the following link.

Read our Data Dictionary Release 2018

Need more help?

If you need more help try our forums at FORUMS / DEVELOPER API...

(c) Copyright 2018 Datapel Systems Pty Ltd.