Pipette Analysis - Service Development

Last Updated: Oct 10, 2015 04:48PM PDT

Summary: Pipette Analysis allows Pipeline Pilot protocol developers to deploy protocols to end users through an intuitive web-based pipelining interface. To use existing protocols in Pipette Analysis, the protocols must meet some basic criteria and be located in specific locations in the protocol database (XMLDB). This article provides details for both on-prem and ScienceCloud deployments.


Service Types

The types of protocols that are exposed to users in Pipette Analysis Pipelining include:

  • Pipelining services – Exposed to end users in the middle section to build pipelines.
  • Outputs – Visible on the right as places users can drag and drop results for sending over to new reports, other applications, and to generate output files.


Service Templates

A convenient starting point when developing a new service for Pipette is the New Protocol from Template feature on Pipeline Pilot's File menu.


The following service templates are currently available:

  • Reader Service – A protocol with an unconnected Pass port and no input ports. This protocol is typically used to start pipelines.
  • Calculator Service - A protocol with both an unconnected input port and an unconnected Pass port. This protocol is also used to construct pipelines.
  • Output Service – A protocol with an unconnected input port, but no unconnected Pass port. This protocol can be used to create a service that outputs data to reports, applications or files.

Note: These templates are intended to provide you with examples to use as starting points for developing your own services. You can design other types of protocols, including ones that use filters, lookup services, aggregators, and more.

Authoring Services


The default location for Pipette Analysis services depends on how the application is deployed:

Deployment     Location
On-prem        Protocols/Web Applications/Pipette/Services
ScienceCloud   Sandbox Protocols/ScienceCloud/<YOUR COMPANY>/Pipette Analysis/Services


  • A Pipeline Pilot administrator can change this location and also add more locations if necessary. When running Pipette Analysis on-prem, see Pipette Analysis Administration.
  • For the current location on your server, consult your administrator.
  • Services can be grouped into subfolders at this location. Subfolder names will be used as searchable tags for the service. Multiple subfolders will be used as multiple separate tags.
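As a rough illustration of the tagging rule above, the sketch below derives tags from a service's subfolder path. This is not the actual Pipette implementation; the paths and function name are hypothetical.

```python
# Sketch (not Pipette's real code): each subfolder between the configured
# services root and the protocol itself becomes a searchable tag.
def tags_for_service(service_path, root="Protocols/Web Applications/Pipette/Services"):
    """Return the subfolder names between the root and the protocol as tags."""
    relative = service_path[len(root):].strip("/")
    parts = relative.split("/")
    return parts[:-1]  # the last path element is the protocol, not a tag

print(tags_for_service(
    "Protocols/Web Applications/Pipette/Services/Chemistry/Filters/Lipinski Filter"))
# ['Chemistry', 'Filters']
```

A service saved directly at the root therefore gets no tags, while one two folders deep gets two.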


Pipette Analysis services are standard Pipeline Pilot protocols that must have certain features to integrate correctly.

Required:

  • One or more components with an open and unconnected Pass port.

Optional:

  • One component with an open and unconnected input port.
  • One or more components with an open and unconnected Fail port.

Any data flowing out of the unconnected Pass port appears at the step output in the Pipette Analysis pipeline. The order of records is preserved.

Any data flowing out of the unconnected Fail ports is flagged as a failed step output in the Pipette Analysis user interface. Ensure that you remove Fail ports from components that should not output records to the user interface.

Services without an open input port are called Generator Services. They typically belong at the start of a Pipette Analysis pipeline and often run a database query, web service lookup, or just read from file. They ignore any records that are input from the previous step.

Services with an open input port often act as calculators or filters, or perform lookups based on the data records that flow into them. You can create and remove data records at your discretion.

There is nothing to stop a generator service from having an open input port. Examples where this is useful include:

  • A generator service that should add additional records to those flowing into the pipeline.
  • A generator service that should replace existing records, such as replacing each input compound with one record per container that contains the compound.

Examples of generator and calculator services are available as protocol templates in Pipeline Pilot.

Linking Data to External Systems

It is possible to link individual properties on data records to external systems. These links appear in Pipette Analysis: Pipelining.

Two types of property links are supported: images and links. Both currently behave the same, but this is likely to change in the future. To have a property link appear in the user interface, add metadata to data record properties with the 'image' or 'link' key.
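Only the 'image' and 'link' metadata keys come from this article; the surrounding record structure below is a conceptual Python sketch, not the Pipeline Pilot data model, and the registry URL is hypothetical.

```python
# Conceptual sketch of a data record property carrying link metadata.
record = {
    "Compound ID": {
        "value": "CPD-001234",
        "metadata": {
            "link": "https://registry.example.com/compounds/CPD-001234",
        },
    },
}

def property_links(rec):
    """Collect properties whose metadata contains an 'image' or 'link' key."""
    return {
        name: prop["metadata"][key]
        for name, prop in rec.items()
        for key in ("image", "link")
        if key in prop.get("metadata", {})
    }
```

Properties collected this way are the ones the Pipette Analysis user interface would render as clickable links or images.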


Help Text
It is important to update the protocol help text (summary and help description) when you create services because your users will see this information when they select and run services.

The help summary is particularly useful because it is searchable when users are looking for a service. Be sure to include useful keywords that will help users find what they are looking for.

Supported Parameters

Pipette Analysis supports common protocol parameters and adds two new parameters for entering structures and reactions. The full list of supported parameters is described in the table below.

Parameter          Notes
StringType         Shown as a single-line text field.
TextType           Shown as a multi-line text field.
MoleculeType       Uses the Pipette Sketcher to allow structures to be drawn.
ReactionType       Uses the Pipette Sketcher to allow reactions to be drawn.
PipetteDataStore   (v2.1.0+) Allows the user to select from their currently saved Pipette Analysis data stores. Passes the user-scoped cache ID to the server for reading.
PropertyName       Select from the properties currently available on the input data records.
PropertyNames      Can be used in place of flagging PropertyName as an array, to aid use in the Pipeline Pilot Professional Client.

The StringType parameter supports the array flag, showing a multi-line text field. Each row in the text field is considered a separate item in the array.

The PropertyName parameter supports the array flag, allowing selection of multiple properties.
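The article states that each row of an array-flagged StringType field becomes a separate array item. The sketch below illustrates that mapping in Python; whether Pipette keeps empty rows is not stated here, so dropping them is an assumption of this sketch.

```python
# Sketch of the array-flagged StringType mapping: one text-field row
# per array item. Empty rows are dropped here as an assumption.
def text_to_array(text):
    return [row for row in text.splitlines() if row.strip()]

print(text_to_array("benzene\ntoluene\n\nxylene"))
# ['benzene', 'toluene', 'xylene']
```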

Default values are shown in the web client as expected.

Parameter groups are not shown.

(See Parameter Scripts described below.)

Required and Optional Parameters

As is standard with Pipeline Pilot, parameters flagged as required must be specified before the service will run. The service runs without showing the settings dialog in the pipelining web client if one of the following conditions is met:

  • All required parameters have defaults
  • There are only optional parameters
  • There are no service parameters
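The three conditions above collapse into a single check: every required parameter must already have a value. The sketch below models this with a simplified stand-in for a protocol parameter; the class and function names are illustrative, not Pipeline Pilot API.

```python
# Sketch of the "skip the settings dialog" decision.
from dataclasses import dataclass

@dataclass
class Parameter:
    required: bool
    default: object = None

def runs_without_dialog(params):
    """True if there are no parameters, only optional parameters,
    or every required parameter has a default."""
    return all(p.default is not None for p in params if p.required)

assert runs_without_dialog([])                                   # no parameters
assert runs_without_dialog([Parameter(required=False)])          # only optional
assert runs_without_dialog([Parameter(True, default="SMILES")])  # required, has default
assert not runs_without_dialog([Parameter(required=True)])       # required, no default
```

Note that `all()` over an empty sequence is true, which is exactly why the "no parameters" and "only optional parameters" cases fall out of the same rule.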

Parameters can be modified while the service is running. This cancels and reruns the service.

Parameter Scripts

StringType parameters can have LegalValues or LegalValueScripts.

The IDs of the data stores flowing into the current step are passed to the LegalValueScript when it runs. These IDs correspond to the IDs of user scope caches. These caches can be read from, but must not be changed or removed. Usually, this is a single ID, but it can be multiple IDs if this is the first step and multiple input data stores were defined. The IDs are passed as the _InputDataIds parameter. Protocol developers add this as an implementation parameter to the protocol.
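The _InputDataIds parameter name comes from the article, but the exact wire format of its value is not stated here. The helper below is a defensive sketch that normalizes either a single ID or a list of IDs, as a LegalValueScript might need to do; treat the handling of both shapes as an assumption.

```python
# Sketch: normalize the _InputDataIds value for use in a LegalValueScript.
# The format is an assumption -- a single cache ID or a list of IDs.
def input_data_ids(value):
    if value is None:
        return []
    if isinstance(value, (list, tuple)):
        return list(value)
    return [value]

print(input_data_ids("cache-42"))
# ['cache-42']
```

Remember that these IDs name user-scope caches that may be read but never changed or removed.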

Pipeline Pilot Reporting Based Settings Dialogs

Reporting Protocols can be used to build settings dialogs for services. These are associated with the service using the Protocol Form parameter as has classically been done with Web Port.

Complex Reporting functionality such as forms created dynamically using JavaScript may not work correctly.

Unlike Web Port, the settings from the service are applied to the form. This allows the form to be reloaded with your previously saved settings. Some controls, such as submit buttons, may be removed from the report to aid integration.

Supported Record Types

Pipette Analysis Pipelining will preview data records with the following types of data on them:

  • Compound
  • Reaction
  • Image (including multiple images per data record)


Error Handling
A cleaned version of any error returned by a protocol that fails can be viewed by clicking on the "Failed" link in the top-right of the step.

To see the full error, run the service with your browser's developer tools (typically, this can be opened by pressing F12) and capture network requests.

If your service can return sensitive information in an unhandled error, be sure to trap all errors in your service and return a generic error message to the user with the PilotScript Error() function.
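In the protocol itself you would use the PilotScript Error() function as the article says; the Python sketch below shows the same trap-everything pattern in language-neutral form. The function names are hypothetical.

```python
# Sketch of the recommended pattern: trap any error a service step raises,
# keep the detail server-side, and surface only a generic message so
# sensitive information never reaches the user's browser.
import logging

def run_service_step(step, records):
    try:
        return step(records)
    except Exception:
        # Full detail stays in the server log for the developer.
        logging.exception("Pipette service step failed")
        # Only a generic message goes to the user, as a PilotScript
        # Error() call would in the real protocol.
        raise RuntimeError("The service failed to process your data. "
                           "Contact your administrator.") from None
```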

Developer Mode

Additional buttons can be displayed in the user interface for service development.

To indicate you are a service developer:

  • Have your administrator give you the Pipette Analysis/DevelopService permission in the Pipeline Pilot Administration Portal.

- OR -

  • Add the _developer flag as a URL parameter (http://<myserver>:9944/pipette/?_developer=true)

On each step, the following should be displayed:

  • Develop Service button – Open the current service in Pipeline Pilot (9.2 or later).
  • Rerun button – To rerun the service. This is useful after you save a new version in Pipeline Pilot.

Best Practices

  • Try to use natural-language property names because they wrap well in the user interface and are easier for users to read. For example, "Num H Atoms" is preferred over "Num_H_Atoms" or "NumHAtoms".
  • Trap common errors in your service protocols and provide error messages that are useful to users.
  • Ensure that your services have useful keywords in the summary, as this is searched when users enter text in the Add Service dialog.
  • Group your services into subfolders so these subfolder names get added as searchable tags.

Authoring Outputs

Outputs are a specialized type of service. As such, the parameter and help-text guidance for services above also applies to outputs.


The default location for Pipette Analysis outputs depends on how the application is deployed:

Deployment     Location
On-prem        Protocols/Web Applications/Pipette/Outputs
ScienceCloud   Sandbox Protocols/ScienceCloud/<YOUR COMPANY>/Pipette Analysis/Outputs


  • The administrator can change this location and also add more locations if necessary. For details, see Pipette Analysis Administration Guide.
  • For the current location on your server, consult your administrator.
  • Outputs can be grouped into subfolders at this location.
  • Outputs are shared between Pipette Analysis Pipelining and Pipette Analysis Charting.


Pipette Analysis outputs are standard Pipeline Pilot protocols that must have certain features for proper integration.

Required:

  • Write a file to the top level of the job directory: $(jobdir).

Optional:

  • One component with an open and unconnected input port.
  • Write multiple files to folders under the job directory.
  • Use Reporting components.

The file written to the job directory is returned to the user's browser. If the browser can open the format (e.g., HTML reports and PDF documents), the file opens in a new browser tab. If the browser cannot open the format (e.g., SD files), the user is prompted to download it.
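The open-in-tab versus download decision described above can be pictured as a MIME-type check. The viewable-type list below is an assumption for illustration, not Pipette's actual logic.

```python
# Sketch of the delivery decision for an output file, keyed on whether
# the browser can display the guessed MIME type. The set of viewable
# types is illustrative only.
import mimetypes

BROWSER_VIEWABLE = {"text/html", "application/pdf", "image/png", "text/plain"}

def delivery_mode(filename):
    mime, _ = mimetypes.guess_type(filename)
    return "open in new tab" if mime in BROWSER_VIEWABLE else "prompt to download"

print(delivery_mode("report.pdf"))   # open in new tab
print(delivery_mode("results.sdf"))  # prompt to download
```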

The data that the user has dropped on the output will be passed to the service through the open input port (if available).

In addition, the @id global contains the ID of the user-scoped cache that contains this data, and the @indexes global contains the selected data IDs (if specified). This cache can be read, but must not be changed or deleted. Take a copy of the data if you need to change it.

Testing Services

After authoring and saving new services, protocol developers should test them with Pipette. The first time you add a new service or change service parameters, it is necessary to refresh the Pipette page in your browser to ensure that the service appears in the list or as an output.

To test any subsequent changes to your service, rerun the step where it is used by hovering over the step and typing "q". An updated output service can be tested by dragging a data item on the output service a second time.

Publishing a Service

  • On-prem – The development cycle ends as soon as the service is saved to the location visible to end users.
  • ScienceCloud – Perform a protocol publication step to move your service into production. You can do this by right-clicking the protocol and selecting Publish to ScienceCloud (then follow the onscreen instructions). For further details, see Publishing Protocols on ScienceCloud.

Related Articles

  • Pipette Analysis Administration Guide
  • Publishing Protocols on ScienceCloud

