Release Notes

v2023.9.1 Release Notes

These are Release Notes for v2023.9.1 of Risk Cloud API v2, released on September 25th, 2023.

Risk Cloud API v2 is a collection of new API-first and RESTful API endpoints to streamline the creation of custom integrations with the Risk Cloud.

These endpoints are currently in open alpha, meaning that backwards compatibility is not guaranteed and breaking changes are to be expected as the endpoints are finalized. The full release of these new v2 endpoints is anticipated for late 2023.

Risk Cloud API v2 Resources

Featured Updates

No API updates.

v2023.9.0 Release Notes

These are Release Notes for v2023.9.0 of Risk Cloud API v2, released on September 8th, 2023.

Featured Updates

Steps

POST /api/v2/steps
  • assignableUserType - the default value has been updated from APP_AND_EXTERNAL_USERS to APP_USERS
v2023.8.1 Release Notes

These are Release Notes for v2023.8.1 of Risk Cloud API v2, released on August 24th, 2023.

Featured Updates

Fields (New)

Retrieve a page of all fields whose parent application the current user has Build Access to.

These fields can be filtered by the following properties:

  • application-id - get all fields of a given application
  • workflow-id - get all fields of a given workflow
  • step-id - get all fields of a given step
  • field-type - get all fields of a given field type
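The filters above can be combined into a query string. A minimal sketch, assuming a GET /api/v2/fields path and a placeholder subdomain (both are assumptions for illustration):

```python
from urllib.parse import urlencode

# Hypothetical base URL; swap in your Risk Cloud subdomain.
BASE_URL = "https://your-company.logicgate.com"

def fields_url(filters):
    """Build a fields query URL from the supported filter parameters."""
    allowed = {"application-id", "workflow-id", "step-id", "field-type"}
    # Keep only the documented filters with non-empty values.
    params = {k: v for k, v in filters.items() if k in allowed and v is not None}
    query = urlencode(params)
    return f"{BASE_URL}/api/v2/fields" + (f"?{query}" if query else "")

# e.g. all fields of a given workflow
url = fields_url({"workflow-id": "a1B2c3D4"})
```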
v2023.8.0 Release Notes

These are Release Notes for v2023.8.0 of Risk Cloud API v2, released on August 10th, 2023.

Featured Updates

No API updates.

Risk Cloud API: Postman

Build and refine your custom integration with our user-friendly Risk Cloud API Postman Workspace, which you can import to your Postman setup via the button below.

Once the Risk Cloud API Postman Collection and Environment have been forked to your Postman Workspace, you're ready to begin integrating by following the steps below.

  • Obtain either your API token or client key & secret key by following the instructions in Risk Cloud API: Authentication

  • In the Risk Cloud API Environment, set the following variable-value pairs:

    • bearerToken - your API token obtained above
    • baseUrl - your Risk Cloud environment (https://environment.logicgate.com, with environment swapped for your subdomain)

  • If authenticating via a client and secret, additionally set the following variable-value pairs:

    • basicAuthUsername - your client key

    • basicAuthPassword - your secret key

  • You're set up to begin sending requests to your Risk Cloud environment from Postman!
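Outside of Postman, the same environment values translate directly into an HTTP request. A minimal stdlib sketch, assuming a GET endpoint such as /api/v2/applications (the endpoint choice here is illustrative):

```python
from urllib.request import Request

# Stand-ins for the baseUrl and bearerToken Postman variables above.
base_url = "https://your-company.logicgate.com"
bearer_token = "YOUR_API_TOKEN"

# Every Risk Cloud API call carries the token in the Authorization header.
req = Request(
    f"{base_url}/api/v2/applications",
    headers={"Authorization": f"Bearer {bearer_token}"},
    method="GET",
)
```
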
v2023.7.1 Release Notes

These are Release Notes for v2023.7.1 of Risk Cloud API v2, released on July 27th, 2023.

Featured Updates

No API updates.

v2023.7.0 Release Notes

These are Release Notes for v2023.7.0 of Risk Cloud API v2, released on July 13th, 2023.

Featured Updates

Steps (New)

Retrieve a page of all steps whose parent application the current user has Build Access to.

POST /api/v2/steps

Create a step from a JSON request body.

Retrieve a step specified by the ID in the URL path.

Delete a step specified by the ID in the URL path.

Update a step specified by the ID in the URL path from a JSON request body. Only present properties with non-empty values are updated.
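A create request against POST /api/v2/steps might be sketched as follows; the payload properties shown are illustrative assumptions, not the full step schema:

```python
import json
from urllib.request import Request

# Illustrative payload; consult the API documentation for the full schema.
payload = {
    "name": "Evidence Review",          # hypothetical step name
    "assignableUserType": "APP_USERS",  # property referenced in these notes
}

req = Request(
    "https://your-company.logicgate.com/api/v2/steps",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_TOKEN",
    },
    method="POST",
)
```
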

v2023.6.1 Release Notes

These are Release Notes for v2023.6.1 of Risk Cloud API v2, released on June 29th, 2023.

Featured Updates

No API v2 updates are in this release. Step v2 and Record Read v2 API endpoints are both in active development.

v2023.6.0 Release Notes

These are Release Notes for v2023.6.0 of Risk Cloud API v2, released on June 14th, 2023.

Featured Updates

Application

Return Type

  • Changed response : 200 OK
  • Changed content type : application/json
    • Changed property restrictBuildAccess (boolean)

Request

  • Changed content type : application/json
  • Changed property restrictBuildAccess (boolean)

Return Type

  • Changed response : 200 OK
  • Changed content type : application/json
    • Changed property restrictBuildAccess (boolean)
POST /api/v2/applications

Return Type

  • Changed response : 200 OK
  • Changed content type : application/json
    • Changed property restrictBuildAccess (boolean)

Return Type

  • Changed response : 200 OK
  • Changed content type : application/json
    • Changed property content (array)
    • Changed items (object):
      • Changed property restrictBuildAccess (boolean)
v2023.5.2 Release Notes

These are Release Notes for v2023.5.2 of Risk Cloud API v2, released on June 5th, 2023.

Featured Updates

No API v2 updates are in this release. Step v2 and Record Read v2 API endpoints are both in active development.

v2023.5.1 Release Notes

These are Release Notes for v2023.5.1 of Risk Cloud API v2, released on May 18th, 2023.

Featured Updates

Application

  • Changed response : 200 OK
  • Changed content type : application/json

    • Added property restrictBuildAccess (boolean)

    • Deleted property restrict-build-access (boolean)

  • Changed response : 200 OK
  • Changed content type : application/json

    • Added property restrictBuildAccess (boolean)

    • Deleted property restrict-build-access (boolean)

  • Changed content type : application/json
  • Added property restrictBuildAccess (boolean)

  • Deleted property restrict-build-access (boolean)

POST /api/v2/applications
  • Changed response : 200 OK
  • Changed content type : application/json

    • Added property restrictBuildAccess (boolean)

    • Deleted property restrict-build-access (boolean)

  • Changed response : 200 OK
  • Changed content type : application/json

    • Changed property content (array)

    • Changed items (object):

      • Added property restrictBuildAccess (boolean)

      • Deleted property restrict-build-access (boolean)

    • Changed property page (object)

      • Added property totalElements (integer)

      • Added property totalPages (integer)

      • Deleted property total-elements (integer)

      • Deleted property total-pages (integer)

Workflow

  • Changed response : 200 OK
  • Changed content type : application/json

    • Added property recordPrefix (string)

    • Added property applicationId (string)

    • Deleted property record-prefix (string)

    • Deleted property application-id (string)

  • Changed response : 200 OK
  • Changed content type : application/json

    • Added property recordPrefix (string)

    • Added property applicationId (string)

    • Deleted property record-prefix (string)

    • Deleted property application-id (string)

  • Changed content type : application/json
  • Added property recordPrefix (string)

  • Deleted property record-prefix (string)

POST /api/v2/workflows
  • Changed content type : application/json
  • New required properties:
    • applicationId
    • recordPrefix
  • New optional properties:
    • application-id
    • record-prefix
  • Added property recordPrefix (string)

  • Added property applicationId (string)

  • Deleted property record-prefix (string)

  • Deleted property application-id (string)

  • Changed response : 200 OK
  • Changed content type : application/json

    • Added property recordPrefix (string)

    • Added property applicationId (string)

    • Deleted property record-prefix (string)

    • Deleted property application-id (string)

  • Changed response : 200 OK
  • Changed content type : application/json

    • Changed property content (array)

    • Changed items (object):

      • Added property recordPrefix (string)

      • Added property applicationId (string)

      • Deleted property record-prefix (string)

      • Deleted property application-id (string)

    • Changed property page (object)

      • Added property totalElements (integer)

      • Added property totalPages (integer)

      • Deleted property total-elements (integer)

      • Deleted property total-pages (integer)

Workflow Map

  • Changed response : 200 OK
  • Changed content type : application/json

    • Changed property page (object)

      • Added property totalElements (integer)

      • Added property totalPages (integer)

      • Deleted property total-elements (integer)

      • Deleted property total-pages (integer)

v2023.5.0 Release Notes

These are Release Notes for v2023.5.0 of Risk Cloud API v2, released on May 8th, 2023.

Featured Updates

Authentication

POST /api/v1/account/token

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Application

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

POST /api/v2/applications

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed content type : application/json

  • Changed property type (string)

    Added enum values:

    • CONTROLS_COMPLIANCE
    • CYBER_RISK_MANAGEMENT
    • DATA_PRIVACY_MANAGEMENT
    • ESG
    • INTERNAL_AUDIT_MANAGEMENT
    • OPERATIONAL_RESILIENCY
    • POLICY_MANAGEMENT
    • REPOSITORY

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed content type : application/json

  • Added property restrict-build-access (boolean)

  • Deleted property restrictBuildAccess (boolean)

  • Changed property type (string)

    Added enum values:

    • CONTROLS_COMPLIANCE
    • CYBER_RISK_MANAGEMENT
    • DATA_PRIVACY_MANAGEMENT
    • ESG
    • INTERNAL_AUDIT_MANAGEMENT
    • OPERATIONAL_RESILIENCY
    • POLICY_MANAGEMENT
    • REPOSITORY

Workflow

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

  • Added: application-id in query
  • Added: include-jira-workflows in query
  • Deleted: applicationId in query
  • Deleted: includeJiraWorkflows in query

POST /api/v2/workflows

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed content type : application/json

New required properties:

  • application-id
  • record-prefix

New optional properties:

  • applicationId
  • recordPrefix
  • Added property record-prefix (string)

  • Added property application-id (string)

  • Deleted property recordPrefix (string)

  • Deleted property applicationId (string)

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed content type : application/json

  • Added property record-prefix (string)

  • Deleted property recordPrefix (string)

Workflow Map

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

  • Added: workflow-id in query
  • Deleted: workflowId in query

POST /api/v2/workflow-maps

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Changed response : 200 OK

  • New content type : application/json

  • Deleted content type : */*

Risk Cloud API: Automated Evidence Collection

With the Automated Evidence Collection endpoint, you can push evidence files into the Risk Cloud.

Whether your systems are secure, custom, or on-prem, the Automated Evidence Collection endpoint allows you to automate the storage of evidence in the Risk Cloud on your terms, without needing to grant your Risk Cloud environment access to your internal systems.

In this article, we will walk through the steps necessary for uploading evidence with the Risk Cloud API.

  1. Obtain the STEP_ID of the step where you want to create a new record that holds the attachment
  2. Obtain the FIELD_ID of the field where you would like to upload the attachment
  3. Obtain the RECORD_ID of the parent record to which the newly created evidence record will be linked
  4. Upload a file using the following Evidence Collection POST request
POST /api/v1/evidence?parentRecordId={RECORD_ID}&fieldId={FIELD_ID}&stepId={STEP_ID}

Setup

Risk Cloud Application Setup

Automated Evidence Collection requires an application with two workflows linked to each other. The Controls Compliance Application available from Risk Cloud Exchange is an ideal application to get started.

API Authentication

Prior to any interaction with Risk Cloud’s APIs, we will need to set the Authorization header. Instructions can be found in the usage article Risk Cloud API: Authentication.

Evidence Endpoint Usage

Step 1: Obtain the STEP_ID

In this first step, we will run a series of requests to determine the STEP_ID of the step where we would like to create a new record to hold the attachment. If you already know your STEP_ID, you may continue to Step 2: Obtain the FIELD_ID.

Using the Risk Cloud application

The most straightforward way to find a step ID is to navigate to the step builder page in the UI and take the ID from the end of the URL:

http://your-company.logicgate.com/build/steps/STEP_ID

Using the Risk Cloud API

First, we need to determine the WORKFLOW_ID of the workflow that contains our step. To do this, you can send a GET request to the workflows endpoint.
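As a sketch, the request could be built as follows; the /api/v1/workflows path is an assumption for illustration, so consult the API documentation for the exact endpoint:

```python
from urllib.request import Request

# Hypothetical workflows listing request; path and subdomain are placeholders.
req = Request(
    "https://your-company.logicgate.com/api/v1/workflows",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    method="GET",
)
```
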

This will return an array of workflow objects, each looking like this:

{
  "id": "WORKFLOW_ID",
  "name": TABLE REPORT NAME,
  "recordPrefix": null,
  "allowGroups": false,
  "requireGroups": false,
  "xpos": 177,
  "ypos": 156,
  "priority": 0,
  "sla": {
    "enabled": false,
    "duration": 0
  },
  "steps": [
    {
      "stepType": "Origin",
      "id": "xt2X0dSM",
      "name": "Default Origin",
      "stepType": "Origin",
      "priority": 1,
      "allowEntitlements": true,
      "xpos": 55,
      "ypos": 55,
      "isPublic": false,
      "sla": {
        "enabled": false,
        "duration": 0
      },
      "chain": false,
      "origin": true,
      "end": false
    },
    {
      "stepType": "End",
      "id": "Y5B1k7yq",
      "name": "Default End",
      "stepType": "End",
      "priority": 2,
      "allowEntitlements": true,
      "xpos": 200,
      "ypos": 55,
      "isPublic": false,
      "sla": {
        "enabled": false,
        "duration": 0
      },
      "chain": false,
      "origin": false,
      "end": true
    }
  ]
}

Once you identify the step where you would like to add an attachment, you can take the “id” value as your STEP_ID for the subsequent steps. Also keep track of the “id” value of the workflow object as the WORKFLOW_ID for the next step.
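Programmatically, pulling the two IDs out of the workflow object is a small traversal. The sample data below is abbreviated from the response shown above:

```python
# Abbreviated workflow object, shaped like the response above.
workflow = {
    "id": "WORKFLOW_ID",
    "steps": [
        {"id": "xt2X0dSM", "name": "Default Origin", "stepType": "Origin"},
        {"id": "Y5B1k7yq", "name": "Default End", "stepType": "End"},
    ],
}

workflow_id = workflow["id"]
# Pick the step that should hold the new record, here by name.
step_id = next(s["id"] for s in workflow["steps"] if s["name"] == "Default End")
```
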

Step 2: Obtain the FIELD_ID

In this step, we will run a series of requests to determine the FIELD_ID of the field where we would like to upload our attachment. If you already know your FIELD_ID, you may continue to Step 3: Obtain the RECORD_ID.

Using the Risk Cloud application

The most straightforward way to find a field ID is to navigate to the step builder page in the UI and click the edit pencil on the specific field. The field ID will be displayed on the field edit modal:

Using the Risk Cloud API

Using our WORKFLOW_ID from the previous step, we can send a GET request to the fields endpoint to find the specific Field where we want to add an attachment.

This request will return an array of field objects, similar to this object:

{
  "fieldType": "TEXT_AREA",
  "id": "FIELD ID",
  "name": "text1",
  "label": "text1",
  "tooltip": null,
  "currentValues": [],
  "operators": [
    "NULL",
    "NOT_NULL",
    "EQUALS",
    "NOT_EQUALS",
    "CONTAINS",
    "DOES_NOT_CONTAIN"
  ],
  "convertibleTo": [
    "TEXT"
  ],
  "pattern": null,
  "message": null,
  "hasHtml": false,
  "fieldType": "TEXT_AREA",
  "valueType": "Common",
  "validTypeForCalculationInput": false,
  "discrete": false,
  "global": false
}

Once you identify the field where you would like to add an attachment, you can take the “id” value as your FIELD_ID for the subsequent steps.

Step 3: Obtain the RECORD_ID

In this step, we will run a series of requests to determine the RECORD_ID of the record that will serve as the parent record to which uploaded attachments are linked. If you already know your RECORD_ID, you may continue to Step 4: Upload a file using a POST request.

Using the Risk Cloud application

The most straightforward way to find a record ID is to navigate to the record in the UI and take the ID from the end of the URL:

http://your-company.logicgate.com/records/RECORD_ID

Using the Risk Cloud API

An overview of the record search endpoint is available in the article Risk Cloud API: Record Search.

Step 4: Upload a file using a POST request

In this step, we will use the STEP_ID, FIELD_ID, and RECORD_ID found in the previous steps to upload our attachment.

The file can be sent in the request using the multipart/form-data content type with a key named file and a value of the attachment file (often represented by HTTP request libraries or tools as the path to the file).

A cURL sample is demonstrated below:

curl --location 'https://your-company.logicgate.com/api/v1/evidence?parentRecordId={RECORD_ID}&fieldId={FIELD_ID}&stepId={STEP_ID}' \
--header 'Authorization: Bearer {API_TOKEN}' \
--form 'file=@"/the/path/to/evidence/file.pdf"'
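The same upload can be sketched with Python's standard library. The file form key and query parameters follow the cURL sample above; the multipart assembly here is a hand-rolled illustration, and in practice an HTTP library would build the body for you:

```python
import uuid
from urllib.request import Request

def evidence_request(base_url, token, record_id, field_id, step_id,
                     filename, file_bytes):
    """Build the multipart/form-data POST for the evidence endpoint."""
    boundary = uuid.uuid4().hex
    # One form part named "file" carrying the attachment bytes.
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()
    url = (f"{base_url}/api/v1/evidence?parentRecordId={record_id}"
           f"&fieldId={field_id}&stepId={step_id}")
    return Request(url, data=body, method="POST", headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    })
```
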

Once you have built this body, you can send it using the following POST request:

POST /api/v1/evidence?parentRecordId={RECORD_ID}&fieldId={FIELD_ID}&stepId={STEP_ID}

The response should look like this:

{
  "recordId": "CREATED_RECORD_ID",
  "record": { Created Record Information Here },
  "parentRecordId": "RECORD_ID",
  "parentRecord": { Parent Record Information Here },
  "attachmentId": "ATTACHMENT_ID",
  "attachment": { Attachment Data Here },
  "stepId": "STEP_ID",
  "step": { Step Information Here }
}

After sending this final POST request, your attachment should be attached to a newly created record in your specified Step linked to your specified Record and Field.

For any additional questions, please reach out to [email protected]!

Risk Cloud API: Getting Started

The Risk Cloud API is a collection of RESTful API endpoints that empower you and your team to directly integrate, automate, and build with the Risk Cloud. Risk Cloud API endpoint payloads are JSON-based, with some endpoints supporting exports in CSV and XLSX formats for flexible integration.

Explore our full API documentation or follow the step-by-step walkthrough below.

Getting Started

In this walkthrough, we will go over some basic concepts of the Risk Cloud API, including authentication, pagination, getting data, and updating data.

Authentication

The Risk Cloud API uses OAuth 2.0 for authentication, passing a bearer Access Token in the Authorization HTTP header. To obtain an API Access Token and get started building out your integration, reference the guide Risk Cloud API: Authentication.
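For instance, exchanging a client key and secret key for an Access Token uses the POST /api/v1/account/token endpoint with HTTP Basic credentials. The sketch below builds (but does not send) that request; the credential values are placeholders, and the full grant details are covered in Risk Cloud API: Authentication:

```python
import base64
from urllib.request import Request

# Placeholder credentials standing in for your client key and secret key.
client_key, secret_key = "CLIENT_KEY", "SECRET_KEY"
credentials = base64.b64encode(f"{client_key}:{secret_key}".encode()).decode()

# Token request carrying the client credentials as HTTP Basic auth.
req = Request(
    "https://your-company.logicgate.com/api/v1/account/token",
    headers={"Authorization": f"Basic {credentials}"},
    method="POST",
)
```
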

Postman

Build and refine your custom integration with our user-friendly Risk Cloud API Postman Workspace, which you can import to your Postman setup via the button below.

For more Postman setup information, reference our guide Risk Cloud API: Postman.

Pagination

The Risk Cloud API contains a variety of endpoints that may return a substantial amount of listed data. These endpoints utilize a style of offset pagination to provide a flexible and consumable means of processing Risk Cloud data. To learn more about pagination in the Risk Cloud API, reference the guide Risk Cloud API: Pagination.

Rate Limit

To ensure optimal and efficient performance, we recommend limiting requests to the Risk Cloud API to 10 per second.
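A simple client-side throttle can keep an integration under that limit. This is a minimal sketch, not an official client feature:

```python
import time

class Throttle:
    """Client-side limiter: allow at most `rate` calls per second."""

    def __init__(self, rate=10):
        self.interval = 1.0 / rate
        self.next_allowed = 0.0

    def wait(self):
        """Block until the next call is allowed, then reserve a slot."""
        now = time.monotonic()
        if now < self.next_allowed:
            time.sleep(self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval

# Call throttle.wait() immediately before each API request.
throttle = Throttle(rate=10)
```
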

Getting Data

From exporting to data lakes to fine-tuning data for existing dashboard tools, the Risk Cloud API provides a flexible means of exporting data from your Risk Cloud environment. Linked below are guides covering common use cases for exporting Risk Cloud environment data. For all available endpoints, feel free to explore our full API documentation.

Modifying Data

The Risk Cloud API can also perform actions in your Risk Cloud environment such as creating records and users or updating fields and attachments on records. To learn more about modifying data in your Risk Cloud environment via the Risk Cloud API, reference the linked guides below. For all available endpoints, feel free to explore our full API documentation.

Webhooks

In addition to the Risk Cloud API, there are also Risk Cloud Webhooks, which allow you to enhance your custom integrations by sending Risk Cloud automation event data to your external systems. To learn more, check out our guide Risk Cloud Webhooks.

Risk Cloud API: Pagination

The Risk Cloud API contains a variety of endpoints that may return a substantial amount of listed data. These endpoints utilize a style of offset pagination to provide a means of processing the data in smaller portions.

Page Requests

Risk Cloud API endpoints that support Pagination accept two optional query parameters to indicate what portion of data to return.

  • page - an integer representing the zero-indexed page number (must not be less than 0, defaults to 0)
  • size - an integer representing the size of the page and maximum number of items to be returned (must not be less than 1, defaults to 20)

These query parameters function similarly to pages in the Risk Cloud UI, where page is the page number value, albeit zero-indexed, and size is the Results per page value.

Example

The Field Read All endpoint of GET /api/v1/fields utilizes Pagination. If there are 50 active Fields (numbered 1-50) in a Risk Cloud environment, then the following query parameters will return the following Fields.

Page | Size | Request | Fields
None (default 0) | None (default 20) | GET /api/v1/fields | 1-20
0 | 20 | GET /api/v1/fields?page=0&size=20 | 1-20
1 | 20 | GET /api/v1/fields?page=1&size=20 | 21-40
2 | 20 | GET /api/v1/fields?page=2&size=20 | 41-50
0 | 8 | GET /api/v1/fields?page=0&size=8 | 1-8
1 | 8 | GET /api/v1/fields?page=1&size=8 | 9-16

Page Responses

When a Risk Cloud API endpoint returns a Page, the response body contains a variety of properties. 

Property | Type | Description
content | array | A list of the returned items
number | integer | The zero-indexed page number
size | integer | The size of the page and maximum number of items to be returned
totalElements | integer | The total number of items available
totalPages | integer | The total number of pages available based on the size
first | boolean | Whether the current page is the first one
last | boolean | Whether the current page is the last one
empty | boolean | Whether the current page is empty
numberOfElements | integer | The number of items currently on this page
sort | object | The sorting parameters for the page
sort.empty | boolean | Whether the sorting parameters are empty
sort.sorted | boolean | Whether the page items are sorted
sort.unsorted | boolean | Whether the page items are not sorted

Page Processing

Depending on the integration, there are multiple strategies for processing data from a Risk Cloud API endpoint that supports Pagination.

  • Bulk
  • Iteration

Bulk

The Bulk strategy involves sending a single request to obtain a bulk result. This is accomplished by providing a large value for the size query parameter. The size value should be large enough to surpass the expected maximum amount of possible returned items. An example would be: GET /api/v1/fields?size=1000

The items can then be obtained from the content property of the response.

Pseudocode Example

CALL GetFields with size as 1000 RETURNING response 
SET items to response.content

Iteration

The Iteration strategy involves sending multiple requests and assembling a result. This can be accomplished in multiple ways, including the following.

  • Incrementing the page number until a response where last is true is received
  • Incrementing the page number until it reaches the value of the totalPages response property

Pseudocode Example

SET items to [] 
SET index to 0 
REPEAT 
  CALL GetFields with page as index RETURNING response 
  APPEND response.content to items 
  INCREMENT index 
UNTIL response.last = true
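In Python, the Iteration pseudocode above might look like the following; the fake_fetch stub stands in for a real paginated endpoint call:

```python
def get_all(fetch_page, size=20):
    """Iteration strategy: keep requesting pages until `last` is true.

    `fetch_page(page, size)` should return a Page response dict with at
    least `content` (list) and `last` (bool), as described above.
    """
    items, page = [], 0
    while True:
        response = fetch_page(page, size)
        items.extend(response["content"])
        if response["last"]:
            return items
        page += 1

# Demo with a stubbed endpoint serving 50 items in pages of 20.
DATA = list(range(1, 51))

def fake_fetch(page, size):
    chunk = DATA[page * size:(page + 1) * size]
    return {"content": chunk, "last": (page + 1) * size >= len(DATA)}

all_items = get_all(fake_fetch)  # all 50 items, assembled from 3 requests
```
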

 

Risk Cloud API: Record Search

The Risk Cloud API contains the Record Search endpoint GET /api/v1/records/search to provide a means of searching and filtering Records based on various parameters.

Endpoint

The Record Search endpoint GET /api/v1/records/search is a Paginated endpoint that returns a Page of Records for a given page and size. Feel free to reference Risk Cloud API: Pagination for more information on how Paginated endpoints function in the Risk Cloud API.

While page and size are optional query parameters for some Paginated endpoints, they are required query parameters for the Record Search endpoint.

The response payload of the Record Search endpoint can be found in our API documentation.

Request

Records by Workflow

To filter the Record Search to only return a Page of Records from a specific Workflow, add the workflow query parameter to the Record Search request.

  • workflow: the unique ID of a Workflow, filtering the Record Search to only return
    Records from the specified Workflow 

To obtain a Workflow ID, reference Risk Cloud API: View Applications, Workflows, and Steps

Note: page and size query parameters are required for the Record Search endpoint

GET /api/v1/records/search?workflow={workflowId}&page=0&size=20

Linked Child Records

To filter the Record Search to return a Page of Linked Records, add the following query parameters to the Record Search request.

  • parent: the unique ID of the parent Record to seek linked child Records from
  • sourceWorkflow: the unique ID of the Workflow that the parent Record is from
  • workflow: the unique ID of the linked Workflow from which linked child Records are sought
  • mapped: whether the returned Records are linked to the parent Record or not

Note: page and size query parameters are required for the Record Search endpoint

GET /api/v1/records/search?page=0&size=20&parent={recordId}&sourceWorkflow={workflowId}&workflow={linkedWorkflowId}&mapped=true
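Assembled in code, the linked-child-records query might be built like this; the ID values are placeholders for your environment:

```python
from urllib.parse import urlencode

# Placeholder IDs; page and size are required for Record Search.
params = {
    "page": 0,
    "size": 20,
    "parent": "RECORD_ID",
    "sourceWorkflow": "WORKFLOW_ID",
    "workflow": "LINKED_WORKFLOW_ID",
    "mapped": "true",
}
url = ("https://your-company.logicgate.com/api/v1/records/search?"
       + urlencode(params))
```
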

Response

The Record Search endpoint returns a Page of Record objects, where the Records are contained in the content array of the Page. Each Record object in the Page’s content array is formatted as shown below.

Property | Type | Description
properties | array | A list of Custom Field and System Field properties
properties[].header | string | The name of the Custom Field or System Field
properties[].fieldType | enum | The Custom Field type for Custom Fields or null for System Fields
properties[].systemField | enum | The System Field type for System Fields or null for Custom Fields
properties[].recordId | string | The unique ID of the Record containing this property
properties[].url | string | The path extension to the Field, only on Record Names
properties[].rawValue | object / string / array | A single Value object, a list of Value objects, or a string representation, depending on the type of Field
properties[].formattedValue | string | The string representation of the Value or Values
record | object | The returned Record
record.id | string | The unique ID of the Record
record.depth | integer | The depth of the Record
record.name | string | The name of the Record
record.dueDate | long | The Due Date of the Record measured in milliseconds since the Unix epoch
record.user | boolean | Whether the Record has an assignee
record.canEdit | boolean | Whether the current User is allowed to edit this Record
record.canRead | boolean | Whether the current User is allowed to read this Record
record.step | Step | The current Step of the Record
record.workflow | Workflow | The Workflow of the Record
record.application | Application | The Application of the Record
record.jiraKey | string | The Jira Key of the Record, if applicable
record.stepId | string | The ID of the current Step of the Record
record.stepEnd | boolean | Whether the current Step of the Record is an End Step

 

Tidying Up Existing REST APIs

Originally posted on Nordic APIs

What if, one morning, you discover that every internal REST API endpoint of your web application is suddenly displayed as-is in your public REST API documentation? Your Developer Portal is overflowing with messages from eager API users struggling to make integrations with the exciting new functionality the endpoints provide.

  • “Is the name property required on this GET request?”
  • “What is the request body supposed to look like to create a new Blog object?”
  • “I tried to update a User, and now I’m seeing null pointer exceptions everywhere!”

On top of an overflowing portal, not only are the newly posted internal endpoints causing confusion but regressions are being discovered in preexisting public API endpoints too! Whether this scenario feels like a distant bad dream or resonates a little too close to reality, as time and development tickets go by, the quality and conciseness of some existing API endpoints may slowly decline.

From older public endpoints to internal endpoints that may become public, how can you tidy up existing REST API endpoints for public usage? Let’s get tidying!

Strategies for Tidying REST APIs

Rescope the Data

Request and response data can often be closely tied to internal database resources. It can be tempting to include all properties that are available on a resource in the API to support more integration possibilities. However, some resource properties may not be relevant to an API user.

Data Transfer Objects (DTOs), which provide a decoupled representation of your database resources, are particularly useful for making more concise request and response payloads for REST API endpoints. In addition to conciseness, DTOs also improve maintainability and flexibility, allowing for database and service level resources to be updated independently from their corresponding API representations.

DTOs and Database Resources

Using a User resource as an example, a JSON representation of a User database resource may contain the following properties.

json
{
  "id": "string",
  "email": "string",
  "password": "string",
  "roleId": "string",
  "companyId": "string",
  "firstName": "string",
  "lastName": "string",
  "loginAttempts": 0
}

A JSON representation of a User DTO could contain a scoped-down, API-friendly representation of the data, as shown below.

json
{
  "id": "string",
  "email": "string",
  "firstName": "string",
  "lastName": "string"
}
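As a minimal sketch of this rescoping, a DTO mapping can be as simple as an allowlist of API-facing properties. The User shape and property names here mirror the example above; the helper function itself is illustrative:

```python
# Allowlist of properties exposed through the API.
USER_DTO_FIELDS = ("id", "email", "firstName", "lastName")

def to_user_dto(user: dict) -> dict:
    """Project a full User database record down to its API DTO."""
    return {key: user[key] for key in USER_DTO_FIELDS}

db_user = {
    "id": "u-1", "email": "ada@example.com", "password": "hash",
    "roleId": "r-1", "companyId": "c-1",
    "firstName": "Ada", "lastName": "Lovelace", "loginAttempts": 3,
}
dto = to_user_dto(db_user)  # internal fields like password are dropped
```

Centralizing the allowlist also makes later additions (by popular demand) a one-line change.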

What Properties Should Be Included in the DTO?

For a given resource (e.g., a User), consider the following process for crafting a DTO representation:

  • Begin with an empty DTO.
  • Consider each property and relationship of the resource individually (e.g., User.email, User.loginAttempts, etc.).
  • Reflect on the value of including the property or relationship in the API.
  • User.email is high value in an API endpoint for both identifying the user or creating an integration to email the user.
  • User.loginAttempts may only be relevant to the internal web application and omitting it from the API may make the endpoint more concise.

It can be difficult to decide to omit an available property from a resource’s DTO representation in an API. However, as API users build out integrations, it’s less complicated to add a property to an API endpoint by popular demand rather than having to risk breaking backward compatibility by removing a potentially unused existing property.

If introducing a DTO on an existing API endpoint’s request or response would break API compatibility, consider creating a separate endpoint for the DTO implementation and coordinating a migration or deprecation strategy with API users.

Observe the UI

A single front-end change that works with what is available can be more valuable to a team in the short term than multiple changes across the stack, saving time and precious story points. However, over time, this can cause the alignment between the front-end and back-end to decline, which could call for a reassessment of the existing API endpoint.

For example, a radio button component with three options in a UI may be represented by three corresponding boolean properties in the API, where each option was added individually over time in separate code contributions. However, after taking a look at the current state of the functionality, the radio button component as a whole may be better represented in the API via a single enum property with values for each option.
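A hypothetical sketch of that consolidation, with invented names (Priority, isLow, isMedium, isHigh) standing in for the three accumulated booleans:

```python
from enum import Enum

# Before: three independent booleans accumulated over time.
legacy_payload = {"isLow": False, "isMedium": True, "isHigh": False}

# After: a single enum property representing the radio button as a whole.
class Priority(Enum):
    LOW = "LOW"
    MEDIUM = "MEDIUM"
    HIGH = "HIGH"

def to_enum(payload: dict) -> Priority:
    """Collapse the mutually exclusive booleans into one enum value."""
    for member, flag in (("LOW", "isLow"), ("MEDIUM", "isMedium"), ("HIGH", "isHigh")):
        if payload.get(flag):
            return Priority[member]
    raise ValueError("no option selected")
```

The enum makes the mutual exclusivity explicit, which the three booleans only implied.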

If your web app has a user interface, observe how an existing endpoint is used in the frontend:

  • Are there unused properties on the endpoint?
  • Is data being transformed in the frontend to accommodate the endpoint, where the endpoint could be modified itself?
  • Are other endpoints being referenced where a concise, composite endpoint that supports the same functionality would be better suited?

Once these questions have been addressed, consider updating the API endpoint accordingly to align it closer to how it’s currently being used.

Reference a Guideline

If you have existing API guidelines for your public endpoints, dust them off! If you don’t have API guidelines, consider modeling existing API guidelines (e.g., Zalando, Microsoft, Google) or creating your own from API best practices.

Some examples of API guidelines to improve the consistency and clarity of an API could include:

  • Are the API documentation descriptions concise, accurate, and relevant?
  • Are URL paths aligned (e.g., camelCase vs. kebab-case)?
  • When should query parameters be used vs. a request body?

Once you have API guidelines in place, pass through your API and capture any notable deviations in some API maintenance tickets. With defined API guidelines, there is also an opportunity to integrate the guidelines into code review automation to ensure that the guidelines are preserved going forward.

Notate Required-Ness

As API endpoints may expand over time, identifying what request body properties or query parameters are actually required can become daunting. It can be incredibly valuable to take a second look at an existing endpoint, test it, and even dig into the underlying code to determine what is truly required. Once the required properties on an endpoint have been identified, ensure that the properties are noted as being required in the API documentation as well.

Break It Down

Some endpoints can carry a lot of responsibility, perhaps even snowballing in scope over time. In particular, endpoints that update resources can have large request body payloads containing multiple related objects, making it difficult to break down and simplify the endpoint.

While CRUD (Create, Read, Update, Delete) does not necessarily match the HTTP methods of REST 1-to-1, the CRUD methodology does provide a widely adopted and straightforward framework for breaking down a resource’s endpoint functionality into a handful of more concise endpoints.

Let’s use the example of an update endpoint for a User resource that has a Blog relationship resource in the request payload.

json
{
  "email": "string",
  "firstName": "string",
  "lastName": "string",
  "blogs": [
    {
      "id": "string",
      "title": "string",
      "content": "string"
    }
  ]
}

The existing endpoint allows an API user to update a User while also creating or updating an attached Blog.

  • Consider the unique resources in the endpoint’s request.
  • If each resource had its own Create, Read, Update, or Delete API endpoints, how could similar functionality be achieved (albeit with more requests)?
  • Could an existing object be represented as an ID instead of the full object?

After answering these questions, a decision could be made to:

  • Remove the Blog from the User update endpoint.
  • Write new endpoints for updating or creating a Blog that accept the User.id to establish the relationship.
  • Achieve similar functionality through the new, more concise endpoints.

The new create or update endpoints for a Blog could then have a payload similar to the following.

json
{
  "title": "string",
  "content": "string",
  "userId": "string"
}

Additionally, it may be valuable to include usage documentation to accompany the new endpoint flow. While there is a case to be made that multiple endpoints could be expensive for paid APIs or less performant, the introduction of new concise endpoints can additionally provide more flexibility to your API and potential integrations.

Next Steps

As development moves forward and edge cases arise, it can be worth considering these tips when refactoring or reviewing API changes.

  • Rescope the data: Evaluate the necessity of resource properties.
  • Observe the UI: Leverage the current UI to inform API decisions.
  • Reference a guideline: Align your API and adopt best practices with a guide.
  • Notate required-ness: Ensure that optional and required properties are up-to-date.
  • Break it down: Evaluate how endpoint functionality could be replicated with scoped-down endpoints.

REST API maintenance is a continuous process. When there is routine attention to the accuracy, relevance, and clarity of existing API endpoints, API users and developers alike can be more confident in the use cases and integrations they create and support.

Risk Cloud API: Authentication

How to use Risk Cloud's API to create or retrieve an API Access Token

The Risk Cloud API uses OAuth 2.0 for authentication, which uses a bearer token in the Authorization HTTP header. To start using the API, first retrieve your Client and Secret keys from the Profile page, which you can reach by clicking the Person icon in the top right corner and then the Profile button.

In the Profile page, go to the "Access Key" tab. If this tab is not there, please contact your Risk Cloud administrator as you may not have API privileges.

In the "Access Key" tab you will see both Client and Secret keys. These are both necessary to generate an access key or retrieve an existing access key.

Note: this panel can also generate the Access Key on its own.

Once you have both the Client and Secret keys, base64 encode them joined by a colon: {CLIENT}:{SECRET}

Once they are encoded, take your encoded string and place it in the authorization header as Authorization: Basic {ENCODED}
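The encoding step can be sketched in a few lines of stdlib Python (the CLIENT and SECRET values are placeholders for your actual keys):

```python
import base64

def basic_auth_header(client: str, secret: str) -> dict:
    """Join the keys with a colon and base64 encode: {CLIENT}:{SECRET}."""
    encoded = base64.b64encode(f"{client}:{secret}".encode()).decode()
    return {"Authorization": f"Basic {encoded}"}

headers = basic_auth_header("CLIENT", "SECRET")
```

These headers are then sent with the token request shown below.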

Once a POST request is sent to the following endpoint with the correct Authorization header, a JSON response will be returned with the following structure:

POST/api/v1/account/token

Response:

{
    "access_token": "KEY_HERE",
    "token_type": "bearer",
    "expires_in": 31532918,
    "scope": "read write"
}

The returned access token can then be used in the Authorization header to interact with Risk Cloud's API:

Authorization: Bearer {ACCESS_TOKEN}
Risk Cloud API: Create Records

How to create a Record, assign values to Fields, and submit a Record using common Risk Cloud endpoints.

We will start off by assuming an Application and Workflow have been created in Risk Cloud using the Build tools. In this example, we have created an “Onboarding” Application with a Workflow called “Employee". This Workflow has three Steps: “Add Employee”, “Manager Meeting”, and “Active Employee.”

Since the Origin Step in this Workflow is “Add Employee,” we will be using the Risk Cloud API to create a Record in this Step of our Workflow. When our new Record is created in “Add Employee”, we would also like Fields in this Step, such as “Employee Name” and “Job Type”, to be populated with values.

Now that we have our Workflow set up, we can interact with the Risk Cloud API to create a Record in “Add Employee”, populate these Fields, and submit the Record to “Manager Meeting”.

To create a Record, we need to start with a POST request with the proper JSON body. The JSON body requires three JSON objects: “step”, “workflow”, and “currentValueMaps”. We will construct our JSON body one object at a time.

Obtaining proper API authentication

Prior to any interaction with Risk Cloud's APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.

Step

First, we need a Step object with the Step ID key-value pair. The Step ID can be pulled from the browser’s URL and should look like this:

https://your-company.logicgate.com/build/steps/{STEP_ID}

We will take this value and input it into our JSON body. Our JSON now looks like the following:

{
  "step": {
    "id": "STEP_ID"
  }
}

Workflow

Next, we need to fetch the Workflow’s ID.

Type: GET

https://your-company.logicgate.com/api/v1/workflows/step/{STEP_ID}

We will take the “id” value as the WORKFLOW_ID and use it to fetch all the Fields in the Workflow using the following endpoint.

Current Value Maps

Risk Cloud uses currentValueMaps to map values to the proper Fields. Let us create our currentValueMaps object for the first input text value, “Employee Name.”

Type: GET

https://your-company.logicgate.com/api/v1/fields/workflow/{WORKFLOW_ID}/values

Now we must parse through the array for the Fields we need and use the ID for our currentValueMap object.

So far our object should look like:

{
  "field": {
    "id": "TEXT_FIELD_ID",
    "fieldType": "TEXT"
  }
}

We will now need to input the values we want to set for this Field. In the Risk Cloud API, these value inputs are referred to as currentValues. For non-discrete values (such as text and numeric values) we only need to set the textValue of the currentValue. Our object now looks like the following:

{
  "currentValues": [
    {
      "textValue": "John Doe",
      "discriminator": "Common"
    }
  ],
  "field": {
    "id": "TEXT_FIELD_ID",
    "fieldType": "TEXT"
  }
}

Let us similarly set the “Job Type” Field, a discrete-value Select Field. When we fetched the list of Fields above, each Field object had a key called currentValues. These are the value inputs to this Field. For the Select Field (and all other discrete field types) the values in this array are the selectable values for this Field. Those values for this situation are 'Account Executive', 'Developer', and 'Customer Success Manager'.

We will set the value for the Job Type Field to be Developer. Our JSON object should look like the following now:

{
  "currentValues": [
    {
      "id": "SELECTED_CURRENT_VALUE_ID",
      "textValue": "Developer",
      "discriminator": "Common"
    }
  ],
  "field": {
    "id": "SELECT_FIELD_ID",
    "fieldType": "SELECT"
  }
}

Let us put everything together! We should get the following JSON object that is ready to create a new Record with Field inputs.

{
  "step": {
    "id": "STEP_ID"
  },
  "currentValueMaps": [
    {
      "currentValues": [
        {
          "textValue": "John Doe",
          "discriminator": "Common"
        }
      ],
      "field": {
        "id": "TEXT_FIELD_ID",
        "fieldType": "TEXT"
      }
    },
    {
      "currentValues": [
        {
          "id": "SELECTED_CURRENT_VALUE_ID",
          "textValue": "Developer",
          "discriminator": "Common"
        }
      ],
      "field": {
        "id": "SELECT_FIELD_ID",
        "fieldType": "SELECT"
      }
    }
  ]
}
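As a sketch, the body above can also be assembled programmatically from the IDs gathered in the previous steps. The value_map helper is illustrative, and all IDs remain placeholders:

```python
def value_map(field_id: str, field_type: str, values: list) -> dict:
    """Build one entry of the currentValueMaps array."""
    return {
        "currentValues": values,
        "field": {"id": field_id, "fieldType": field_type},
    }

body = {
    "step": {"id": "STEP_ID"},
    "currentValueMaps": [
        value_map("TEXT_FIELD_ID", "TEXT",
                  [{"textValue": "John Doe", "discriminator": "Common"}]),
        value_map("SELECT_FIELD_ID", "SELECT",
                  [{"id": "SELECTED_CURRENT_VALUE_ID",
                    "textValue": "Developer",
                    "discriminator": "Common"}]),
    ],
}
```

Note that discrete values (the Select Field) carry the ID of the chosen value, while non-discrete values (the Text Field) only need a textValue.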

Now we can submit this Record with the following endpoint

Type: POST

https://your-company.logicgate.com/api/v1/records/public

Body

Use the complete JSON object assembled above as the request body.

From this we get a response object with information about the created Record and its submission, including the Record ID, the Record’s current Step, and its creation date. For users with access, the Record will now appear on the Home Screen, ready for “Manager Meeting.”

Risk Cloud API: Export Record Data

This is a step-by-step guide to exporting Records and their Field data as a CSV or XLSX file using common Risk Cloud endpoints.

In order to properly export Records and their Field data, we first need to gather the Layout ID, Application ID, and Workflow ID. Then, we will construct a JSON body with this information to make a proper POST request for exporting Records.

Obtain Proper API Authentication

Prior to any interaction with Risk Cloud's APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.

Layout

The Layout ID can be obtained by either looking for the ID in the URL when in the Layout’s edit modal or by using the following endpoint.

Type: GET

https://your-company.logicgate.com/api/v1/layouts

This will return a list of all Layouts. Now, parse this array of Layouts until you find your Layout, and place the Layout ID into your JSON object:

{
 "layout": "LAYOUT_ID"
}

Application and Workflow

The Application ID can be found using the following endpoint:

Type: GET

https://your-company.logicgate.com/api/v1/applications/workflows

This will return a list of all active Applications with their Workflows. Similarly to Layout, parse this array until you find your Application and Workflow and add this Application ID and Workflow ID into your JSON object. The JSON object should look like this:

{
 "layout": "LAYOUT_ID",
 "applications": ["APPLICATION_ID"],
 "workflow": "WORKFLOW_ID"
}

Note: The key for Applications is the plural "applications" and is an array of string IDs. Additionally, to export all Records in one Application, across all Workflows in that Application, use a Global Layout and do not specify a Workflow in your JSON body.

Statuses and Steps

With our current JSON body, we will be exporting all Records in the Workflow. What if we wanted to be more granular with our Record selection? Good news!

The next keys in our JSON object, “statuses” and “step”, are optional. The “statuses” key allows us to filter to Records with one of the following statuses: INACTIVE, NOT_ASSIGNED, ASSIGNED, IN_PROGRESS, COMPLETE. The Step ID can be pulled from the browser’s URL. If you decide to use one of these statuses and specify a Step, your JSON object would look like this:

{
 "layout": "LAYOUT_ID",
 "applications": ["APPLICATION_ID"],
 "workflow": "WORKFLOW_ID",
 "statuses": ["IN_PROGRESS"],
 "step": "STEP_ID"
}

Retrieve Records

Now you can use the JSON object above as the body of the request to retrieve the Field information and values for your selected Layout, Application, Workflow, Steps, and Record statuses.

Type: POST

https://your-company.logicgate.com/api/v1/records/export/csv

Note: To export as an XLSX document, change “csv” to “xlsx” in the request URL.
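Putting the pieces together, the export request can be sketched with only the Python standard library. The endpoint and body mirror this article; the access token is a placeholder:

```python
import json
import urllib.request

body = {
    "layout": "LAYOUT_ID",
    "applications": ["APPLICATION_ID"],
    "workflow": "WORKFLOW_ID",
}

req = urllib.request.Request(
    "https://your-company.logicgate.com/api/v1/records/export/csv",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": "Bearer ACCESS_TOKEN",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would then stream back the CSV file.
```

Swapping "csv" for "xlsx" in the URL yields the XLSX export, as noted above.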

Risk Cloud API: Update Records

How to update the value of a Field in a specific Record.

When you submit a Record in Risk Cloud, all of the Field values you have selected or input are saved on that Record. In this article, we will learn how to update a specific Field's value in a specific Record using the Risk Cloud API. In this example, we will cover how to update a Select Field. The API requests and responses seen in this article will differ slightly based on the Field type being updated.

Updating a Field on a Record

Within a Step, we have a Field named "Severity." Severity has selectable values of "Low," "Medium," and "High."

Let's assume that you have created a Record and selected a severity of "Medium," but would like to change that to "High." We are able to do this with some requests to the Risk Cloud API.

First, we must obtain the values already on the Record, which can be done via the following GET request.

Note: The "record_id" to use in your GET request will be the unique string of numbers and letters in the record URL. In our case, the URL of the record we would like to update is https://your-company.logicgate.com/records/srAIdk3c. The "record_id" we will use is "srAIdk3c."

Obtaining proper API authentication

Prior to any interaction with Risk Cloud’s APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.

Response:

{
    "srAIdk3c": {
        "id": "k9IYrkst",
        "active": true,
        "created": 1554232752610,
        "updated": 1554233219059,
        "step": null,
        "user": null,
        "currentValues": [
            {
                "discriminator": "Common",
                "id": "ziJtKBiZ",
                "active": true,
                "created": 1554232752610,
                "updated": null,
                "valueType": "Common",
                "textValue": "Medium",
                "numericValue": 1,
                "isDefault": false,
                "archived": false,
                "priority": 2,
                "empty": false,
                "default": false,
                "fieldId": null
            }
        ],
        "field": {
            "fieldType": "SELECT",
            "id": "srAIdk3c",
            "active": true,
            "created": 1554228320342,
            "updated": 1554232752610,
            "name": "Severity",
            "label": "Severity Level",
            "tooltip": null
        },
        "record": null,
        "node": null,
        "expressionResult": 2,
        "assignment": null
    }
}

The response is a key-value pair, where the key is the ID of the Field and the value is its value map. The most important part of this response is the currentValues array. The object inside this array is the currently selected value, and it is what we need to update.

Because we are updating a "select" field, we should first understand what all of our options are! We can do this by submitting a GET request to the "field" endpoint. You can find your field_id in the response above within the "field" object, or by calling the fields/workflow/WORKFLOW_ID endpoint.

Response:

{
    "fieldType": "SELECT",
    "id": "srAIdk3c",
    "name": "Severity",
    "label": "Severity Level",
    "tooltip": null,
    "currentValues": [
        {
            "discriminator": "Common",
            "id": "IXxbj7uk",
            "valueType": "Common",
            "textValue": "Low",
            "numericValue": 1,
            "isDefault": false,
            "archived": false,
            "priority": 3,
            "empty": false,
            "default": false,
            "fieldId": "srAIdk3c"
        },
        {
            "discriminator": "Common",
            "id": "ziJtKBiZ",
            "valueType": "Common",
            "textValue": "Medium",
            "numericValue": 1,
            "isDefault": false,
            "archived": false,
            "priority": 2,
            "empty": false,
            "default": false,
            "fieldId": "srAIdk3c"
        },
        {
            "discriminator": "Common",
            "id": "fwy1ntpD",
            "valueType": "Common",
            "textValue": "High",
            "numericValue": 1,
            "isDefault": false,
            "archived": false,
            "priority": 1,
            "empty": false,
            "default": false,
            "fieldId": "srAIdk3c"
        }
    ],
...
}

The currentValues array contains all of the selectable options for Severity select field in the form of objects. We can choose any object in the array to be our new selected value, and for this example we will be choosing the value of "High."
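Selecting the replacement object from the currentValues array can be sketched as below. The value objects are trimmed to id and textValue for brevity; the real objects carry the additional properties shown in the response above:

```python
# Trimmed selectable options for the "Severity" Select Field.
field_current_values = [
    {"id": "IXxbj7uk", "textValue": "Low"},
    {"id": "ziJtKBiZ", "textValue": "Medium"},
    {"id": "fwy1ntpD", "textValue": "High"},
]

def pick_value(options: list, text: str) -> dict:
    """Find the value object whose textValue matches the desired option."""
    for option in options:
        if option["textValue"] == text:
            return option
    raise LookupError(f"no selectable value named {text!r}")

new_value = pick_value(field_current_values, "High")
```

The full object returned here (not just its ID) replaces the entry in the currentValues array of the POST request below.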

In the following POST request, use the "record_id" for the specific record that you want to update.

POST/api/v1/valueMaps?record=RECORD_ID

Request:

{
        "id": "k9IYrkst",
        "active": true,
        "created": 1554232752610,
        "updated": 1554233219059,
        "step": null,
        "user": null,
        "currentValues": [
        {
            "discriminator": "Common",
            "id": "fwy1ntpD",
            "valueType": "Common",
            "textValue": "High",
            "numericValue": 1,
            "isDefault": false,
            "archived": false,
            "priority": 1,
            "empty": false,
            "default": false,
            "fieldId": "srAIdk3c"
        }
        ],
        "field": {
            "fieldType": "SELECT",
            "id": "srAIdk3c",
            "active": true,
            "created": 1554228320342,
            "updated": 1554244721428,
            "name": "Severity",
            "label": "Severity Level",
            "tooltip": null
        },
        "record": null,
        "node": null,
        "expressionResult": 2,
        "assignment": null
}

Notice that we have replaced the object in the currentValues array with the value object for "High." This updates the selected value from our original value of "Medium" to our new desired value of "High."

Response:

{
    "id": "k9IYrkst",
    "currentValues": [
        {
            "discriminator": "Common",
            "id": "fwy1ntpD",
            "valueType": "Common",
            "textValue": "High",
            "numericValue": 1,
            "isDefault": false,
            "archived": false,
            "priority": 1,
            "empty": false,
            "default": false,
            "fieldId": null
        }
    ],
    "field": {
        "fieldType": "SELECT",
        "id": "srAIdk3c",
        "name": "Severity",
        "label": "Severity Level",
        "tooltip": null
    },
    "node": {
        "stepType": "End",
        "id": "lkePaPYj"
    },
    "expressionResult": 10
}

We can see the severity level has been updated to "High."

For more information about the Risk Cloud API you can read our Developer Center.

Risk Cloud API: Upload Attachments

This article will describe how to add an attachment to a Risk Cloud Record using our API.

Within Risk Cloud, you are able to add “Attachment” Fields to your Records. These Fields allow you, perhaps very obviously, to attach files. Customers use these Fields in order to upload evidence, add documents for employee attestation, and many additional use cases.

In this article, we will walk through three steps needed to attach a document using Risk Cloud API:

  1. Obtain the FIELD_ID where you would like to upload an attachment

  2. Upload a file via a POST /api/v1/attachments?field={FIELD_ID} request

  3. Attach the file to your specific record via a POST /api/v1/valueMaps?record={RECORD_ID} request

Obtaining proper API Authentication

Prior to any interaction with Risk Cloud’s APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.

Step 1: Obtain the FIELD_ID

In the first step, we will be running a series of requests in order to determine the FIELD_ID where we would like to upload our attachment. If you already know your FIELD_ID from obtaining it in the Field Edit menu of your Risk Cloud environment, you may continue to Step 2.

First, we need to determine the WORKFLOW_ID of the workflow that contains our field. To do this, you can send the following GET request:

This will return an array of workflow objects, each looking like this:

{
        "id": "WORKFLOW_ID",
        "name": "WORKFLOW NAME",
        "recordPrefix": null,
        "allowGroups": false,
        "requireGroups": false,
        "xpos": 177,
        "ypos": 156,
        "priority": 0,
        "sla": {
            "enabled": false,
            "duration": 0
        },
        "steps": [
            {
                "stepType": "Origin",
                "id": "xt2X0dSM",
                "name": "Default Origin",
                "priority": 1,
                "allowEntitlements": true,
                "xpos": 55,
                "ypos": 55,
                "isPublic": false,
                "sla": {
                    "enabled": false,
                    "duration": 0
                },
                "chain": false,
                "origin": true,
                "end": false
            },
            {
                "stepType": "End",
                "id": "Y5B1k7yq",
                "name": "Default End",
                "priority": 2,
                "allowEntitlements": true,
                "xpos": 200,
                "ypos": 55,
                "isPublic": false,
                "sla": {
                    "enabled": false,
                    "duration": 0
                },
                "chain": false,
                "origin": false,
                "end": true
            }
        ]
    }

After identifying the Workflow that contains the Field you would like to add an attachment to, you can take the “id” from this object as your WORKFLOW_ID.

Now that we have our WORKFLOW_ID, we can send a request to find the specific Field where we want to add an attachment. To do this, we will send the following GET request (the same endpoint used earlier in this guide to list a Workflow's Fields):

Type: GET

https://your-company.logicgate.com/api/v1/fields/workflow/{WORKFLOW_ID}/values

This request will return an array of field objects, similar to this object:

{
        "fieldType": "TEXT_AREA",
        "id": "FIELD ID",
        "name": "text1",
        "label": "text1",
        "tooltip": null,
        "currentValues": [],
        "operators": [
            "NULL",
            "NOT_NULL",
            "EQUALS",
            "NOT_EQUALS",
            "CONTAINS",
            "DOES_NOT_CONTAIN"
        ],
        "convertibleTo": [
            "TEXT"
        ],
        "pattern": null,
        "message": null,
        "hasHtml": false,
        "valueType": "Common",
        "validTypeForCalculationInput": false,
        "discrete": false,
        "global": false
    }

Once you identify the Field where you would like to add an attachment, you can take the “id” value as your FIELD_ID for the subsequent steps.

Step 2: Upload the file

In this step, we will use the FIELD_ID found in step one to upload our attachment.

The file can be sent in the request using the multipart/form-data content type with a key named file and a value of the attachment file (often represented by HTTP request libraries or tools as the path to the file).

A cURL sample is demonstrated below:

curl --location 'https://your-company.logicgate.com/api/v1/attachments?field={FIELD_ID}' \
--header 'Authorization: Bearer {API_TOKEN}' \
--form 'file=@"/the/path/to/attachment.pdf"'

Once you have built this body, you can send it using the following POST request:

POST/api/v1/attachments?field={FIELD_ID}

The response should look like this:

{
    "attachmentStatus": "CLEAN",
    "id": "QoZy9k73",
    "valueType": "Attachment",
    "discriminator": "CLEAN",
    "textValue": "FILE NAME",
    "numericValue": 1.0,
    "isDefault": false,
    "archived": false,
    "priority": 0,
    "contentType": "image/png",
    "fileSize": NUMBER,
    "fileExtension": "png",
    "originalFileExtension": "png",
    "awsS3Key": "S3 KEY",
    "versionCount": 1,
    "empty": false,
    "fieldId": "EbfvwDRi"
}
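
As a sketch, assuming the Step 2 response has been parsed into Python (a few illustrative keys are shown below, not the full response), the Step 3 body can be assembled by embedding that response in the currentValues array:

```python
import json

# Sketch: wrap the attachment value returned in Step 2 into the valueMaps
# request body for Step 3. `step2_response` stands in for the parsed JSON
# returned by POST /api/v1/attachments (only a few keys shown).
step2_response = {
    "id": "QoZy9k73",
    "valueType": "Attachment",
    "textValue": "FILE NAME",
    "fieldId": "EbfvwDRi",
}

def build_value_map_body(attachment_value, field_id):
    return {
        "currentValues": [attachment_value],
        "field": {
            "valueType": "Attachment",
            "fieldType": "ATTACHMENT",
            "id": field_id,
        },
    }

body = build_value_map_body(step2_response, step2_response["fieldId"])
print(json.dumps(body, indent=2))
```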

Step 3: Attach the file to the record

In this final step, we will compile the information from our previous two steps in order to attach our upload to the specific record that we are interested in. We will build our POST request’s body using the following structure:

{
  "currentValues": [
      # RESPONSE FROM STEP 2
  ],
  "field": {
    "valueType": "Attachment",
    "fieldType": "ATTACHMENT",
    "id": "FIELD_ID"
  }
}

Once you build the above body, send the following POST request:

POST/api/v1/valueMaps?record={RECORD_ID}

The response should look like this:

{
    "id": "uexgD8Ej",
    "currentValues": [
        {
            "id": "QoZy9k73",
            "valueType": "Attachment",
            "discriminator": "CLEAN",
            "textValue": "TEXT",
            "numericValue": 1.0,
            "isDefault": false,
            "archived": false,
            "priority": 0,
            "attachmentStatus": "CLEAN",
            "contentType": "image/png",
            "fileSize": 33517,
            "fileExtension": "png",
            "originalFileExtension": "png",
            "awsS3Key": "S3 KEY",
            "versionCount": 1,
            "empty": false,
            "fieldId": null
        }
    ],
    "field": {
        "fieldType": "ATTACHMENT",
        "id": "EbfvwDRi",
        "name": "attachment",
        "label": "attachment",
        "tooltip": null,
        "enableVersions": true,
        "validTypeForCalculationInput": false
    },
    "expressionResult": 1.0
}

After sending this final POST request, your attachment should be attached to your specified Record and Field.

For any additional questions, please reach out to [email protected]!

Risk Cloud API: View User Access Audits
A guide to the API endpoints that allow you to track user login attempts

This article details three endpoints for obtaining access logs: All Login Attempts, Successful Logins, and Login Failures. The results from these endpoints are accessible only to access keys belonging to users with the Admin > All module entitlement.

Login Attempts

Retrieve a log of login successes and failures for a Risk Cloud user, using their email.

Parameters

  • email: a valid user email (e.g. [email protected], or for Postman syntax admin%[email protected])
  • size: the size of the paged response
  • page: the number of the page in the response

Result

A paginated response of all login logs ordered from newest to oldest containing the following info:

  • Type: Login or LoginFail
  • Timestamp: the time of the login attempt
  • Message: details on the reason for a LoginFail; null for a Login
  • Remote Address: the remote IP address of the user
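
Since the email parameter must be URL-encoded (the %40 form mentioned above), the query string can be built with the standard library; the endpoint path itself is a placeholder below, as it is not given in this excerpt:

```python
from urllib.parse import urlencode

# Sketch: build the query string for a login-audit request. The path is a
# placeholder -- substitute the actual audit endpoint for your environment.
# urlencode percent-encodes '@' as '%40' automatically.
params = {"email": "admin@example.com", "size": 20, "page": 0}
query = urlencode(params)
url = f"https://your-company.logicgate.com/{{AUDIT_ENDPOINT}}?{query}"
print(query)
```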

Logins (Successes)

Retrieve a log of successful login attempts for all users.

Parameters

  • email: a valid user email (e.g. [email protected], or for Postman syntax admin%[email protected])
  • size: the size of the paged response
  • page: the number of the page in the response

Result

A paginated response of all login logs ordered from newest to oldest containing the following info:

  • Type: Login
  • Timestamp: the time of the successful login
  • Message: always null for a Login
  • Remote Address: the remote IP address of the user

Logins (Failures)

Retrieve a log of failed login attempts.

Parameters

  • email: a valid user email (e.g. [email protected], or for Postman syntax admin%[email protected])
  • size: the size of the paged response
  • page: the number of the page in the response

Result

A paginated response of all login logs ordered from newest to oldest containing the following info:

  • Type: LoginFail
  • Timestamp: the time of the failed attempt
  • Message: details on the reason for the LoginFail
  • Remote Address: the remote IP address of the user

Risk Cloud PowerBI Connection

How to set up PowerBI to pull data from a Risk Cloud Table Report automatically.

After loading/importing the Power BI Template file, LogicGate_EXAMPLE_Extract to PowerBI.pbit (reach out to [email protected] for the file), you will see a screen that looks like the below. You will need to enter (1) your OAuth 2.0 client; (2) your OAuth 2.0 secret; (3) your Risk Cloud environment URL; and (4) the Table Report ID you would like to extract data from.

To find the Client and Secret within Risk Cloud, navigate to your User Profile via the User icon at the top-right corner of your screen. There, switch to the Access Key tab and you will see your Client and Secret.

You can obtain your Table Report ID from the URL in your browser window after navigating to a Table Report, as shown below.

Note: The Table Report ID is the last eight characters of the URL. In the image above this is D7r2TCSR (it will be different for your Table Report).
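
If you are scripting around this, the ID can be sliced off the end of the URL (the URL below is illustrative):

```python
# Sketch: pull the Table Report ID (the last eight characters of the URL)
# from a Table Report URL. The URL below is illustrative.
url = "https://your-company.logicgate.com/table-reports/D7r2TCSR"
table_report_id = url.rstrip("/")[-8:]
print(table_report_id)  # D7r2TCSR
```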

Once you have all that information, you can then enter it into the Power BI template similar to the below:

From there, it will load all your table report data into a table in Power BI.

Lastly, you can use that information to build reports, add additional data sources from other internal systems, and more!

Risk Cloud Webhooks

Use Risk Cloud Webhooks to enhance your custom integrations by sending event data to your external systems, so they can detect Risk Cloud events and perform custom operations.

Make your custom integrations more responsive and integrated with Risk Cloud Webhooks. This feature gives you the ability to send event data from Risk Cloud to an external URL via an HTTP request when a triggering event occurs in Risk Cloud.

A Risk Cloud Webhook can be set up in the Risk Cloud as a type of Job Operation and can be triggered based on the following detected events in your Risk Cloud environment:

  • Record Due - fires when a record is approaching or past its due date
  • Record Reassigned - fires when a record is manually reassigned
  • Record Created - fires when a record is created
  • Record Moved - fires when a record is moved to a new step in a workflow
  • Fixed Scheduled - fires at a recurring or set time

Setting up Risk Cloud Webhooks can be accomplished in the following steps:

  1. Work with your relationship manager or customer success manager to enable Risk Cloud Webhooks in your environment. NOTE: Risk Cloud Webhooks may need to be added to your Risk Cloud subscription agreement.

  2. Configure the external webhook URLs that you would like to send data to.

  3. Create jobs in Risk Cloud with your desired triggering event and use the new webhook operation to send data to your specified URL.

Configuring Webhook URLs

Once Risk Cloud Webhooks is enabled in your environment, you will be able to add webhook URLs from the Admin > Integration page. Clicking “Configure Integration” will bring up a modal where you can add webhook URLs.

Make sure to give your webhook URLs recognizable names, as this is how they will be referenced when you create a Job.

When you click “Save Webhook URL,” we will attempt to call the provided URL with a standard GET request.

If successful, your webhook will be saved and you will be presented with a one-time secret key. This key is presented only once, and can be used to ensure that data is coming from Risk Cloud.
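Risk Cloud's exact verification mechanism is not described here, so the sketch below is only a generic pattern, under the assumption that your receiving service is handed the secret with each incoming call; the names are hypothetical:

```python
import hmac

# Generic sketch, not a documented Risk Cloud mechanism: compare a secret
# presented with an incoming webhook call against the one-time key you saved
# when the URL was configured. All names here are hypothetical.
STORED_SECRET = "the-key-shown-once-at-setup"

def is_trusted_caller(presented_secret: str) -> bool:
    # hmac.compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(presented_secret, STORED_SECRET)

print(is_trusted_caller(STORED_SECRET))       # True
print(is_trusted_caller("some-other-value"))  # False
```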

Creating a Risk Cloud Webhook Job

Now that you have configured one or more webhook URLs, you can begin adding webhook job operations. You can learn more about creating jobs in this help article.

Once you have specified your trigger and an optional message, you will want to select the “webhook” operation. Once this operation is selected, you can specify which webhook URL should be sent data when the job is triggered. We will show you an example of what data is being sent based on the workflow/trigger that you have selected.

NOTE: No custom field data will be sent with Risk Cloud Webhooks. We are only sending event data and record/workflow identification data.

When you save your job, everything will be ready for Risk Cloud to start sending event data via webhooks.

Reach out to [email protected] for additional support or your relationship manager to enable this feature.

Risk Cloud API: View Applications, Workflows, and Steps

How to obtain Application, Workflow, and Step information for review and future API requests via the Risk Cloud API.

API Authentication

Prior to any interaction with Risk Cloud's APIs, we will need to obtain an Access Token for the Authorization header. Instructions on obtaining an Access Token can be found here.

Background

When working with the Risk Cloud via the API, it is common to require IDs for entities such as Applications, Workflows, and Steps.

The endpoint described below returns an array of all Applications in your environment, including their Workflows and Steps. It provides important ID data for Applications, Workflows, and Steps that can be used to interact with the API further, such as using a Step ID to create Records or a Workflow ID to get a list of Fields.

Usage

To obtain a list of all Applications, Workflows, and Steps in your environment, make the following request.

curl --request GET 'https://your-company.logicgate.com/api/v1/applications?generic=true' \
--header 'Authorization: Bearer {ACCESS_TOKEN}'

The response will contain an array of all Applications. Application, Workflow, and Step IDs can be found as the values of the corresponding id properties for use in future API requests.

[
  {
    "active": true,
    "color": "string",
    "copied": true,
    "created": "2019-08-24T14:15:22Z",
    "homeScreen": {
      "active": true,
      "application": {},
      "created": "2019-08-24T14:15:22Z",
      "id": "string",
      "tableReports": [
        null
      ],
      "updated": "2019-08-24T14:15:22Z"
    },
    "icon": "fa-bookmark",
    "id": "string",
    "imported": true,
    "live": true,
    "name": "string",
    "permissionsEnabled": true,
    "type": "string",
    "updated": "2019-08-24T14:15:22Z",
    "workflows": [
      {
        "active": null,
        "allowGroups": null,
        "application": null,
        "applicationId": null,
        "created": null,
        "fields": null,
        "id": null,
        "name": null,
        "primaryField": null,
        "priority": null,
        "recordPrefix": null,
        "requireGroups": null,
        "sequence": null,
        "sla": null,
        "steps": null,
        "updated": null,
        "userGroups": null,
        "workflowMaps": null,
        "workflowType": null,
        "xpos": null,
        "ypos": null
      }
    ]
  }
]
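
As a sketch, assuming the response has been parsed into Python (the sample data below is illustrative and trimmed to the relevant keys), the Application and Workflow IDs can be collected into a lookup:

```python
# Sketch: collect Application and Workflow IDs from the applications response.
# `applications` stands in for the parsed JSON array shown above, trimmed to
# the keys this snippet actually reads.
applications = [
    {
        "id": "appId1",
        "name": "Example Application",
        "workflows": [
            {"id": "wfId1", "name": "Example Workflow"},
        ],
    }
]

# Map each Application ID to the list of its Workflow IDs; `or []` guards
# against a null workflows property.
ids = {
    app["id"]: [wf["id"] for wf in (app.get("workflows") or [])]
    for app in applications
}
print(ids)  # {'appId1': ['wfId1']}
```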

 

Risk Cloud API: Viewing Fields

This article walks through obtaining Field information for review and future API requests via the Risk Cloud API.

API Authentication

Prior to any interaction with Risk Cloud's APIs, we will need to obtain an Access Token for the Authorization header. Instructions on obtaining an Access Token can be found here.

Background

When working with the Risk Cloud via the API, it is common to require IDs for Fields for accomplishing tasks such as updating Records.

The following endpoint will return an array of Field objects that exist within a given Workflow.

Usage

Obtaining all Fields of a Workflow can be accomplished in two steps:

  1. Obtaining a Workflow ID

  2. Requesting the Workflow's Fields

Obtaining a Workflow ID

To obtain Workflow IDs in your environment (more information on this endpoint can be found in Viewing Applications, Workflows, and Steps), make the following request.

curl --request GET 'https://your-company.logicgate.com/api/v1/applications?generic=true' \
--header 'Authorization: Bearer {ACCESS_TOKEN}'

The response will contain an array of all Applications. Workflow IDs can be found as the values of the id properties of the objects in each "workflows" array.

[
  {
    ...
    "workflows": [
      {
        ...
        "id": null
      }
    ]
  }
]

Requesting the Workflow's Fields

Now that you have obtained a Workflow ID, you can obtain a list of all Fields on that Workflow.

curl --request GET 'https://your-company.logicgate.com/api/v1/fields/workflow/{workflowId}/values' \
--header 'Authorization: Bearer {ACCESS_TOKEN}'

The response will contain a list of all Fields that exist within the given Workflow. Their Field IDs can be used for updating Records or viewing current Field values.

[
  {
    "active": true,
    "convertibleTo": [
      "string"
    ],
    "created": "2019-08-24T14:15:22Z",
    "currentValues": [
      {
        "active": null,
        "archived": null,
        "created": null,
        "defaultField": null,
        "discriminator": null,
        "empty": null,
        "field": null,
        "fieldId": null,
        "id": null,
        "idOrTransientId": null,
        "isDefault": null,
        "numericValue": null,
        "priority": null,
        "textValue": null,
        "transientIdOrId": null,
        "updated": null,
        "valueType": null
      }
    ],
    "defaultValues": [
      {
        "active": null,
        "archived": null,
        "created": null,
        "defaultField": null,
        "discriminator": null,
        "empty": null,
        "field": null,
        "fieldId": null,
        "id": null,
        "idOrTransientId": null,
        "isDefault": null,
        "numericValue": null,
        "priority": null,
        "textValue": null,
        "transientIdOrId": null,
        "updated": null,
        "valueType": null
      }
    ],
    "discrete": true,
    "fieldType": "TEXT",
    "global": true,
    "id": "string",
    "label": "string",
    "labels": [
      "string"
    ],
    "name": "string",
    "operators": [
      "EQUALS"
    ],
    "tooltip": "string",
    "updated": "2019-08-24T14:15:22Z",
    "validTypeForCalculationInput": true,
    "valueType": "string",
    "workflow": {
      "active": true,
      "allowGroups": true,
      "application": {},
      "applicationId": "string",
      "created": "2019-08-24T14:15:22Z",
      "fields": [
        null
      ],
      "id": "string",
      "name": "string",
      "primaryField": {},
      "priority": 0,
      "recordPrefix": "string",
      "requireGroups": true,
      "sequence": {},
      "sla": {},
      "steps": [
        null
      ],
      "updated": "2019-08-24T14:15:22Z",
      "userGroups": [
        null
      ],
      "workflowMaps": [
        null
      ],
      "workflowType": "string",
      "xpos": 0,
      "ypos": 0
    },
    "workflowId": "string"
  }
]
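
As a sketch, assuming the response has been parsed into Python (the sample data below is illustrative and trimmed to the relevant keys), a name-to-ID lookup of the Workflow's Fields can be built for later requests:

```python
# Sketch: build a name -> id lookup for the Fields returned for a Workflow.
# `fields` stands in for the parsed JSON array shown above, trimmed to the
# keys this snippet actually reads.
fields = [
    {"id": "fieldId1", "name": "status", "fieldType": "TEXT"},
    {"id": "fieldId2", "name": "owner", "fieldType": "TEXT"},
]

field_ids_by_name = {field["name"]: field["id"] for field in fields}
print(field_ids_by_name["status"])  # fieldId1
```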

 

Risk Cloud API: Viewing Users

This article walks through obtaining User information for review and future API requests via the Risk Cloud API.

API Authentication

Prior to any interaction with the Risk Cloud API, we will need to obtain an Access Token for the Authorization header. Instructions on obtaining an Access Token can be found here.

Permissions

Listing all Users via the Risk Cloud API requires an Access Token from an Admin Primary account.

Background

When working with the Risk Cloud via the API, it is common to require User IDs for tasks ranging from enabling and disabling Users to assigning Users to Records.

The following endpoint will return an array of all Users in your Risk Cloud environment.

Usage

To obtain a list of all Users in your environment, make the following request.

curl --request GET 'https://your-company.logicgate.com/api/v1/users' \
--header 'Authorization: Bearer {ACCESS_TOKEN}'

The response will contain an array of all Users in your environment. Their IDs can be found as the values of the id properties for use in future API requests.

[
  {
    "active": true,
    "convertibleTo": [
      null
    ],
    "created": "2019-08-24T14:15:22Z",
    "currentValues": [
      null
    ],
    "defaultValues": [
      null
    ],
    "discrete": true,
    "fieldType": "string",
    "global": true,
    "id": "string",
    "label": "string",
    "labels": [
      null
    ],
    "name": "string",
    "operators": [
      null
    ],
    "tooltip": "string",
    "updated": "2019-08-24T14:15:22Z",
    "validTypeForCalculationInput": true,
    "valueType": "string",
    "workflow": {},
    "workflowId": "string",
    "allowLocalLogin": true,
    "applicationEntitlements": [
      null
    ],
    "archived": true,
    "autoprovisioned": true,
    "company": "string",
    "defaultField": {},
    "disabled": true,
    "discriminator": "string",
    "email": "string",
    "empty": true,
    "external": true,
    "field": {},
    "fieldId": "string",
    "first": "string",
    "idOrTransientId": "string",
    "imageUrl": "string",
    "intercomHash": "string",
    "isDefault": true,
    "languageTag": "string",
    "last": "string",
    "lastLogin": {},
    "locked": true,
    "loginAttempts": 0,
    "modulePermissionSets": [
      null
    ],
    "notificationPreference": true,
    "numericValue": 0,
    "password": "string",
    "priority": 0,
    "records": [
      null
    ],
    "resetPasswordToken": "string",
    "roles": [
      null
    ],
    "scimStatus": "string",
    "sendEmail": true,
    "serviceAccount": true,
    "status": "string",
    "stepPermissionSets": [
      null
    ],
    "superUser": true,
    "textValue": "string",
    "tier": "string",
    "timeZone": "string",
    "transientIdOrId": "string"
  }
]
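
As a sketch, assuming the response has been parsed into Python (the sample data below is illustrative and trimmed to the relevant keys), an email-to-ID lookup of enabled Users can be built:

```python
# Sketch: build an email -> id lookup of enabled Users from the response.
# `users` stands in for the parsed JSON array; only a few of the many
# properties shown above are read here.
users = [
    {"id": "userId1", "email": "a@example.com", "disabled": False},
    {"id": "userId2", "email": "b@example.com", "disabled": True},
]

active_user_ids = {
    user["email"]: user["id"] for user in users if not user.get("disabled")
}
print(active_user_ids)  # {'a@example.com': 'userId1'}
```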

Risk Cloud API: Create Users

This article walks through creating Users via the Risk Cloud API.

API Authentication

Prior to any interaction with the Risk Cloud API, we will need to obtain an Access Token for the Authorization header. Instructions on obtaining an Access Token can be found here.

Permissions

Creating a User via the Risk Cloud API requires an Access Token from an Admin Primary account.

Background

In order to create Users in your environment via the Risk Cloud API, we will need to assemble the JSON of the User for an API POST request.

The Create User endpoint can be helpful for integrations that automate the onboarding of new colleagues or teams to the Risk Cloud.

Usage

Creating a User via the Risk Cloud API can be accomplished in two steps:

  1. Configure the User in JSON

  2. Create the User via a request

Configure the User in JSON

Below is a sample JSON body of a User to be created.

{
  "active": true,
  "status": "Active",
  "tier": "SECONDARY",
  "valueType": "User",
  "sendEmail": false,
  "email": "[email protected]",
  "first": "FirstName",
  "last": "LastName",
  "company": "Your Company"
}

The tier, sendEmail, email, first, last, and company properties should be adjusted for the User you will be creating.

The sendEmail property is important: if it is true, the system will send an automatic Welcome Message after the User is created.

Additionally, the tier property designates the User's access tier. Values can be:

  • "PRIMARY" Primary users are users who have access to the Build section of the app (these are typically Admin users).
  • "SECONDARY" Secondary users are users without access to the Build section (these are typically end-users).
  • "LIMITED" Limited users are secondary users who only use the platform infrequently (these are typically end-users performing quarterly or annual tasks).
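
As a sketch, the request body above can be assembled in Python, validating the tier value against the three tiers just described before sending (the helper function name is our own, not part of any SDK):

```python
import json

# Sketch: assemble the Create User body, validating the tier value against
# the three tiers described above. The helper is illustrative, not an SDK.
VALID_TIERS = {"PRIMARY", "SECONDARY", "LIMITED"}

def build_user_body(email, first, last, company, tier="SECONDARY", send_email=False):
    if tier not in VALID_TIERS:
        raise ValueError(f"tier must be one of {sorted(VALID_TIERS)}")
    return {
        "active": True,
        "status": "Active",
        "tier": tier,
        "valueType": "User",
        "sendEmail": send_email,
        "email": email,
        "first": first,
        "last": last,
        "company": company,
    }

user_body = build_user_body("new.user@example.com", "FirstName", "LastName", "Your Company")
payload = json.dumps(user_body)  # JSON string for the request's --data-raw body
```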

Create the User via a request

Once the JSON for the User you'd like to create has been assembled, you can create the User by placing the JSON in the following request.

curl --request POST 'https://your-company.logicgate.com/api/v1/users' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {ACCESS_TOKEN}' \
--data-raw '{
"active": true,
"status": "Active",
"tier": "SECONDARY",
"valueType": "User",
"sendEmail": false,
"email": "[email protected]",
"first": "FirstName",
"last": "LastName",
"company": "Your Company"
}'

If successful, the User will be created and the response will contain the new User's information as shown below, including the User ID which can be used for future API requests.

{
  "status":"Active",
  "id":"a4b3c2d1",
  "active":true,
  "created":1629383622871,
  "updated":1629383622932,
  "email":"[email protected]",
  "company":"Your Company",
  "imageUrl":null,
  "imageS3Key":null,
  "tier":"SECONDARY",
  "first":"FirstName",
  "last":"LastName",
  "languageTag":"en-GB",
  "timeZone":"Europe/Kiev",
  "notificationPreference":false,
  "mfaEnabled":false,
  "mfaSetup":false,
  "autoprovisioned":false,
  "scimStatus":null,
  "sendEmail":false,
  "roles":[],
  "stepPermissionSets":[],
  "applicationEntitlements":[],
  "records":[],
  "lastLogin":null,
  "external":false,
  "superUser":false,
  "name":"FirstName LastName",
  "locked":false,
  "idOrTransientId":"a4b3c2d1",
  "transientIdOrId":"a4b3c2d1",
  "empty":false
}

 

Accessibility Improvements at LogicGate

Where to Begin?

1. Start Small

Screenshot of the LogicGate application’s record page with circles denoting which items have tab focus.
The Tab Stops feature in Microsoft Accessibility Insights visually shows the focus path of each tab press across a page.

What do we Attack First?

1. Using aria-label

Screenshot of the top toolbar of a record page in the LogicGate application with readout from a screen reader.
Without an aria-label, buttons and other elements are not given context when announced through screen readers. We were able to address these issues by adding “aria-label” to each button, with short text describing what each button does. Ex: In the above, aria-label=”Add to your favorites”

In the above image (left), a label, “What is the Weather Like Today,” is present, but it is not associated with the accompanying radio buttons, so screen readers lose that context (middle). This was fixed (right) by changing the label to a legend, which provides text for screen readers in radio groups.

Having sufficient foreground and background color contrast helps text and labels stand out to users. WAVE has a built-in color contrast checker where you can quickly lighten and darken colors until they pass WCAG guidelines.

How do we Know We’re Helping?

  • Can we interact with elements using common keys (enter, space, arrow keys, etc.)?
Screenshot of toolbar from LogicGate application with active focus on the notifications icon.
We use an outline on many items to more easily show when an element has focus.

What’s Next?

Left: our first iteration used an extra focus stop on an SVG asterisk to show when fields are required. Right: our current iteration denotes required fields with a text asterisk while still conveying the information to a screen reader. (The text explaining that asterisks denote required fields is not pictured.)
What Do We Look for in Developers?

Interviews are stressful. From finding time to meet a slew of people with different titles, to handling a dreaded technical curveball, interviewing can feel like a full-time job, except one where you don’t get paid. Amidst all of this, you’re trying to ask the right questions to determine if you’ll want to be a member of the team six months after signing the acceptance letter. At the very least, knowing what to expect would take some stress out of the interview process.

At LogicGate, we want you to be prepared every step of the way: from your first chat with a team member, to your final onsite. We figure the best way to prepare is to know exactly what we’ll ask, so consider this a crib sheet for your interview journey. What can you expect from your first day on the job to your one-year anniversary and beyond? While we can’t create a time machine to look at your one-year anniversary, we can describe what we look for in employees and the culture we provide at LogicGate.

What does LogicGate look for in engineers?

Aside from bug-smashing and coding skills, we look for engineers who are considerate, curious, and collaborative. Being a considerate engineer doesn’t just mean organizing variables alphabetically with meaningful names. While we appreciate taking the time to clean up code, a thoughtful engineer considers the user and recognizes how every line of code committed helps solve a larger business problem.

We also look for engineers who anticipate problems before they occur and are happy to research solutions that could improve our team’s efficiency. When the answer isn’t obvious, are they willing to reach out for help, jump on a call to pair, or message a channel for clarification?

While we appreciate coding capabilities and prowess in certain areas of the stack, we are just as closely looking for how a candidate helps enhance our six core values. We hope that anyone joining our team strengthens our commitment to these values as they grow into their position.

What can I expect from the interview process?

Our goal is to have a breezy interview process, especially considering candidates use their free time to apply. We aim for transparency while being careful not to waste anyone’s time.

1. Phone Screen

A team member will reach out to you for a casual chat, usually no more than 30 minutes. While chatting, communication is key. We look for engineers who strengthen our core values, which are integral parts of our organization. Have you embraced curiosity by trying out new testing utilities? Have you done the right thing by taking ownership of a mistake you made in the past?

Most importantly, what are you looking for? Everyone has a different vision of the ideal workplace. We’d like to hear what motivates you in your career — whether that’s thoughtful perks or opportunities to learn. Finally, do you see LogicGate as a place where you can thrive? If so, we’re happy to be a potential next step in your journey.

2. Hiring Manager Interview

Don’t worry, we won’t be asking you to pseudo-code Dijkstra’s algorithm or tell us how to set up CD variables. This is a two-way conversation between you and a member of our engineering team, so feel free to show off and name some technologies! When you’re met with a challenging problem, what are some tools you’ve used?

We also want to hear how you like to work with other team members. Do you prefer to jump on a call and chat about technical issues, write a bulleted list of edge cases, or perhaps you appreciate starting a thread with other engineers? One of our values at LogicGate is to be as one. We hope to discover the skills you bring to LogicGate that help strengthen and empower our growing development team.

3. Tech Challenge

You’ll then receive a take-home challenge catered to the role you applied for. We haven’t slipped any hidden bugs into the code to make you squirm. Instead, we want to see how you tackle problem solving. We hope these challenges highlight your skills without wasting time with unnecessary fluff.

Overall, we’re looking for:

  • Comfort in the coding language of your stack
  • Consideration to keep code tidy and use thoughtful naming conventions
  • Ability to follow instructions and determine critical functionality
  • Recognition of existing code patterns
  • Ability to discuss your thought process in a recap

4. Onsite

The final step of our process is an onsite, which may or may not happen in our Chicago office. This is the first time you’ll get to see our app in action. Many of us hadn’t heard of GRC before starting at LogicGate, so this is a good opportunity to ask how our app helps empower customers to solve their unique challenges.

As you meet more members of the team, we’ll revisit the technical competencies and core values from earlier calls. We’d also like to hear your thoughts on the technical challenge. What was your thought process when solving the challenge? After submission, did you consider another approach that might have worked?

We’re also available to answer any questions about working at LogicGate: what perks do we offer, how closely do we collaborate, why do we have a goat for a mascot?

We recognize LogicGate is also being interviewed, so we welcome any questions that come to mind. Overall, we hope you finish this step with a good idea of what we do and how we operate. If any question remains unanswered, feel free to reach out to a member of our team.

What is it like to work at LogicGate?

We want to get you involved as soon as possible. While some of the first week is spent onboarding, you’ll be greeted with several “easy win” tickets to get your feet wet without drowning in tasks.

As your knowledge of our app grows, you’ll tackle more challenges and become familiar with your squad’s responsibilities. Over the following months, small wins become larger victories, and you’ll begin touching new parts of the app or stack, should you desire. We definitely want our candidates to explore their interests and embrace curiosity.

We embrace the agile flow at LogicGate, which you’ll notice from the daily stand and ticket pointing, to a retrospective at the end of each sprint. We also encourage pairing with one another — even in our remote-first environment. All our developers, project managers, etc. work collaboratively and are quick to jump on a call with one another to solve a bug, clean up some logic, or figure out how to implement a user story.

Using the crawl, walk, run approach also helps us develop new features. Why create a monstrous new set of changes in one fell swoop when we can disassemble a feature into smaller pieces? This helps our entire engineering team, from frontend engineers to QA testers, develop, implement, and sign off on new features.

Interested?

Find out more about our open positions here.

v2021.4.0 Release Notes

Featured Updates

Jobs

What does the future hold for a given Record? Now there is a way to tell! With the new Upcoming Job Runs by Record endpoint, API users can get a glimpse of a Record's upcoming Job runs.

All Updates

New

 

Deleted

 

Deprecated

 

Changed

Method Parameter
  • Add step in query
  • Delete trigger in query
  • Add generic in query
  • Delete validated in query
  • Delete hasChild in query
  • Delete validated in query
  • Delete page in query
  • Delete size in query
  • Add hidden in query
  • Add tableReport in query
  • Delete map in query
  • Add id in query
  • Add name in path
  • Delete user in query
  • Add minUpdated in query
  • Delete record in query
  • Add direct in query
  • Add includeJiraWorkflows in query
  • Delete distinct in query

API Documentation

Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.

v2021.3.0 Release Notes

Featured Updates

Images

Release the cat memes! Images are further supported in the Rich Text portions of the Risk Cloud platform. API users can now retrieve and upload images via the following new endpoints.

POST/api/v1/images/upload

Record Audits

Have you ever been curious how active a particular Risk Cloud Workflow is? The GET /api/v1/audit/records endpoint now accepts a Workflow ID, allowing users to filter retrieved Record Audits by Workflow (timestamps are expected in milliseconds).
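
As an illustration of the millisecond expectation, the sketch below builds a filtered query string; the minUpdated parameter name is an assumption for illustration and is not confirmed by these notes:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# Sketch: build a Record Audits query filtered by Workflow. The endpoint
# expects timestamps in milliseconds; the minUpdated parameter name is an
# assumption for illustration.
since = datetime(2021, 3, 1, tzinfo=timezone.utc)
since_ms = int(since.timestamp() * 1000)

query = urlencode({"workflow": "WORKFLOW_ID", "minUpdated": since_ms})
print(f"/api/v1/audit/records?{query}")
```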

All Updates

New

POST/api/v1/images/upload

 

Changed

Method Parameter
  • Add generic in query
  • Add workflow in query
  • Add trigger in query
  • Delete step in query
  • Delete entitled in query
  • Add hasChild in query
  • Add minUpdated in query
  • Delete field in query
  • Delete numericValue in query
  • Delete textValue in query
  • Add validated in query
  • Add page in query
  • Add size in query
  • Delete cache in query
  • Delete workflow in query
  • Delete hidden in query
  • Delete tableReport in query
  • Add steps in query
  • Add workflow in query
  • Delete workflows in query
  • Delete field in query
  • Add distinct in query
  • Delete direct in query
  • Delete includeJiraWorkflows in query

API Documentation

Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.

v2021.2.0 Release Notes

Featured Updates

Favorites

Favorites are here! Users are now able to show some love to their favorite Records, Dashboards, and Reports.

The following new endpoints empower API users to manage and search their Favorites, including by type (e.g. Record, Dashboard, TableReport, VisualReport).

POST/api/v1/favorites

Record Due Date

The sun gently sets as the PUT /api/v1/records/due-date endpoint is deprecated as of v2021.2.0. API users should migrate to PATCH /api/v1/records/{recordId}/due-date, the replacement for this endpoint.

All Updates

New

POST/api/v1/favorites
POST/api/v1/slack/state

 

Deleted

POST/api/v1/jobs/scheduled

 

Deprecated

 

Changed

Method Parameter
  • Delete generic in query
  • Add page in query
  • Add record in query
  • Delete minUpdated in query
  • Add field in query
  • Add numericValue in query
  • Add textValue in query
  • Delete hasChild in query
  • Delete minUpdated in query
  • Delete validated in query
  • Add hidden in query
  • Add tableReport in query
  • Add workflows in query
  • Delete steps in query
  • Delete workflow in query
  • Add state in query
  • Add user in query
  • Add direct in query
  • Add includeJiraWorkflows in query
  • Delete distinct in query

API Documentation

Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.

v2021.1.0 Release Notes

Featured Updates

Record Search

Searching for Records of a particular Workflow? Summaries of record data can now be aggregated by Workflows and even Steps via the GET /api/v1/records/search/summarize endpoint.

Job History

No resume or CV necessary! Job history can now be obtained by the new GET /api/v1/jobs/history endpoint, which allows API users to obtain historical information of a given job including statuses, trigger dates, and more.

All Updates

New

POST/api/v1/jobs/scheduled

 

Deleted

 

Changed

Method Parameter
  • Add minUpdated in query
  • Delete page in query
  • Delete record in query
  • Add applicationId in query
  • Add entitled in query
  • Delete permitted in query
  • Add hasChild in query
  • Add minUpdated in query
  • Delete field in query
  • Delete numericValue in query
  • Delete textValue in query
  • Add cache in query
  • Add workflow in query
  • Delete page in query
  • Delete size in query
  • Delete hidden in query
  • Delete tableReport in query
  • Add steps in query
  • Add workflow in query
  • Delete workflows in query
  • Add map in query
  • Add nextCursor in query
  • Add size in query
  • Add nextCursor in query
  • Add size in query
  • Add field in query
  • Add record in query
  • Delete minUpdated in query

API Documentation

Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.

2 Quick Tips I’ve learned for FE Testing as a LogicGate Dev

Tip #1: Run a Single Test in Jasmine

A typical Jasmine suite runs every spec:

describe('Component: Table Report', () => {
  it('should render', () => {
    expect(component).toBeDefined();
  });
});

To run only a single suite, change describe to fdescribe (focused describe):

fdescribe('Component: Table Report', () => {
  it('should render', () => {
    expect(component).toBeDefined();
  });
});

To run only a single spec within a suite, change it to fit:

describe('Component: Table Report', () => {
  fit('should render', () => {
    expect(component).toBeDefined();
  });
});

Just remember to remove the f prefixes before committing, or the rest of the suite will be silently skipped.

Tip #2: Add Breakpoints Using Chrome Dev Tools

[Screenshot: the Chrome DevTools panel with the Sources tab (third tab) selected]
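A related trick that pairs well with breakpoints set in the Sources panel: a `debugger` statement in your source pauses execution at that line whenever DevTools is open, and is a no-op otherwise. (`renderTable` below is a made-up example function, not part of any real component.)

```javascript
// A `debugger` statement acts like a breakpoint placed directly in code:
// Chrome pauses here when DevTools is open, and ignores it otherwise.
function renderTable(rows) {
  debugger; // inspect `rows` in the Sources panel
  return rows.length;
}

console.log(renderTable(['a', 'b', 'c'])); // prints 3
```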
Kotlin at LogicGate

Here at LogicGate we are constantly on the lookout for new technology to add to our toolbelts. One of the latest additions to our tech stack has been getting a lot of attention in the JVM community after being named an officially supported language for Android development by Google. It’s Kotlin!

The Kotlin Use Case

LogicGate is primarily a Spring Boot application written in Java 8. While the MVP of the application was being developed, the emphasis was on shipping features quickly, and maintaining a sensible degree of test coverage unfortunately became an afterthought. A horror, we know. However, given the sparse test suite, the task presented a green-field opportunity and we were free to experiment a bit.

In comes Kotlin. We wanted to explore adding a new JVM language to our stack, quickly produce a high volume of base tests, and avoid some of Java's verbosity. This was the perfect opportunity to try something new with relatively low risk.

Why Kotlin over Java?

Kotlin is a modern language with a strong type system designed to minimize or completely eliminate null references. Kotlin also offers more solid functional-style support than Java 8, where you must first convert a collection to a stream to perform functional operations on it.

Java:

String joined = things.stream()
  .map(Object::toString)
  .collect(Collectors.joining(", "));

Kotlin:

val joined = things.joinToString(", ")

This simple example shows that Kotlin lets developers write clean, concise, functional-style code that stays readable, with far less verbosity, and with all the beauty of being on the JVM.

This, coupled with the Java interoperability, makes Kotlin a force to be reckoned with as a programming language of choice.

The Kotlin Experience

Tests are an amazing way to get developers familiar with Kotlin. They provide a safe place to experiment and learn without the fear of accidentally shipping bugs to production. During the early implementation of our Kotlin test suite we were able to iterate based on new ideas and inter-developer debates about proper Kotlin idioms. Since production code wasn't at stake, such refactors were low-stress.

Another huge pro of Kotlin for our dev team is its amazing interoperability with Java, and, as IntelliJ users, we have found the IDE support for Kotlin incredible. We are able to use any Java class within our Kotlin code with no problem. This was a huge benefit for us and a big reason why we chose Kotlin for our test suite.

We went from 0 tests to 300+, both unit and integration, all written in Kotlin! It has been a great experience and has really proven to us that Kotlin can provide value on the JVM.

The Future of Kotlin at LogicGate

Now that all developers on our backend team have gotten their hands dirty with Kotlin, we are ready to write some production code! We plan to explore additional Kotlin integration in the application through incremental conversion of utility classes. As our team grows and we scale our core product, we will definitely look to Kotlin as a strong candidate for new microservices and internal projects.

Spring Boot with Neo4j & MySQL

Our customers use LogicGate to build complex process applications that link organizational hierarchies, assets, and compliance requirements across the enterprise. The dynamic nature of the platform (giving users the ability to customize objects and their attributes, workflow, etc.) can be supported by a relational database, to a point, using an entity-attribute-value model. However, for complex processes with recursively linked entities, this relational model restricts insight across deeply linked assets.

How do we access these recursively linked entities? Answer: Neo4j.

Neo4j uses nodes and relationships instead of tables and join columns. Nodes store a small amount of data, while the majority of the information lives in the relationships between nodes. This allows large-scale traversals of recursively linked entities to be performed with ease.

After scouring the Internet for resources on how to use Neo4j alongside another datasource, I struggled through a large volume of outdated material. With lots of help from the Neo4j Slack channel, I was able to get a MySQL datasource and a Neo4j datasource running together in the same application. In this post I will explain how to configure it all. Enjoy!

Graph database + Relational database = <3

Spring Data Neo4j 4.1.6 is the last release before 4.2.0, which officially came out on Jan. 25th, 2017. One might ask, "Why not just use 4.2.0?" Well, 4.2.0 requires Spring Boot 1.5.0, which does not have a release version just yet. So let's focus on the latest Spring Data Neo4j release version and Spring Boot 1.4.x.

First, install Neo4j by following the instructions found on this page. If on a Mac, simply run brew install neo4j. When Neo4j is done installing, run neo4j start in a terminal to start up the database. That is all that is needed to install Neo4j.

Now let's dive into the Spring Boot portion. Open the build.gradle file and add the following dependencies:

compile "org.springframework.data:spring-data-neo4j-rest:3.4.6.RELEASE"
compile "org.springframework.data:spring-data-neo4j:4.1.6.RELEASE"
compile "org.neo4j:neo4j-ogm-core:2.0.6"
compile "org.neo4j:neo4j-ogm-http-driver:2.0.6"

For this use case, the communication method to the Neo4j database has to be a RESTful call. To achieve this the HTTP driver can be used. There are two other driver options: Bolt and Embedded. This post will focus on using the HTTP driver.

Refresh the Gradle dependencies by running ./gradlew clean build in the root directory of the Spring Boot project. After this, we can start configuring the application.

We will need to edit existing annotations, and add new ones, within the Java file that contains the application configuration.

Application Class Annotations

@ComponentScan(values = {"com.example"})

This tells Spring Boot to scan all project packages. com.example holds all the classes that pertain to both the relational and graph databases, including those annotated with @Controller, @Service, @Entity, and @Repository.

@EnableAutoConfiguration(exclude = {Neo4jDataAutoConfiguration.class, DataSourceAutoConfiguration.class})

This tells Spring Boot that we will explicitly set up our datasources ourselves, which is why Neo4jDataAutoConfiguration.class and DataSourceAutoConfiguration.class are excluded.

Currently the application class should look like the following:

package com.example;

import ...

@Configuration
@ComponentScan(values = {"com.example"})
@EnableAutoConfiguration(exclude = {Neo4jDataAutoConfiguration.class, DataSourceAutoConfiguration.class})
public class DemoApplication {

  public static void main(String[] args) {
    SpringApplication.run(DemoApplication.class, args);
  }
}

Datasource Configuration Class

The next step is to create a configuration class that configures both the MySQL and Neo4j databases. The annotations for this class are the following:

@Configuration
@EnableNeo4jRepositories(basePackages = "com.example.graph")
@EnableJpaRepositories(basePackages = "com.example.relational")
@EnableTransactionManagement
  • The @Configuration annotation tells Spring, "This is a configuration file, please load it!" Bean definitions will be generated from it at runtime
  • @EnableNeo4jRepositories(basePackages = "com.example.graph") tells Spring Boot to treat all repositories under the package com.example.graph as Neo4j graph repositories
  • @EnableJpaRepositories(basePackages = "com.example.relational") tells Spring Boot to treat all repositories under the package com.example.relational as relational repositories
  • @EnableTransactionManagement allows us to use annotation-driven transaction management

Now that the annotations are set up, let's begin building out our configuration class.

public class DatasourceConfig extends Neo4jConfiguration

Our class needs to extend Neo4jConfiguration so that Neo4j settings can be configured explicitly.

Next, create a configuration bean that will configure the Neo4j database.

@Bean
public org.neo4j.ogm.config.Configuration getConfiguration() {
  org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
  config
    .driverConfiguration()
    .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
    .setURI("http://YOUR_USERNAME:YOUR_PASSWORD@localhost:7474");
  return config;
}

This method wires up the Neo4j database with Spring Boot. It sets the location of the database along with a username and password, and states which driver we are using; in this case, the HttpDriver.

The next bean creates the SessionFactory, which applies this configuration to the Neo4j sessions used to interact with the Neo4j database.

@Bean
public SessionFactory getSessionFactory() {
  return new SessionFactory(getConfiguration(), "com.example.graph");
}

Another Neo4j bean that needs to be configured is the getSession bean. This allows Neo4j to integrate with the Spring Boot application.

@Bean
public Session getSession() throws Exception {
  return super.getSession();
}

Now that Neo4j is almost taken care of, let's set up the relational datasource; in this case, MySQL. To achieve this, we need to create a datasource bean as well as an entity manager bean.

@Primary
@Bean(name = "dataSource")
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource dataSource() {
  return DataSourceBuilder
    .create()
    .driverClassName("com.mysql.jdbc.Driver")
    .build();
}

@Primary
@Bean
@Autowired
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
  LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();
  entityManagerFactory.setDataSource(dataSource);
  entityManagerFactory.setPackagesToScan("com.example.core");
  entityManagerFactory.setJpaDialect(new HibernateJpaDialect());
  Map<String, String> jpaProperties = new HashMap<>();
  jpaProperties.put("hibernate.connection.charSet", "UTF-8");
  jpaProperties.put("spring.jpa.hibernate.ddl-auto", "none");
  jpaProperties.put("spring.jpa.hibernate.naming-strategy", "org.springframework.boot.orm.jpa.SpringNamingStrategy");
  jpaProperties.put("hibernate.bytecode.provider", "javassist");
  jpaProperties.put("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");
  jpaProperties.put("hibernate.hbm2ddl.auto", "none");
  jpaProperties.put("hibernate.order_inserts", "true");
  jpaProperties.put("hibernate.jdbc.batch_size", "50");

  entityManagerFactory.setJpaPropertyMap(jpaProperties);
  entityManagerFactory.setPersistenceProvider(new HibernatePersistenceProvider());
  return entityManagerFactory;
}

These beans are declared primary because the MySQL database should take precedence over the Neo4j database.

The JPA properties can be tweaked to your liking as well!

The last things that need to be set up are the transaction managers: one for the relational database, one for the Neo4j database, and then an overall manager for the application.

@Autowired
@Bean(name = "neo4jTransactionManager")
public Neo4jTransactionManager neo4jTransactionManager(Session session) {
  return new Neo4jTransactionManager(session);
}

@Autowired
@Primary
@Bean(name = "mysqlTransactionManager")
public JpaTransactionManager mysqlTransactionManager(LocalContainerEntityManagerFactoryBean entityManagerFactory)
  throws Exception {
  return new JpaTransactionManager(entityManagerFactory.getObject());
}


@Autowired
@Bean(name = "transactionManager")
public PlatformTransactionManager transactionManager(Neo4jTransactionManager neo4jTransactionManager, JpaTransactionManager mysqlTransactionManager) {
  return new ChainedTransactionManager(
    mysqlTransactionManager,
    neo4jTransactionManager
  );
}

The ChainedTransactionManager allows multiple transaction managers to participate in a single transaction: it starts the transaction in each manager in the order given, then commits in reverse order, so if one commit fails, the managers that have not yet committed can still roll back.

I have created a repository with a demo application that can be found on GitHub.

That's it! The application now has access to both MySQL and Neo4j! Feel free to like or comment; all constructive criticism is welcome!

This is my first blog post ever! Wahoo!