The Risk Cloud API is a collection of RESTful API endpoints that empower you and your team to directly integrate, automate, and build with the Risk Cloud. Risk Cloud API endpoint payloads are JSON-based, with some endpoints supporting exports in CSV and XLSX formats for flexible integration.
Explore our full API documentation or follow the step-by-step walkthrough below.
In this walkthrough, we will go over some basic concepts of the Risk Cloud API, including authentication, pagination, getting data, and updating data.
The Risk Cloud API uses OAuth 2.0 for authentication, with a bearer Access Token sent in the Authorization HTTP header. To obtain an API Access Token and get started building out your integration, reference the guide Risk Cloud API: Authentication.
The Risk Cloud API contains a variety of endpoints that may return a substantial amount of listed data. These endpoints utilize a style of offset pagination to provide a flexible and consumable means of processing Risk Cloud data. To learn more about pagination in the Risk Cloud API, reference the guide Risk Cloud API: Pagination.
From exporting to data lakes to fine tuning data for existing dashboard tools, the Risk Cloud API provides a flexible means of exporting data from your Risk Cloud environment. Linked below are guides covering common use cases for exporting Risk Cloud environment data. For all available endpoints, feel free to explore our full API documentation.
The Risk Cloud API can also perform actions in your Risk Cloud environment such as creating records and users or updating fields and attachments on records. To learn more about modifying data in your Risk Cloud environment via the Risk Cloud API, reference the linked guides below. For all available endpoints, feel free to explore our full API documentation.
In addition to the Risk Cloud API, there are also Risk Cloud Webhooks, which allow you to enhance your custom integrations by sending Risk Cloud automation event data to your external systems. To learn more, check out our guide Risk Cloud Webhooks.
The Risk Cloud API contains a variety of endpoints that may return a substantial amount of listed data. These endpoints utilize a style of offset pagination to provide a means of processing the data in smaller portions.
Risk Cloud API endpoints that support Pagination accept two optional query parameters to indicate what portion of data to return.
- `page` - an integer representing the zero-indexed page number (must not be less than 0, defaults to 0)
- `size` - an integer representing the size of the page and maximum number of items to be returned (must not be less than 1, defaults to 20)

These query parameters function conceptually similarly to how pages are implemented in the Risk Cloud UI, where `page` is the page number value, albeit zero-indexed, and `size` is the Results per page value.
The Field Read All endpoint, `GET /api/v1/fields`, utilizes Pagination. If there are 50 active Fields (numbered 1-50) in a Risk Cloud environment, then the following query parameters will return the following Fields.
| Page | Size | Request | Fields |
| --- | --- | --- | --- |
| None (Default 0) | None (Default 20) | GET /api/v1/fields | 1-20 |
| 0 | 20 | GET /api/v1/fields?page=0&size=20 | 1-20 |
| 1 | 20 | GET /api/v1/fields?page=1&size=20 | 21-40 |
| 2 | 20 | GET /api/v1/fields?page=2&size=20 | 41-50 |
| 0 | 8 | GET /api/v1/fields?page=0&size=8 | 1-8 |
| 1 | 8 | GET /api/v1/fields?page=1&size=8 | 9-16 |
When a Risk Cloud API endpoint returns a Page, the response body contains a variety of properties.
| Property | Type | Description |
| --- | --- | --- |
| content | array | A list of the returned items |
| number | integer | The zero-indexed page number |
| size | integer | The size of the page and maximum number of items to be returned |
| totalElements | integer | The total number of items available |
| totalPages | integer | The total number of pages available based on the size |
| first | boolean | Whether the current page is the first one |
| last | boolean | Whether the current page is the last one |
| empty | boolean | Whether the current page is empty |
| numberOfElements | integer | The number of items currently on this page |
| sort | object | The sorting parameters for the page |
| sort.empty | boolean | Whether the sorting parameters are empty |
| sort.sorted | boolean | Whether the page items are sorted |
| sort.unsorted | boolean | Whether the page items are not sorted |
Depending on the integration, there are multiple strategies for processing data from a Risk Cloud API endpoint that supports Pagination.
The Bulk strategy involves sending a single request to obtain a bulk result. This is accomplished by providing a large value for the `size` query parameter. The `size` value should be large enough to surpass the expected maximum amount of possible returned items. An example would be: `GET /api/v1/fields?size=1000`

The items can then be obtained from the `content` property of the response.

```
CALL GetFields with size as 1000 RETURNING response
SET items to response.content
```
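As a rough illustration of the Bulk strategy, here is a minimal Python sketch using the third-party requests library; the base URL and Access Token are placeholders for your environment, and error handling is kept to a minimum.

```python
import requests

BASE_URL = "https://your-company.logicgate.com"  # your Risk Cloud environment URL
ACCESS_TOKEN = "{ACCESS_TOKEN}"                  # obtained as described in the Authentication guide

# Request one large page so all Fields are (expected to be) returned in a single call
response = requests.get(
    f"{BASE_URL}/api/v1/fields",
    params={"size": 1000},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
page = response.json()

items = page["content"]  # the returned items live in the "content" property of the Page
print(f"Retrieved {len(items)} of {page['totalElements']} Fields")
```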
The Iteration strategy involves sending multiple requests and assembling a result. This can be accomplished in multiple ways, including the following.
- Incrementing the `page` number until a response where `last` is `true` is received
- Incrementing the `page` number until it reaches the amount of the `totalPages` response property

```
SET items to []
SET index to 0
REPEAT
    CALL GetFields with page as index RETURNING response
    APPEND response.content to items
    INCREMENT index
UNTIL response.last = true
```
The Risk Cloud API contains the Record Search endpoint `GET /api/v1/records/search` to provide a means of searching and filtering Records based on various parameters.

The Record Search endpoint `GET /api/v1/records/search` is a Paginated endpoint that returns a Page of Records for a given `page` and `size`. Feel free to reference Risk Cloud API: Pagination for more information on how Paginated endpoints function in the Risk Cloud API.
While `page` and `size` are optional query parameters for some Paginated endpoints, they are required query parameters for the Record Search endpoint.
The response payload of the Record Search endpoint can be found in our API documentation.
To filter the Record Search to only return a Page of Records from a specific Workflow, add the `workflow` query parameter to the Record Search request.

- `workflow`: the unique ID of a Workflow, filtering the Record Search to only return Records from that Workflow. To obtain a Workflow ID, reference Risk Cloud API: View Applications, Workflows, and Steps.
Note: `page` and `size` query parameters are required for the Record Search endpoint
GET /api/v1/records/search?workflow={workflowId}&page=0&size=20
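For illustration, the same Workflow-filtered search could be made with a short Python sketch using the requests library; the Workflow ID, base URL, and Access Token are placeholders, and the response is assumed to follow the Page and Record structure described below.

```python
import requests

BASE_URL = "https://your-company.logicgate.com"
HEADERS = {"Authorization": "Bearer {ACCESS_TOKEN}"}
WORKFLOW_ID = "{workflowId}"  # see Risk Cloud API: View Applications, Workflows, and Steps

response = requests.get(
    f"{BASE_URL}/api/v1/records/search",
    params={"workflow": WORKFLOW_ID, "page": 0, "size": 20},  # page and size are required
    headers=HEADERS,
)
response.raise_for_status()

# Each item in "content" pairs a Record with its Field properties
for item in response.json()["content"]:
    record = item["record"]
    print(record["id"], record["name"])
```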
To filter the Record Search to return a Page of Linked Records, add the following query parameters to the Record Search request.
- `parent`: the unique ID of the parent Record to seek linked child Records from
- `sourceWorkflow`: the unique ID of the Workflow that the `parent` Record is from
- `workflow`: the unique ID of the linked Workflow from which linked child Records are sought
- `mapped`: whether the returned Records are linked to the `parent` Record or not

Note: `page` and `size` query parameters are required for the Record Search endpoint
GET /api/v1/records/search?page=0&size=20&parent={recordId}&sourceWorkflow={workflowId}&workflow={linkedWorkflowId}&mapped=true
The Record Search endpoint returns a Page of Record objects, where the Records are within an array of the `content` property of the Page. Each Record object of the Page's `content` array is formatted as shown below.
| Property | Type | Description |
| --- | --- | --- |
| properties | array | A list of Custom Field and System Field properties |
| properties[].header | string | The name of the Custom Field or System Field |
| properties[].fieldType | enum | The Custom Field type for Custom Fields or null for System Fields |
| properties[].systemField | enum | The System Field type for System Fields or null for Custom Fields |
| properties[].recordId | string | The unique ID of the Record containing this property |
| properties[].url | string | The path extension to the Field, only on Record Names |
| properties[].rawValue | object / string / array | Either a single Value object, a list of Value objects, or a string representation, depending on the type of Field |
| properties[].formattedValue | string | The string representation of the Value or Values |
| record | object | A returned Record |
| record.id | string | The unique ID of the Record |
| record.depth | integer | The depth of the Record |
| record.name | string | The name of the Record |
| record.dueDate | long | The Due Date of the Record measured in milliseconds since the Unix epoch |
| record.user | boolean | Whether the Record has an assignee |
| record.canEdit | boolean | Whether the current User is allowed to edit this Record |
| record.canRead | boolean | Whether the current User is allowed to read this Record |
| record.step | Step | The current Step of the Record |
| record.workflow | Workflow | The Workflow of the Record |
| record.application | Application | The Application of the Record |
| record.jiraKey | string | The Jira Key of the Record, if one exists |
| record.stepId | string | The ID of the current Step of the Record |
| record.stepEnd | boolean | Whether the current Step of the Record is an End Step |
Originally posted on Nordic APIs
What if, one morning, you discover that every internal REST API endpoint of your web application is suddenly displayed as-is in your public REST API documentation? Your Developer Portal is overflowing with messages from eager API users struggling to make integrations with the exciting new functionality the endpoints provide.
- "…is the `name` property required on this `GET` request?"
- "…`Blog` object?"
- "…`User`, and now I'm seeing null pointer exceptions everywhere!"

On top of an overflowing portal, not only are the newly posted internal endpoints causing confusion, but regressions are being discovered in preexisting public API endpoints too! Whether this scenario feels like a distant bad dream or resonates a little too close to reality, as time and development tickets go by, the quality and conciseness of some existing API endpoints may slowly decline.
From older public endpoints to internal endpoints that may become public, how can you tidy up existing REST API endpoints for public usage? Let’s get tidying!
Request and response data can often be closely tied to internal database resources. It can be tempting to include all properties that are available on a resource in the API to support more integration possibilities. However, some resource properties may not be relevant to an API user.
Data Transfer Objects (DTOs), which provide a decoupled representation of your database resources, are particularly useful for making more concise request and response payloads for REST API endpoints. In addition to conciseness, DTOs also improve maintainability and flexibility, allowing for database and service level resources to be updated independently from their corresponding API representations.
Using a `User` resource as an example, a JSON representation of a `User` database resource may contain the following properties.

```json
{
  "id": "string",
  "email": "string",
  "password": "string",
  "roleId": "string",
  "companyId": "string",
  "firstName": "string",
  "lastName": "string",
  "loginAttempts": 0
}
```
A JSON representation of a `User` DTO could contain a scoped-down, API-friendly representation of the data, as shown below.

```json
{
  "id": "string",
  "email": "string",
  "firstName": "string",
  "lastName": "string"
}
```
For a given resource (e.g., a `User`), consider the following process for crafting a DTO representation:

1. List the available properties of the resource (`User.email`, `User.loginAttempts`, etc.).
2. Evaluate how valuable each property is to an API user. For example, `User.email` is high value in an API endpoint for both identifying the user and creating an integration to email the user.
3. Omit properties that only matter internally. For example, `User.loginAttempts` may only be relevant to the internal web application, and omitting it from the API may make the endpoint more concise.

It can be difficult to decide to omit an available property from a resource's DTO representation in an API. However, as API users build out integrations, it's less complicated to add a property to an API endpoint by popular demand than to risk breaking backward compatibility by removing a potentially unused existing property.
If introducing a DTO on an existing API endpoint’s request or response would break API compatibility, consider creating a separate endpoint for the DTO implementation and coordinating a migration or deprecation strategy with API users.
A single front-end change that works with what is available can be more valuable to a team in the short term than multiple changes across the stack, saving time and precious story points. However, over time, this can cause the alignment between the front-end and back-end to decline, which could call for a reassessment of the existing API endpoint.
For example, a radio button component with three options in a UI may be represented by three corresponding `boolean` properties in the API, where each option was added individually over time in separate code contributions. However, after taking a look at the current state of the functionality, the radio button component as a whole may be better represented in the API via a single `enum` property with values for each option.
If your web app has a user interface, observe how an existing endpoint is used in the frontend:
Once these questions have been addressed, consider updating the API endpoint accordingly to align it closer to how it’s currently being used.
If you have existing API guidelines for your public endpoints, dust them off! If you don’t have API guidelines, consider modeling existing API guidelines (e.g. Zalando, Microsoft, Google) or creating your own from API best practices.
Some examples of API guidelines to improve the consistency and clarity of an API could include:

- Which casing convention should property names follow (e.g., `camelCase` vs. `kebab-case`)?
)?Once you have API guidelines in place, pass through your API and capture any notable deviations in some API maintenance tickets. With defined API guidelines, there is also an opportunity to integrate the guidelines into code review automation to ensure that the guidelines are preserved going forward.
As API endpoints may expand over time, identifying what request body properties or query parameters are actually required can become daunting. It can be incredibly valuable to take a second look at an existing endpoint, test it, and even dig into the underlying code to determine what is truly required. Once the required properties on an endpoint have been identified, ensure that the properties are noted as being required in the API documentation as well.
Some endpoints can carry a lot of responsibility, perhaps even snowballing in scope over time. In particular, endpoints that update resources can have large request body payloads containing multiple related objects, making it difficult to break down and simplify the endpoint.
While CRUD (Create, Read, Update, Delete) does not necessarily match the HTTP methods of REST 1-to-1, the CRUD methodology does provide a widely adopted and straightforward framework for breaking down a resource’s endpoint functionality into a handful of more concise endpoints.
Let's use the example of an update endpoint for a `User` resource that has a `Blog` relationship resource in the request payload.

```json
{
  "email": "string",
  "firstName": "string",
  "lastName": "string",
  "blogs": [
    {
      "id": "string",
      "title": "string",
      "content": "string"
    }
  ]
}
```
The existing endpoint allows an API user to update a `User` while also creating or updating an attached `Blog`.
After answering these questions, a decision could be made to:

- Remove the `Blog` from the `User` update endpoint.
- Introduce separate create and update endpoints for a `Blog` that accept the `User.id` to establish the relationship.

The new create or update endpoints for a `Blog` could then have a payload similar to the following.

```json
{
  "title": "string",
  "content": "string",
  "userId": "string"
}
```
Additionally, it may be valuable to include usage documentation to accompany the new endpoint flow. While there is a case to be made that multiple endpoints could be expensive for paid APIs or less performant, the introduction of new concise endpoints can additionally provide more flexibility to your API and potential integrations.
As development moves forward and edge cases arise, it can be worth considering these tips when refactoring or reviewing API changes.
REST API maintenance is a continuous process. When there is routine attention to the accuracy, relevance, and clarity of existing API endpoints, API users and developers alike can be more confident in the use cases and integrations they create and support.
The Risk Cloud API uses OAuth 2.0 for authentication, which uses a bearer token in the Authorization HTTP header. To start using the API, first retrieve your Client and Secret keys from the Profile page. You can navigate there by clicking the Person icon in the top right corner and then the Profile button.
In the Profile page, go to the "Access Key" tab. If this tab is not there, please contact your Risk Cloud administrator as you may not have API privileges.
In the "Access Key" tab you will see both Client and Secret keys. These are both necessary to generate an access key or retrieve an existing access key.
*Note that this panel also has the ability to generate the Access Key on its own.
Once you have both the Client and Secret keys, they will need to be Base64-encoded with a colon in between them: {CLIENT}:{SECRET}
Once they are encoded, take your encoded string and place it in the Authorization header as Authorization: Basic {ENCODED}
Once the token endpoint is called with the correct Authorization header, a JSON response will be returned mimicking the following structure:
Response:
{ "access_token": "KEY_HERE", "token_type": "bearer", "expires_in": 31532918, "scope": "read write" }
The returned access token can then be used in the Authorization header to interact with the Risk Cloud API:
Authorization: Bearer {ACCESS_TOKEN}
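Putting the pieces together, a minimal Python sketch of the token exchange could look like the following. The token endpoint path used here is an assumption for illustration purposes; substitute the token URL referenced above for your environment.

```python
import base64
import requests

BASE_URL = "https://your-company.logicgate.com"
CLIENT = "{CLIENT}"   # from the Access Key tab on your Profile page
SECRET = "{SECRET}"

# Base64-encode "{CLIENT}:{SECRET}" for the Basic Authorization header
encoded = base64.b64encode(f"{CLIENT}:{SECRET}".encode()).decode()

# NOTE: assumed token endpoint path -- replace with the token URL referenced above
token_response = requests.post(
    f"{BASE_URL}/api/v1/account/token",
    headers={"Authorization": f"Basic {encoded}"},
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# The access token is then sent as a Bearer token on all subsequent API requests
headers = {"Authorization": f"Bearer {access_token}"}
```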
We will start off by assuming an Application and Workflow have been created in Risk Cloud using the Build tools. In this example, we have created an “Onboarding” Application with a Workflow called “Employee". This Workflow has three Steps: “Add Employee”, “Manager Meeting”, and “Active Employee.”
Since the Origin Step in this Workflow is “Add Employee,” we will be using the Risk Cloud API to create a Record in this Step of our Workflow. When our new Record is created in “Add Employee”, we would also like the following Fields in this Step to be populated with values:
Now that we have our Workflow set up, we can interact with the Risk Cloud API to create a Record in “Add Employee”, populate these Fields, and submit the Record to “Manager Meeting”.
To create a Record, we need to start with a POST request with the proper JSON body. The JSON body requires three JSON objects: “step”, “workflow”, and “currentValueMaps”. We will construct our JSON body one object at a time.
Prior to any interaction with Risk Cloud's APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.
First, we need a Step object containing the Step's ID as a key-value pair. This Step ID can be pulled from the browser's URL and should look like this:
https://your-company.logicgate.com/build/steps/{STEP_ID}
We will take this value and input it into our JSON body. Our JSON now looks like the following:
{ "step": { "id": "STEP_ID" } }
Next, we need to fetch the Workflow’s ID.
Type: GET https://your-company.logicgate.com/api/v1/workflows/step/{STEP_ID}
We will take the “id” value as the WORKFLOW_ID and use this to continue to fetch all the Fields in the Workflow using the following endpoint.
Risk Cloud uses currentValueMaps to map values to the proper Fields. Let us create our currentValueMaps object for the first input text value, “Employee Name.”
Type: GET https://your-company.logicgate.com/api/v1/fields/workflow/{WORKFLOW_ID}/values
Now we must parse through the array for the Fields we need and use the ID for our currentValueMap object.
So far our object should look like:
{ "field": { "id": "TEXT_FIELD_ID", "fieldType": "TEXT" } }
We will now need to input the values we want to set for this Field. In the Risk Cloud platform, these are referred to in API terms as currentValues. For non-discrete values (such as text and numeric values) we only need to set the textValue of the currentValue. Our object now looks like the following:
{ "currentValues": [ { "textValue": "John Doe", "discriminator": "Common" } ], "field": { "id": "TEXT_FIELD_ID", "fieldType": "TEXT" } }
Let us similarly set the “Job Type” Field, a discrete-value Select Field. When we fetched the list of Fields above, each Field object had a key called currentValues. These are the value inputs to this Field. For the Select Field (and all other discrete field types) the values in this array are the selectable values for this Field. Those values for this situation are 'Account Executive', 'Developer', and 'Customer Success Manager'.
We will set the value for the Job Type Field to be Developer. Our JSON object should look like the following now:
{ "currentValues": [ { "id": "SELECTED_CURRENT_VALUE_ID", "textValue": "Developer", "discriminator": "Common" } ], "field": { "id": "SELECT_FIELD_ID", "fieldType": "SELECT" } }
Let us put everything together! We should get the following JSON object that is ready to create a new Record with Field inputs.
{ "step": { "id": "STEP_ID" }, "currentValueMaps": [ { "currentValues": [ { "textValue": "John Doe", "discriminator": "Common" } ], "field": { "id": "TEXT_FIELD_ID", "fieldType": "TEXT" } }, { "currentValues": [ { "id": "SELECTED_CURRENT_VALUE_ID", "textValue": "Developer", "discriminator": "Common" } ], "field": { "id": "SELECT_FIELD_ID", "fieldType": "SELECT" } } ] }
Now we can submit this Record with the following endpoint
Type: POST https://your-company.logicgate.com/api/v1/records/public
Body
{ "step": { "id": "STEP_ID" }, "currentValueMaps": [ { "currentValues": [ { "textValue": "John Doe", "discriminator": "Common" } ], "field": { "id": "TEXT_FIELD_ID", "fieldType": "TEXT" } }, { "currentValues": [ { "id": "SELECTED_CURRENT_VALUE_ID", "textValue": "Developer", "discriminator": "Common" } ], "field": { "id": "SELECT_FIELD_ID", "fieldType": "SELECT" } } ] }
From this, we get a response object with information pertaining to the created Record and its submission, including the Record ID, the Record’s current Step, and the creation date. For users with access, the Record will now appear on the Home Screen, ready for “Manager Meeting.”
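For reference, the same Record creation could be scripted with a brief Python sketch using the requests library; the IDs and Access Token are placeholders from the steps above.

```python
import requests

BASE_URL = "https://your-company.logicgate.com"
HEADERS = {"Authorization": "Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

body = {
    "step": {"id": "STEP_ID"},
    "currentValueMaps": [
        {   # non-discrete Text Field: only textValue is needed
            "currentValues": [{"textValue": "John Doe", "discriminator": "Common"}],
            "field": {"id": "TEXT_FIELD_ID", "fieldType": "TEXT"},
        },
        {   # discrete Select Field: reference the selectable value's ID
            "currentValues": [
                {"id": "SELECTED_CURRENT_VALUE_ID", "textValue": "Developer", "discriminator": "Common"}
            ],
            "field": {"id": "SELECT_FIELD_ID", "fieldType": "SELECT"},
        },
    ],
}

response = requests.post(f"{BASE_URL}/api/v1/records/public", json=body, headers=HEADERS)
response.raise_for_status()
print("Created Record:", response.json().get("id"))
```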
In order to properly export Records and their Field data, we first need to gather information on the Layout ID, Application ID, and Workflow ID. Then, we will construct a JSON body with this information to make a proper POST request for exporting Records.
Prior to any interaction with Risk Cloud's APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.
The Layout ID can be obtained by either looking for the ID in the URL when in the Layout’s edit modal or by using the following endpoint.
Type: GET https://your-company.logicgate.com/api/v1/layouts
This will return a list of all Layouts. Now, parse this array of Layouts until you find your Layout, and place the Layout ID into your JSON object:
{ "layout": "LAYOUT_ID" }
The Application ID can be found using the following endpoint:
Type: GET https://your-company.logicgate.com/api/v1/applications/workflows
This will return a list of all active Applications with their Workflows. Similarly to Layout, parse this array until you find your Application and Workflow and add this Application ID and Workflow ID into your JSON object. The JSON object should look like this:
{ "layout": "LAYOUT_ID", "applications": ["APPLICATION_ID"], "workflow": "WORKFLOW_ID" }
Note: The key for Applications is the plural "applications" and is an array of string IDs. Additionally, to export all Records in one Application, across all Workflows in that Application, use a Global Layout and do not specify a Workflow in your JSON body.
With our current JSON body, we will be exporting all Records in the Workflow. What if we wanted to be more granular with our Record selection? Good news!
The next keys in our JSON object, “statuses” and “step”, are optional. These keys allow us to filter to Records with one of the following specific statuses: INACTIVE, NOT_ASSIGNED, ASSIGNED, IN_PROGRESS, COMPLETE, and to Records on a specific Step. The Step ID can be pulled from the browser’s URL. If you decide to use one of these statuses and specify a Step, your JSON object would look like this:
{ "layout": "LAYOUT_ID", "applications": ["APPLICATION_ID"], "workflow": "WORKFLOW_ID", "statuses": ["IN_PROGRESS"], "step": "STEP_ID" }
Now you can use the JSON object above as the body of the request to retrieve the Field information and values for your selected Layout, Application, Workflow, Steps, and Record statuses.
Type: POST https://your-company.logicgate.com/api/v1/records/export/csv
Note: To export as an XLSX document, change “csv” to “xlsx” in the request URL.
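As an end-to-end illustration, the export request could be sent from Python as sketched below; the IDs are placeholders gathered in the steps above, and the optional filters can be removed if not needed.

```python
import requests

BASE_URL = "https://your-company.logicgate.com"
HEADERS = {"Authorization": "Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

body = {
    "layout": "LAYOUT_ID",
    "applications": ["APPLICATION_ID"],
    "workflow": "WORKFLOW_ID",
    "statuses": ["IN_PROGRESS"],  # optional status filter
    "step": "STEP_ID",            # optional Step filter
}

# Change "csv" to "xlsx" in the URL to export an XLSX document instead
response = requests.post(f"{BASE_URL}/api/v1/records/export/csv", json=body, headers=HEADERS)
response.raise_for_status()

with open("records_export.csv", "wb") as f:
    f.write(response.content)
```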
When you submit a Record in Risk Cloud, all of the Field values you have selected or input are saved on that Record. In this article we will learn how to update a specific Field's value in a specific Record using the Risk Cloud API. In this example we will cover how to update a Select Field. The API requests & responses seen in this article will differ slightly based on the Field type that is being updated.
Within a Step, we have a Field named "Severity." Severity has selectable values of "Low," "Medium," and "High."
Let's assume that you have created a Record and selected a severity of "Medium," but would like to change that to "High." We are able to do this with some requests to the Risk Cloud API.
First, we must obtain the values already on the Record, which can be done via the following GET request.
Note: The "record_id" to use in your GET request will be the unique string of numbers and letters in the record URL. In our case, the URL of the record we would like to update is https://your-company.logicgate.com/records/srAIdk3c. The "record_id" we will use is "srAIdk3c."
Prior to any interaction with Risk Cloud’s APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.
Response:
{ "srAIdk3c": { "id": "k9IYrkst", "active": true, "created": 1554232752610, "updated": 1554233219059, "step": null, "user": null, "currentValues": [ { "discriminator": "Common", "id": "ziJtKBiZ", "active": true, "created": 1554232752610, "updated": null, "valueType": "Common", "textValue": "Medium", "numericValue": 1, "isDefault": false, "archived": false, "priority": 2, "empty": false, "default": false, "fieldId": null } ], "field": { "fieldType": "SELECT", "id": "srAIdk3c", "active": true, "created": 1554228320342, "updated": 1554232752610, "name": "Severity", "label": "Severity Level", "tooltip": null }, "record": null, "node": null, "expressionResult": 2, "assignment": null } }
The response is a key value pair where the key is the ID of the field and the value is the selected value. The most important part of this response is the currentValues array. The object inside this array is what is currently selected, and what we need to update.
Because we are updating a "select" field, we should first understand what all of our options are! We can do this by submitting a GET request to the "field" endpoint. You can find your field_id in the response above within the "field" object, or by calling the fields/workflow/WORKFLOW_ID endpoint.
Response:
{ "fieldType": "SELECT", "id": "srAIdk3c", "name": "Severity", "label": "Severity Level", "tooltip": null, "currentValues": [ { "discriminator": "Common", "id": "IXxbj7uk", "valueType": "Common", "textValue": "Low", "numericValue": 1, "isDefault": false, "archived": false, "priority": 3, "empty": false, "default": false, "fieldId": "srAIdk3c" }, { "discriminator": "Common", "id": "ziJtKBiZ", "valueType": "Common", "textValue": "Medium", "numericValue": 1, "isDefault": false, "archived": false, "priority": 2, "empty": false, "default": false, "fieldId": "srAIdk3c" }, { "discriminator": "Common", "id": "fwy1ntpD", "valueType": "Common", "textValue": "High", "numericValue": 1, "isDefault": false, "archived": false, "priority": 1, "empty": false, "default": false, "fieldId": "srAIdk3c" } ], ... }
The currentValues array contains all of the selectable options for the Severity Select Field in the form of objects. We can choose any object in the array to be our new selected value, and for this example we will be choosing the value of "High."
In the following POST request, use the "record_id" for the specific record that you want to update.
Request:
{ "id": "k9IYrkst", "active": true, "created": 1554232752610, "updated": 1554233219059, "step": null, "user": null, "currentValues": [ { "discriminator": "Common", "id": "fwy1ntpD", "valueType": "Common", "textValue": "High", "numericValue": 1, "isDefault": false, "archived": false, "priority": 1, "empty": false, "default": false, "fieldId": "srAIdk3c" } ], "field": { "fieldType": "SELECT", "id": "srAIdk3c", "active": true, "created": 1554228320342, "updated": 1554244721428, "name": "Severity", "label": "Severity Level", "tooltip": null }, "record": null, "node": null, "expressionResult": 2, "assignment": null }
Notice that we have replaced the object in the currentValues array with the value object for "High." This serves to update the selected value from our original value of "Medium" to our new desired value of "High."
Response:
{ "id": "k9IYrkst", "currentValues": [ { "discriminator": "Common", "id": "fwy1ntpD", "valueType": "Common", "textValue": "High", "numericValue": 1, "isDefault": false, "archived": false, "priority": 1, "empty": false, "default": false, "fieldId": null } ], "field": { "fieldType": "SELECT", "id": "srAIdk3c", "name": "Severity", "label": "Severity Level", "tooltip": null }, "node": { "stepType": "End", "id": "lkePaPYj" }, "expressionResult": 10 }
We can see the severity level has been updated to "High."
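To summarize the flow, here is a hedged Python sketch of the update. The exact request URLs are not shown in this article, so the Field lookup below reuses the fields/workflow endpoint mentioned earlier, and the final POST assumes the valueMaps endpoint that appears later in this guide; substitute the requests described above for your environment.

```python
import requests

BASE_URL = "https://your-company.logicgate.com"
HEADERS = {"Authorization": "Bearer {ACCESS_TOKEN}"}

RECORD_ID = "RECORD_ID"      # from the Record's URL, e.g. "srAIdk3c"
WORKFLOW_ID = "WORKFLOW_ID"  # Workflow containing the Severity Field

# Find the Severity Field and its selectable options via the Workflow's Field list
fields = requests.get(
    f"{BASE_URL}/api/v1/fields/workflow/{WORKFLOW_ID}/values", headers=HEADERS
).json()
severity = next(f for f in fields if f["name"] == "Severity")
high = next(v for v in severity["currentValues"] if v["textValue"] == "High")

# Build the updated value map with the "High" option selected
value_map = {
    "currentValues": [high],
    "field": {"fieldType": "SELECT", "id": severity["id"]},
}

# Post the update to the Record (valueMaps endpoint as used for attachments later in this
# guide; treat the exact path as an assumption and substitute the POST request shown above)
response = requests.post(
    f"{BASE_URL}/api/v1/valueMaps",
    params={"record": RECORD_ID},
    json=value_map,
    headers=HEADERS,
)
response.raise_for_status()
```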
For more information about the Risk Cloud API you can read our Developer Center.
Within Risk Cloud, you are able to add “Attachment” Fields to your Records. These Fields allow you, perhaps very obviously, to attach files. Customers use these Fields in order to upload evidence, add documents for employee attestation, and many additional use cases. In this article, we will walk through three steps needed to attach a document using Risk Cloud API:
Obtain the FIELD_ID where you would like to upload an attachment
Upload a file using a POST request to https://your-company.logicgate.com/api/v1/attachments?field=FIELD_ID
Attach the file to your specific record using a POST request to https://your-company.logicgate.com/api/v1/valueMaps?record=RECORD_ID
Prior to any interaction with Risk Cloud’s APIs we will need to set the authorization header. Instructions on how this can be accomplished can be found here.
In the first step, we will be running a series of requests in order to determine the FIELD_ID where we would like to upload our attachment. If you already know your FIELD_ID you may continue to step two.
First, we need to determine the WORKFLOW_ID of the workflow that contains our field. To do this, you can send the following GET request:
https://your-company.logicgate.com/api/v1/workflows
This will return an array of workflow objects, each looking like this:
{ "id": "WORKFLOW_ID", "name": TABLE REPORT NAME, "recordPrefix": null, "allowGroups": false, "requireGroups": false, "xpos": 177, "ypos": 156, "priority": 0, "sla": { "enabled": false, "duration": 0 }, "steps": [ { "stepType": "Origin", "id": "xt2X0dSM", "name": "Default Origin", "stepType": "Origin", "priority": 1, "allowEntitlements": true, "xpos": 55, "ypos": 55, "isPublic": false, "sla": { "enabled": false, "duration": 0 }, "chain": false, "origin": true, "end": false }, { "stepType": "End", "id": "Y5B1k7yq", "name": "Default End", "stepType": "End", "priority": 2, "allowEntitlements": true, "xpos": 200, "ypos": 55, "isPublic": false, "sla": { "enabled": false, "duration": 0 }, "chain": false, "origin": false, "end": true } ] }
After identifying the Workflow that contains the Field you would like to add an attachment to, you can take the “id” from this object as your WORKFLOW_ID.
Now that we have our WORKFLOW_ID, we can send a request to find the specific Field where we want to add an attachment. To do this, we will send the following GET request:
https://your-company.logicgate.com/api/v1/fields/workflow/WORKFLOW_ID/values
This request will return an array of field objects, similar to this object:
{ "fieldType": "TEXT_AREA", "id": "FIELD ID", "name": "text1", "label": "text1", "tooltip": null, "currentValues": [], "operators": [ "NULL", "NOT_NULL", "EQUALS", "NOT_EQUALS", "CONTAINS", "DOES_NOT_CONTAIN" ], "convertibleTo": [ "TEXT" ], "pattern": null, "message": null, "hasHtml": false, "fieldType": "TEXT_AREA", "valueType": "Common", "validTypeForCalculationInput": false, "discrete": false, "global": false }
Once you identify the Field where you would like to add an attachment, you can take the “id” value as your FIELD_ID for the subsequent steps.
In this step, we will use the FIELD_ID found in step one to upload our attachment. You will need to create a binary multi-part request, with the form data containing the attachment file and file name.
Once you have built this body, you can send it using the following POST request:
https://your-company.logicgate.com/api/v1/attachments?field=FIELD_ID
The response should look like this:
{ "attachmentStatus": "CLEAN", "id": "QoZy9k73", "valueType": "Attachment", "discriminator": "CLEAN", "textValue": "FILE NAME", "numericValue": 1.0, "isDefault": false, "archived": false, "priority": 0, "attachmentStatus": "CLEAN", "contentType": "image/png", "fileSize": NUMBER, "fileExtension": "png", "originalFileExtension": "png", "awsS3Key": "S3 KEY", "versionCount": 1, "empty": false, "fieldId": "EbfvwDRi" }
In this final step, we will compile the information from our previous two steps in order to attach our upload to the specific record that we are interested in. We will build our POST request’s body using the following structure:
{ "active": true, "currentValues": [ RESPONSE FROM STEP 2 ] ], "field": { "active": true, "valueType": "Attachment", "fieldType": "ATTACHMENT", "id": "FIELD_ID" } }
Once you build the above body, send the following POST request:
https://your-company.logicgate.com/api/v1/valueMaps?record=RECORD_ID
The response should look like this:
{ "id": "uexgD8Ej", "currentValues": [ { "discriminator": "CLEAN", "id": "QoZy9k73", "valueType": "Attachment", "discriminator": "CLEAN", "textValue": "TEXT", "numericValue": 1.0, "isDefault": false, "archived": false, "priority": 0, "attachmentStatus": "CLEAN", "contentType": "image/png", "fileSize": 33517, "fileExtension": "png", "originalFileExtension": "png", "awsS3Key": "S3 KEY", "versionCount": 1, "empty": false, "fieldId": null } ], "field": { "fieldType": "ATTACHMENT", "id": "EbfvwDRi", "name": "attachment", "label": "attachment", "tooltip": null, "enableVersions": true, "validTypeForCalculationInput": false }, "expressionResult": 1.0 }
After sending this final POST request, your attachment should be attached to your specified Record and Field.
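Both requests from steps two and three could be scripted together as in the Python sketch below; the file name, Field ID, and Record ID are placeholders, and the multipart form field name is an assumption since the exact form layout is not shown above.

```python
import requests

BASE_URL = "https://your-company.logicgate.com"
HEADERS = {"Authorization": "Bearer {ACCESS_TOKEN}"}

FIELD_ID = "FIELD_ID"    # Attachment Field ID from step one
RECORD_ID = "RECORD_ID"  # Record the file should be attached to

# Step two: upload the file as multipart form data ("file" form field name is an assumption)
with open("evidence.png", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/api/v1/attachments",
        params={"field": FIELD_ID},
        files={"file": ("evidence.png", f)},
        headers=HEADERS,
    )
upload.raise_for_status()
attachment = upload.json()  # the Attachment value object returned by the upload

# Step three: attach the uploaded file to the Record via a value map
body = {
    "active": True,
    "currentValues": [attachment],
    "field": {"active": True, "valueType": "Attachment", "fieldType": "ATTACHMENT", "id": FIELD_ID},
}
attach = requests.post(
    f"{BASE_URL}/api/v1/valueMaps",
    params={"record": RECORD_ID},
    json=body,
    headers=HEADERS,
)
attach.raise_for_status()
```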
For any additional questions, please reach out to [email protected]!
This article details three endpoints for obtaining access logs for All Login Attempts, Successful Logins, and Login Failures. The results from these endpoints are only accessible to access keys belonging to users with the Admin > All module entitlement.
Retrieve a log of login successes and failures for a Risk Cloud user, using their email.
Parameters
Result
A paginated response of all login logs ordered from newest to oldest containing the following info:
Retrieve a log of successful login attempts for all users.
Parameters
Result
A paginated response of all login logs ordered from newest to oldest containing the following info:
Retrieve a log of failed login attempts.
Parameters
Result
A paginated response of all login logs ordered from newest to oldest containing the following info:
After loading/importing the Power BI Template file, LogicGate_EXAMPLE_Extract to PowerBI.pbit (reach out to [email protected] for the file), you will see a screen that looks like the below. You will need to enter (1) your OAuth 2.0 client; (2) your OAuth 2.0 secret; (3) your Risk Cloud environment URL; and (4) the Table Report ID you would like to extract data from.
To find the Client and Secret within Risk Cloud, navigate to your User Profile via the User icon at the top-right corner of your screen. There, flip to the Access Key tab and you will see your Client and Secret.
You can obtain your Table Report ID from the URL in your browser window after navigating to a Table Report, as shown below.
Note: The Table Report ID is the last eight characters of the URL. In the image above this is D7r2TCSR (this will be different for your Table Report).
Once you have all that information, you can then enter it into the Power BI template similar to the below:
From there, it will load all your table report data into a table in Power BI.
Lastly, you can use that information to build reports, add additional data sources from other internal systems, and more!
Use Risk Cloud Webhooks to enhance your custom integrations by sending event data to your external systems, allowing them to detect Risk Cloud events and perform custom operations.
Make your custom integrations more responsive and integrated with Risk Cloud Webhooks. This feature gives you the ability to send event data from Risk Cloud to an external URL via an HTTP request when a triggering event occurs in Risk Cloud.
Setting up Risk Cloud Webhooks can be accomplished in the following steps:
Work with your relationship manager or customer success manager to enable Risk Cloud Webhooks in your environment. NOTE: Risk Cloud Webhooks may need to be added to your Risk Cloud subscription agreement.
Configure the external webhook URLs that you would like to send data to.
Create jobs in Risk Cloud with your desired triggering event and use the new webhook operation to send data to your specified URL.
Once Risk Cloud Webhooks has been enabled in your environment, you will be able to add webhook URLs from the Admin > Integration page. Clicking “Configure Integration” will bring up a modal where you are able to add webhook URLs.
Make sure to give your webhook URLs recognizable names, as this is how they will be referenced when you create a Job.
When you click “Save Webhook URL,” we will attempt to call the provided URL with a standard GET request.
If successful, your webhook will be saved and you will be presented with a one-time secret key. This key is presented only once, and can be used to ensure that data is coming from Risk Cloud.
Now that you have configured one or more webhook URLs, you can begin adding webhook job operations. You can learn more about creating jobs in this help article.
Once you have specified your trigger and an optional message, you will want to select the “webhook” operation. Once this operation is selected, you can specify which webhook URL should be sent data when the job is triggered. We will show you an example of what data is being sent based on the workflow/trigger that you have selected.
NOTE: No custom field data will be sent with Risk Cloud Webhooks. We are only sending event data and record/workflow identification data.
When you save your job everything will be ready for Risk Cloud to start sending event data via webhooks.
Reach out to [email protected] for additional support or your relationship manager to enable this feature.
Prior to any interaction with Risk Cloud's APIs we will need to obtain an Access Token for the Authorization header. Instructions on how the Access Token can be obtained can be found here.
When working with the Risk Cloud via the API, it is common to require IDs for entities such as Applications, Workflows, and Steps.
The endpoint described below returns an array of all Applications in your environment, including their Workflows and Steps. The endpoint provides important ID data for Applications, Workflows, and Steps that can be used to interact with the API further, such as using a Step ID to create Records or a Workflow ID to get a list of Fields.
To obtain a list of all Applications, Workflows, and Steps in your environment, make the following request.
curl --request GET 'https://your-company.logicgate.com/api/v1/applications?generic=true' \ --header 'Authorization: Bearer {ACCESS_TOKEN}'
The response will contain an array of all Applications. Application, Workflow, and Step IDs can be located as the values for id properties for usage in future API requests.
[ { "active": true, "color": "string", "copied": true, "created": "2019-08-24T14:15:22Z", "homeScreen": { "active": true, "application": {}, "created": "2019-08-24T14:15:22Z", "id": "string", "tableReports": [ null ], "updated": "2019-08-24T14:15:22Z" }, "icon": "fa-bookmark", "id": "string", "imported": true, "live": true, "name": "string", "permissionsEnabled": true, "type": "string", "updated": "2019-08-24T14:15:22Z", "workflows": [ { "active": null, "allowGroups": null, "application": null, "applicationId": null, "created": null, "fields": null, "id": null, "name": null, "primaryField": null, "priority": null, "recordPrefix": null, "requireGroups": null, "sequence": null, "sla": null, "steps": null, "updated": null, "userGroups": null, "workflowMaps": null, "workflowType": null, "xpos": null, "ypos": null } ] } ]
Prior to any interaction with Risk Cloud's APIs we will need to obtain an Access Token for the Authorization header. Instructions on how the Access Token can be obtained can be found here.
When working with the Risk Cloud via the API, it is common to require IDs for Fields for accomplishing tasks such as updating Records.
The following endpoint will return an array of Field objects that exist within a given Workflow.
Obtaining all Fields of a Workflow can be accomplished in two steps:
Obtaining a Workflow ID
Requesting the Workflow's Fields
To obtain Workflow IDs in your environment (more information on this endpoint can be found in Viewing Applications, Workflows, and Steps), make the following request.
curl --request GET 'https://your-company.logicgate.com/api/v1/applications?generic=true' \ --header 'Authorization: Bearer {ACCESS_TOKEN}'
The response will contain an array of all Applications. Workflow IDs can be located as the values for id properties on the objects within the "workflows" array of that JSON.
[ { ... "workflows": [ { ... "id": null } ] } ]
Now that you have obtained a Workflow ID, you can obtain a list of all Fields on that Workflow.
curl --request GET 'https://your-company.logicgate.com/api/v1/fields/workflow/{workflowId}/values' \ --header 'Authorization: Bearer {ACCESS_TOKEN}'
The response will contain a list of all Fields that exist within the given Workflow. Their Field IDs can be used for updating Records or viewing current Field values.
[ { "active": true, "convertibleTo": [ "string" ], "created": "2019-08-24T14:15:22Z", "currentValues": [ { "active": null, "archived": null, "created": null, "defaultField": null, "discriminator": null, "empty": null, "field": null, "fieldId": null, "id": null, "idOrTransientId": null, "isDefault": null, "numericValue": null, "priority": null, "textValue": null, "transientIdOrId": null, "updated": null, "valueType": null } ], "defaultValues": [ { "active": null, "archived": null, "created": null, "defaultField": null, "discriminator": null, "empty": null, "field": null, "fieldId": null, "id": null, "idOrTransientId": null, "isDefault": null, "numericValue": null, "priority": null, "textValue": null, "transientIdOrId": null, "updated": null, "valueType": null } ], "discrete": true, "fieldType": "TEXT", "global": true, "id": "string", "label": "string", "labels": [ "string" ], "name": "string", "operators": [ "EQUALS" ], "tooltip": "string", "updated": "2019-08-24T14:15:22Z", "validTypeForCalculationInput": true, "valueType": "string", "workflow": { "active": true, "allowGroups": true, "application": {}, "applicationId": "string", "created": "2019-08-24T14:15:22Z", "fields": [ null ], "id": "string", "name": "string", "primaryField": {}, "priority": 0, "recordPrefix": "string", "requireGroups": true, "sequence": {}, "sla": {}, "steps": [ null ], "updated": "2019-08-24T14:15:22Z", "userGroups": [ null ], "workflowMaps": [ null ], "workflowType": "[", "xpos": 0, "ypos": 0 }, "workflowId": "string" } ]
Prior to any interaction with Risk Cloud API we will need to obtain an Access Token for the Authorization header. Instructions on how the Access Token can be obtained can be found here.
Listing all Users via the Risk Cloud API requires an Access Token from an Admin Primary account.
When working with the Risk Cloud via the API, it is common to require User IDs for accomplishing tasks ranging from enabling and disabling Users to assigning Users to Records.
The following endpoint will return an array of all Users in your Risk Cloud environment.
To obtain a list of all Users in your environment, make the following request.
curl --request GET 'https://your-company.logicgate.com/api/v1/users' \ --header 'Authorization: Bearer {ACCESS_TOKEN}'
The response will contain an array of all Users in your environment, the IDs of which can be located as the values for `id` properties for usage in future API requests.
[ { "active": true, "convertibleTo": [ null ], "created": "2019-08-24T14:15:22Z", "currentValues": [ null ], "defaultValues": [ null ], "discrete": true, "fieldType": "[", "global": true, "id": "string", "label": "string", "labels": [ null ], "name": "string", "operators": [ null ], "tooltip": "string", "updated": "2019-08-24T14:15:22Z", "validTypeForCalculationInput": true, "valueType": "string", "workflow": {}, "workflowId": "string", "allowLocalLogin": true, "applicationEntitlements": [ null ], "archived": true, "autoprovisioned": true, "company": "string", "defaultField": {}, "disabled": true, "discriminator": "string", "email": "string", "empty": true, "external": true, "field": {}, "fieldId": "string", "first": "string", "idOrTransientId": "string", "imageUrl": "string", "intercomHash": "string", "isDefault": true, "languageTag": "string", "last": "string", "lastLogin": {}, "locked": true, "loginAttempts": 0, "modulePermissionSets": [ null ], "notificationPreference": true, "numericValue": 0, "password": "string", "priority": 0, "records": [ null ], "resetPasswordToken": "string", "roles": [ null ], "scimStatus": "string", "sendEmail": true, "serviceAccount": true, "status": "string", "stepPermissionSets": [ null ], "superUser": true, "textValue": "string", "tier": "[", "timeZone": "string", "transientIdOrId": "string" } ]
Prior to any interaction with Risk Cloud API we will need to obtain an Access Token for the Authorization header. Instructions on how the Access Token can be obtained can be found here.
Creating a User via the Risk Cloud API requires an Access Token from an Admin Primary account.
In order to create Users in your environment via the Risk Cloud API, we will need to assemble the JSON of the User for an API POST request.
The Create User endpoint can be helpful for integrations that automate the onboarding of new colleagues or teams to the Risk Cloud.
Creating a User via the Risk Cloud API can be accomplished in two steps:
Configure the User in JSON
Create the User via a request
Below is a sample JSON body of a User to be created.
{ "active": true, "status": "Active", "tier": "SECONDARY", "valueType": "User", "sendEmail": false, "email": "[email protected]", "first": "FirstName", "last": "LastName", "company": "Your Company" }
The properties of `tier`, `sendEmail`, `email`, `first`, `last`, and `company` should be adjusted for the User you will be creating.

The `sendEmail` property is important in that, if it is `true`, the system will send an automatic Welcome Message after the User is created.

Additionally, the `tier` property designates the User's access tier. Values can be:

- "PRIMARY" - Primary users are users who have access to the Build section of the app (these are typically Admin users).
- "SECONDARY" - Secondary users are users without access to the Build section (these are typically end-users).
- "LIMITED" - Limited users are secondary users who only use the platform infrequently (these are typically end-users performing quarterly or annual tasks).

Once the JSON for the User you'd like to create has been assembled, you can create the User by placing the JSON in the following request.
curl --request POST 'https://your-company.logicgate.com/api/v1/users' \ --header 'Content-Type: application/json' \ --header 'Authorization: Bearer {ACCESS_TOKEN}' \ --data-raw '{ "active": true, "status": "Active", "tier": "SECONDARY", "valueType": "User", "sendEmail": false, "email": "[email protected]", "first": "FirstName", "last": "LastName", "company": "Your Company" }'
If successful, the User will be created and the response will contain the new User's information as shown below, including the User ID which can be used for future API requests.
{ "status":"Active", "id":"a4b3c2d1", "active":true, "created":1629383622871, "updated":1629383622932, "email":"[email protected]", "company":"Your Company", "imageUrl":null, "imageS3Key":null, "status":"Active", "tier":"SECONDARY", "first":"FirstName", "last":"LastName", "languageTag":"en-GB", "timeZone":"Europe/Kiev", "notificationPreference":false, "mfaEnabled":false, "mfaSetup":false, "autoprovisioned":false, "scimStatus":null, "sendEmail":false, "roles":[], "stepPermissionSets":[], "applicationEntitlements":[], "records":[], "lastLogin":null, "external":false, "superUser":false, "name":"FirstName LastName", "locked":false, "idOrTransientId":"a4b3c2d1", "transientIdOrId":"a4b3c2d1", "empty":false }
At LogicGate, we are always striving to be better. As we continue introducing new features, we also seek to make our app accessible for all users. Can people complete records using only a keyboard? Are images and button contexts read aloud to screen readers? As a team, we recently began asking similar questions and saw room for improvement. If we test across a range of browsers, we should also cater to a range of users. This year, we began a new accessibility initiative and have learned tremendously in the process.
With forms, notifications, reporting, and more, it’s tricky to know what to tackle first when auditing the platform. That said, we identified a couple things to help us get started with an audit.
Let’s start with the most trafficked page. Our records page has the most users completing fields and assigning tasks. Instead of spreading efforts across the entire site, we can learn more efficiently if we improve one part of the app then expand our scope as we triage.
There are countless accessibility tools, but a few stood out that we used throughout our efforts. From the Lighthouse tool built directly into Chrome’s DevTools, to the “Tab Stops” feature of Microsoft Accessibility Insights, each tool has interesting yet distinct features.
We adopted WebAIM’s WAVE tool as our standard for accessibility audits. WAVE has a Chrome and Firefox extension to scan a page immediately. The tool also works without initiating a refresh, providing feedback for modals or other visuals that must be triggered with a button click.
Accessibility tools are amazing, but they aren’t a catch-all. An `aria-label` can sometimes be more descriptive or succinct, and it’s easy to overlook notifications or expanding sections that may go unannounced when a page changes states.
After the first few audits, we began seeing three easy-to-fix errors that frequently surfaced, missing `aria-label` attributes among them.
We recently decided to remove placeholder text from text inputs as they are often the source of color contrast warnings. Instead, we began placing descriptive labels above inputs to guide visitors.
Keeping dependencies up-to-date can also result in accessibility improvements. On the frontend, we use Ng Bootstrap to provide Angular widgets. With each version bump, Ng Bootstrap often introduces accessibility improvements and fixes. Keeping these dependencies up-to-date is a good idea to stay on top of third-party patches.
As developers, we must ask questions when reviewing new features.
Ex: Consider using `aria-live="polite"` for notifications that appear on screen and are important to announce to a user.
These are just a couple questions to ask in addition to running a tool like WAVE to check for regressions. However, for all of our planning and auditing, we learn most by asking questions and engaging the wider accessibility community.
We’ve found a wealth of knowledge from podcasts such as A11y Rules by Nicolas Steenhout and Mosen at Large by Jonathan Mosen. We follow Deque Systems, who frequently host A11y workshops and conferences. The Deque community is incredibly robust, tremendously helpful, and constantly evolving. We also chat with our stakeholders and clients who help determine ways to improve our app. We do not have all the answers but continue to learn through constant conversation.
As we expand the scope of our audit, we continue to evolve. We remain reflective in finding creative solutions that benefit everyone taking part in the Risk Cloud. We found that a recent feature we added to improve accessibility provided unnecessary tab stops for visitors, so we questioned our process, readdressed the problem, and found a solution that improves accessibility while removing needless tabs.
We hope to apply the skills we’ve learned through an initial audit to further refine the rest of the app and new tools we introduce. As a team, we share ideas between developers, designers, and the larger LogicGate community to keep a critical eye on our process and continually improve the platform.
Interviews are stressful. From finding time to meet a slew of people with different titles, to handling a dreaded technical curveball, interviewing can feel like a full-time job, except one where you don’t get paid. Amidst all of this, you’re trying to ask the right questions to determine if you’ll want to be a member of the team six months after signing the acceptance letter. At the very least, knowing what to expect would take some stress out of the interview process.
At LogicGate, we want you to be prepared every step of the way: from your first chat with a team member, to your final onsite. We figure the best way to prepare is to know exactly what we’ll ask, so consider this a crib sheet for your interview journey. What can you expect from your first day on the job to your one-year anniversary and beyond? While we can’t create a time machine to look at your one-year anniversary, we can describe what we look for in employees and the culture we provide at LogicGate.
Aside from bug-smashing and coding skills, we look for engineers who are considerate, curious, and collaborative. Being a considerate engineer doesn’t just mean organizing variables alphabetically with meaningful names. While we appreciate taking the time to clean up code, a thoughtful engineer considers the user and recognizes how every line of code committed helps solve a larger business problem.
We also look for engineers who anticipate problems before they occur and are happy to research solutions that could improve our team’s efficiency. When the answer isn’t obvious, are they willing to reach out for help, jump on a call to pair, or message a channel for clarification?
While we appreciate coding capabilities and prowess in certain areas of the stack, we are just as closely looking for how a candidate helps enhance our six core values. We hope that anyone joining our team strengthens our commitment to these values as they grow into their position.
Our goal is to have a breezy interview process, especially considering candidates use their free time to apply. We aim for transparency while being careful not to waste anyone’s time.
A team member will reach out to you for a casual chat, usually no more than 30 minutes. While chatting, communication is key. We look for engineers who strengthen our core values, which are integral parts of our organization. Have you embraced curiosity by trying out new testing utilities? Have you done the right thing by taking ownership of a mistake you made in the past?
Most importantly, what are you looking for? Everyone has a different vision of the ideal workplace. We’d like to hear what motivates you in your career — whether that’s thoughtful perks or opportunities to learn. Finally, do you see LogicGate as a place where you can thrive? If so, we’re happy to be a potential next step in your journey.
Don’t worry, we won’t be asking you to pseudo-code Dijkstra’s algorithm or tell us how to set up CD variables. This is a two-way conversation between you and a member of our engineering team, so feel free to show off and name some technologies! When you’re met with a challenging problem, what are some tools you’ve used?
We also want to hear how you like to work with other team members. Do you prefer to jump on a call and chat about technical issues, write a bulleted list of edge cases, or perhaps you appreciate starting a thread with other engineers? One of our values at LogicGate is to be as one. We hope to discover the skills you bring to LogicGate that help strengthen and empower our growing development team.
You’ll then receive a take-home challenge catered to the role you applied for. We haven’t slipped any hidden bugs into the code to make you squirm. Instead, we want to see how you tackle problem solving. We hope these challenges highlight your skills without wasting time with unnecessary fluff.
Overall, we’re looking for engineers who are considerate, curious, and collaborative, and who strengthen our core values.
The final step of our process is an onsite, which may or may not happen in our Chicago office. This is the first time you’ll get to see our app in action. Many of us hadn’t heard of GRC before starting at LogicGate, so this is a good opportunity to ask how our app helps empower customers to solve their unique challenges.
As you meet more members of the team, we’ll revisit the technical competencies and core values from earlier calls. We’d also like to hear your thoughts on the technical challenge. What was your thought process when solving the challenge? After submission, did you consider another approach that might have worked?
We’re also available to answer any questions about working at LogicGate: what perks do we offer, how closely do we collaborate, why do we have a goat for a mascot?
We recognize LogicGate is also being interviewed, so we welcome any questions that come to mind. Overall, we hope you finish this step with a good idea of what we do and how we operate. If any question remains unanswered, feel free to reach out to a member of our team.
We want to get you involved as soon as possible. While some of the first week is spent onboarding, you’ll be greeted with several “easy win” tickets to get your feet wet without drowning in tasks.
As your knowledge of our app grows, you’ll tackle more challenges and become familiar with your squad’s responsibilities. Over the following months, small wins become larger victories, and you’ll begin touching new parts of the app or stack, should you desire. We definitely want our candidates to explore their interests and embrace curiosity.
We embrace the agile flow at LogicGate, which you’ll notice from the daily stand-up and ticket pointing to the retrospective at the end of each sprint. We also encourage pairing with one another, even in our remote-first environment. All our developers, project managers, etc. work collaboratively and are quick to jump on a call with one another to solve a bug, clean up some logic, or figure out how to implement a user story.
Using the crawl, walk, run approach also helps us develop new features. Why create a monstrous new set of changes in one fell swoop when we can disassemble a feature into smaller pieces? This helps our entire engineering team, from frontend engineers to QA testers, develop, implement, and sign off on new features.
Find out more about our open positions here.
What does the future hold for a given Record? Now there is a way to tell! With the new Upcoming Job Runs by Record endpoint, API users can get a glimpse of a Record's upcoming Job runs.
Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.
Release the cat memes! Image support in the Rich Text portions of the Risk Cloud platform has been expanded, and API users can now retrieve and upload images via new endpoints.
Have you ever been curious how active a particular Risk Cloud Workflow is? The GET /api/v1/audit/records endpoint now accepts a Workflow ID, allowing users to filter retrieved Record Audits by Workflow. Timestamps are expected in milliseconds.
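As a rough illustration only, a filtered request might look something like the TypeScript sketch below. The query-parameter names (workflowId, startDate, endDate) are assumptions rather than confirmed API parameters, and YOUR_DOMAIN stands in for your environment's base URL.

// Hypothetical sketch: fetch Record Audits for one Workflow within a date range.
// Query-parameter names are assumptions, not confirmed API parameters.
async function fetchRecordAudits(token: string, workflowId: string): Promise<unknown> {
  const startDate = Date.parse("2021-01-01T00:00:00Z"); // epoch milliseconds
  const endDate = Date.now();                           // epoch milliseconds

  const params = new URLSearchParams({
    workflowId,                   // hypothetical parameter name
    startDate: String(startDate), // hypothetical parameter name
    endDate: String(endDate),     // hypothetical parameter name
  });

  const response = await fetch(`https://YOUR_DOMAIN/api/v1/audit/records?${params}`, {
    headers: { Authorization: `Bearer ${token}` }, // OAuth 2.0 bearer Access Token
  });
  return response.json();
}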
Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.
Favorites are here! Users are now able to show some love to their favorite Records, Dashboards, and Reports.
New endpoints empower API users to manage and search their Favorites, including by type (e.g. Record, Dashboard, TableReport, VisualReport).
The sun gently sets as the PUT /api/v1/records/due-date endpoint is deprecated as of v2021.2.0. API users can now migrate to PATCH /api/v1/records/{recordId}/due-date as the replacement for this endpoint.
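As a minimal sketch of the migration in TypeScript: the path and HTTP methods come from the release note above, but the request-body shape and the dueDate field name are assumptions, so consult the API documentation for the exact payload.

// Hypothetical sketch: update a Record's due date via the replacement endpoint.
// Before (deprecated as of v2021.2.0): PUT /api/v1/records/due-date
// After: PATCH /api/v1/records/{recordId}/due-date
async function updateDueDate(token: string, recordId: string, dueDateMillis: number): Promise<unknown> {
  const response = await fetch(`https://YOUR_DOMAIN/api/v1/records/${recordId}/due-date`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ dueDate: dueDateMillis }), // hypothetical body shape
  });
  return response.json();
}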
Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.
Searching for Records of a particular Workflow? Summaries of record data can now be aggregated by Workflows and even Steps via the GET /api/v1/records/search/summarize endpoint.
No resume or CV necessary! Job history can now be obtained via the new GET /api/v1/jobs/history endpoint, which allows API users to retrieve historical information for a given job, including statuses, trigger dates, and more.
Check out our API Documentation for more usage information on all of the Risk Cloud's API endpoints.
Debugging is an essential part of writing code. During my time here at LogicGate I’ve learned a ton of helpful tips and tricks (big shout out to my teammates!) that make debugging on the Frontend more efficient. Below, I share two of my favorite debugging tips that have saved me time and headache.
We have almost 800 Frontend unit tests for our code base. You can imagine not wanting to have to run the entire test suite when adding a new test or updating an existing one. Luckily we use Jasmine and the library has a pretty simple — but not obvious — way to handle this.
The solution here? A simple f! That is correct, all you need to do is put an f in front of a describe or it block.
Here’s a block of test code:
describe('Component: Table Report', () => {
  it('should render', () => {
    expect(component).toBeDefined();
  });
});
To run the entire block, you would just use fdescribe:
fdescribe('Component: Table Report', () => {
  it('should render', () => {
    expect(component).toBeDefined();
  });
});
To run an it block individually, you can also use fit:
describe('Component: Table Report', () => {
  fit('should render', () => {
    expect(component).toBeDefined();
  });
});
Simple, right? One caveat here is that if you run a single test this way, it is easy to forget to remove the f from your code. The dev team at LogicGate has handled this by creating a linting rule that disallows fdescribe and fit as function names, and by running our linter as part of our CI pipeline.
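The post doesn’t show the rule itself, but one way to approximate it is ESLint’s built-in no-restricted-globals rule, since fdescribe and fit are globals. This is only a sketch and not necessarily the exact rule our team uses:

// .eslintrc.js (sketch): fail linting whenever a focused Jasmine test sneaks into a commit.
module.exports = {
  rules: {
    'no-restricted-globals': [
      'error',
      { name: 'fdescribe', message: 'Remove focused tests before committing.' },
      { name: 'fit', message: 'Remove focused tests before committing.' },
    ],
  },
};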
(Note: I am using Google Chrome in all of the following screenshots. Other browsers may have similar functionality, but for the purpose of this blog I will only discuss Chrome.)
While our reliable friend console.log will always be there when we need it, adding breakpoints gives us more debugging functionality and access. I’ve used debugger statements in my code to trigger a breakpoint using Chrome + Chrome Dev Tools in the past, but a fellow teammate showed me an easy way to add them directly in the browser.
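For reference, the in-code approach is just a debugger statement dropped wherever you want execution to pause (openReportModal is a made-up example name):

function openReportModal(reportId: string): void {
  debugger; // Chrome pauses here whenever Dev Tools is open
  // ...existing modal logic that uses reportId...
}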
To find the source file you want to place a breakpoint in, open Chrome Dev Tools and go to the Sources tab.
From the Sources tab, use cmd + p on a Mac or ctrl + p on a PC to open the file name search bar, then search for your file.
Once you’re in the file you want to place a breakpoint in, you can search within the file using cmd + f or ctrl + f. Place your breakpoint in the left column, which shows the line numbers.
Next, interact with the app in your browser to trigger the breakpoint!
Once you hit the breakpoint, you’ll have access to any variables in its scope, and you can step through your code.
You can use the toolbar on the right to walk through code, play through your breakpoint, or disable breakpoints.
Wishing you the best on your testing endeavors.
Here at LogicGate we are constantly on the lookout for new technology to add to our toolbelts. One of the latest additions to our tech stack has been getting a lot of attention in the JVM community after being named an officially supported language for Android development by Google. It’s Kotlin!
LogicGate is primarily a Spring Boot application written in Java 8. While the MVP of the application was being developed, the emphasis was on shipping features quickly, and unfortunately maintaining a sensible degree of test coverage became an afterthought. A horror, we know. However, the sparse test suite made the task a greenfield opportunity, and we were free to experiment a bit.
In comes Kotlin. We wanted to explore adding a new JVM language to our stack, quickly produce a high volume of base tests, and avoid some of Java’s verbosity. This was the perfect opportunity to try something new with relatively low risk.
Kotlin is a modern language with a strong type system designed to minimize, or completely eliminate, null references. Kotlin also offers a more fluent functional style than Java 8, where you must first convert a collection to a stream to perform functional operations on it.
Java:
String joined = things.stream()
    .map(Object::toString)
    .collect(Collectors.joining(", "));
Kotlin:
val joined = things.joinToString(", ")
We can see from this simple example that Kotlin lets developers write clean, concise, and readable functional-style code with far less verbosity, and with all the beauty of being on the JVM.
This, coupled with the Java interoperability, makes Kotlin a force to be reckoned with as a programming language of choice.
Tests are an amazing way to get developers familiar with Kotlin. They provide a safe place to experiment and learn without the fear of accidentally shipping bugs to production. During the early implementation of our Kotlin test suite, we were able to iterate based on new ideas and inter-developer debates on proper Kotlin idioms. Since production code wasn’t at stake, such refactors were low-stress.
Another huge pro of Kotlin for our dev team is its amazing interoperability with Java, and, as IntelliJ users, we find the IDE support for Kotlin incredible. We are able to use any Java class within our Kotlin code with no problem. This was a huge benefit for us and a big reason why we chose to use Kotlin for our test suite.
We went from 0 tests to 300+, both unit and integration. All written in Kotlin! It has been a great experience and really proven to us that Kotlin can provide value on the JVM.
Now that all developers on our backend team have gotten their hands dirty with Kotlin, we are ready to write some production code! We plan to explore additional Kotlin integration in the application through incremental conversion of utility classes. As our team grows and we scale our core product, we will definitely look to Kotlin as a strong candidate for new microservices and internal projects.
Our customers use LogicGate to build complex process applications that link organizational hierarchies, assets, and compliance requirements across the enterprise. The dynamic nature of the platform (giving users the ability to customize objects and their attributes, workflow, etc.) can be supported by a relational database, to a point, using an entity-attribute-value model. However, for complex processes with recursively linked entities, this relational model restricts insight across deeply linked assets.
How do we access these recursively linked entities? Answer: Neo4j.
Neo4j uses nodes and relationships instead of tables and join columns. Nodes store a small amount of data, while much of the information lives in the relationships between them. This allows large-scale traversals of recursively linked entities to be done with ease.
After scouring the Internet for guidance on using Neo4j with another datasource, I struggled with a large volume of outdated resources. With lots of help from the Neo4j Slack channel, I was able to get a MySQL datasource and a Neo4j datasource running together in the same application. In this post I will explain how to configure all of it. Enjoy!
Spring Data Neo4j 4.1.6 is the last release before 4.2.0, which was officially released on Jan. 25th, 2017. One might ask, “Why not just use 4.2.0?” Well, 4.2.0 requires Spring Boot 1.5.0, which does not have a release version just yet. So let’s focus on the latest Spring Data Neo4j release version and Spring Boot 1.4.x.
Firstly, install Neo4j. Follow the instructions found on this page. If on a Mac, simply run brew install neo4j. When Neo4j is done installing, run neo4j start in a terminal to start up the database. That is all that is needed to install Neo4j.
Let’s dive into the Spring Boot portion. Open the build.gradle file and add the following dependencies:
compile "org.springframework.data:spring-data-neo4j-rest:3.4.6.RELEASE" compile "org.springframework.data:spring-data-neo4j:4.1.6.RELEASE" compile "org.neo4j:neo4j-ogm-core:2.0.6" compile "org.neo4j:neo4j-ogm-http-driver:2.0.6"
For this use case, the communication method to the Neo4j database has to be a RESTful call. To achieve this, the HTTP driver can be used. There are two other driver options, Bolt and Embedded, but this post will focus on using the HTTP driver.
Refresh the Gradle dependencies by running ./gradlew clean build in the root directory of the Spring Boot project. After this, we can start configuring the application.
We will need to edit existing annotations and add new ones within the Java file that contains the application configuration.
@ComponentScan(values = {"com.example"})

This tells Spring Boot to scan all project packages. com.example holds all the classes that pertain to both relational and graph databases, including @Controller, @Service, @Entity, and @Repository classes.
@EnableAutoConfiguration(exclude = {Neo4jDataAutoConfiguration.class, DataSourceAutoConfiguration.class})

This tells Spring Boot that we will set up our datasources explicitly, which is why Neo4jDataAutoConfiguration.class and DataSourceAutoConfiguration.class are excluded.
Currently the application class should look like the following:
package com.example;

import ...

@Configuration
@ComponentScan(values = {"com.example"})
@EnableAutoConfiguration(exclude = {Neo4jDataAutoConfiguration.class, DataSourceAutoConfiguration.class})
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
The next step will be to create a configuration file that configures both the MySQL and Neo4j databases. The annotations for this class file are the following:
@Configuration
@EnableNeo4jRepositories(basePackages = "com.example.graph")
@EnableJpaRepositories(basePackages = "com.example.relational")
@EnableTransactionManagement
- The @Configuration annotation tells Spring, "This is a configuration file, please load it!" and will generate bean definitions at runtime.
- @EnableNeo4jRepositories(basePackages = "com.example.graph") tells Spring Boot to enable all repositories under the package com.example.graph as Neo4j graph repositories.
- @EnableJpaRepositories(basePackages = "com.example.relational") tells Spring Boot to enable all repositories under the package com.example.relational as relational repositories.
- @EnableTransactionManagement allows us to use annotation-driven transaction management.

Now that the annotations are set up, let’s begin building out our configuration class.
public class DatasourceConfig extends Neo4jConfiguration

Our class needs to extend Neo4jConfiguration so that the Neo4j settings can be configured explicitly.
Next, create a configuration bean that will configure the Neo4j database.
@Bean
public org.neo4j.ogm.config.Configuration getConfiguration() {
    org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
        .setURI("http://YOUR_USERNAME:YOUR_PASSWORD@localhost:7474");
    return config;
}
This method wires up the Neo4j database with Spring Boot, setting the location of the database along with a username and password, and stating which driver we are using — in this case, the HttpDriver.
The next bean passes those configuration settings to the SessionFactory, which produces the Neo4j sessions used to interact with the database.
@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), "com.example.graph");
}
Another Neo4j bean that needs to be configured is the getSession bean. This allows Neo4j to integrate with the Spring Boot application.
@Bean
public Session getSession() throws Exception {
    return super.getSession();
}
Now that Neo4j is almost taken care of, let’s set up the relational datasource. In this case, MySQL is used. To achieve this, we need to create a datasource bean as well as an entity manager bean.
@Primary
@Bean(name = "dataSource")
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource dataSource() {
    return DataSourceBuilder
        .create()
        .driverClassName("com.mysql.jdbc.Driver")
        .build();
}

@Primary
@Bean
@Autowired
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
    LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();
    entityManagerFactory.setDataSource(dataSource);
    entityManagerFactory.setPackagesToScan("com.example.core");
    entityManagerFactory.setJpaDialect(new HibernateJpaDialect());

    Map<String, String> jpaProperties = new HashMap<>();
    jpaProperties.put("hibernate.connection.charSet", "UTF-8");
    jpaProperties.put("spring.jpa.hibernate.ddl-auto", "none");
    jpaProperties.put("spring.jpa.hibernate.naming-strategy", "org.springframework.boot.orm.jpa.SpringNamingStrategy");
    jpaProperties.put("hibernate.bytecode.provider", "javassist");
    jpaProperties.put("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");
    jpaProperties.put("hibernate.hbm2ddl.auto", "none");
    jpaProperties.put("hibernate.order_inserts", "true");
    jpaProperties.put("hibernate.jdbc.batch_size", "50");
    entityManagerFactory.setJpaPropertyMap(jpaProperties);

    entityManagerFactory.setPersistenceProvider(new HibernatePersistenceProvider());
    return entityManagerFactory;
}
These beans are declared primary because the MySQL database should take precedence over the Neo4j database.
The JPA properties can be tweaked to your liking as well!
The last things that need to be set up are the transaction managers. These manage the transactions for the relational database, the Neo4j database, and the overall application.
@Autowired
@Bean(name = "neo4jTransactionManager")
public Neo4jTransactionManager neo4jTransactionManager(Session sessionFactory) {
    return new Neo4jTransactionManager(sessionFactory);
}

@Autowired
@Primary
@Bean(name = "mysqlTransactionManager")
public JpaTransactionManager mysqlTransactionManager(LocalContainerEntityManagerFactoryBean entityManagerFactory) throws Exception {
    return new JpaTransactionManager(entityManagerFactory.getObject());
}

@Autowired
@Bean(name = "transactionManager")
public PlatformTransactionManager transactionManager(Neo4jTransactionManager neo4jTransactionManager,
                                                     JpaTransactionManager mysqlTransactionManager) {
    return new ChainedTransactionManager(
        mysqlTransactionManager,
        neo4jTransactionManager
    );
}
The ChainedTransactionManager allows for multiple transaction managers. This means that any transaction that occurs will be delegated to each manager. If the first manager fails, the second manager will then be invoked.
I have created a repository with a demo application that can be found on GitHub.
That’s it! The application now has access to both MySQL and Neo4j! Like / comment. All constructive criticism welcomed!
This is my first blog post ever! Wahoo!