API
Introduction
Welcome to the Guru API. This API allows you to upload your workout and exercise videos and run analysis on them. See below to find out how to authenticate your calls and start working with the API.
If you would like to integrate and need assistance, please contact us.
Authentication
Authentication with the Guru API occurs using OAuth tokens. You must include your authentication token as a header on each request you make:
Authorization: <token>
// Exchange your client credentials (obtained from the Guru Console) for an access token.
var request = require("request");

var options = {
  method: 'POST',
  url: 'https://customer-console-prod.auth.us-west-2.amazoncognito.com/oauth2/token',
  headers: { 'content-type': 'application/x-www-form-urlencoded' },
  body: 'grant_type=client_credentials&client_id=' + client_id + '&client_secret=' + client_secret + '&scope=https://api.getguru.ai/default'
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  // The response body contains the access_token and its expires_in lifetime.
  console.log(body);
});
Your service will obtain its authentication token using an OAuth Client-Credential Flow. If you are a newly-integrating service then you will need to create an account with Guru via the Console to access your credentials. Please see the Getting Started with Guru guide for more details on account creation.
Once you have your access credentials, the authentication flow will be:
- Exchange your client ID and secret with https://customer-console-prod.auth.us-west-2.amazoncognito.com for an access token. The value of the expires_in field in the response is the number of seconds until this token expires.
- Store the token along with its expiration date in persistent storage so that it can be re-used on each call. It is important not to request new tokens on each call, as your application will be rate limited.
- Before making a call to the Guru API, check whether the token has expired and, if so, refresh it.
- Make the call to the Guru API using the access token.
See the example on this page for working code to perform the credential exchange. See here for more details on implementing the Client-Credential flow.
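For reference, a minimal token cache along these lines might look like the following sketch. It uses axios and an in-memory cache; the storage layer, error handling, and the getAccessToken helper name are illustrative assumptions rather than part of the API.

const axios = require("axios");

// In-memory cache; use persistent storage in production so tokens survive
// restarts and are shared between workers.
let cachedToken = null;

async function getAccessToken(clientId, clientSecret) {
  const now = Date.now();
  if (cachedToken && now < cachedToken.expiresAt) {
    return cachedToken.accessToken;
  }

  const response = await axios.post(
    "https://customer-console-prod.auth.us-west-2.amazoncognito.com/oauth2/token",
    new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_secret: clientSecret,
      scope: "https://api.getguru.ai/default",
    }).toString(),
    { headers: { "content-type": "application/x-www-form-urlencoded" } }
  );

  cachedToken = {
    accessToken: response.data.access_token,
    // Refresh a minute early so a nearly-expired token is never used.
    expiresAt: now + (response.data.expires_in - 60) * 1000,
  };
  return cachedToken.accessToken;
}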
Videos
Uploading a video for analysis is a three-step process:
- Call the Create API to specify the video's metadata. This will tell Guru some basic information about the video such as its size, and also include optional additional information such as the activity being performed in the video. This information helps deliver more accurate analysis results. The API will return a URL that specifies where the video should be uploaded to.
- Upload the video content to the URL returned in step 1. The video will be encoded as multipart/form-data in the request.
- Poll the Analysis API until the video is ready. It will typically take 30 seconds for analysis to complete, though longer wait times may be experienced for larger videos.
See below for details on each individual API call.
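The Create and Upload calls are shown in full below. For step 3, a simple polling loop might look like this sketch (the waitForAnalysis helper name and the 5-second poll interval are illustrative choices, not part of the API):

const axios = require("axios");

// Poll the Analysis API until processing finishes (or fails).
async function waitForAnalysis(videoId, token, pollIntervalMs = 5000) {
  while (true) {
    const response = await axios({
      url: "https://api.getguru.ai/videos/" + videoId + "/analysis",
      headers: { Authorization: token },
    });
    if (response.data.status !== "Pending") {
      return response.data; // status will be "Complete" or "Failed"
    }
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}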
Create Video
// 'video' is the video content to upload (for example, a File or Blob in the browser).
axios({
  method: 'post',
  url: 'https://api.getguru.ai/videos',
  headers: {
    Authorization: token
  },
  data: {
    filename: 'workout.mp4',
    size: <<video-size-in-bytes>>,
    domain: 'weightlifting',
    activity: 'squat',
    repCount: 12,
    source: 'my-service'
  }
}).then(function (response) {
  // Build the multipart form from the signing fields returned by the Create call...
  const formData = new FormData();
  Object.keys(response.data.fields).forEach((key) => {
    formData.append(key, response.data.fields[key]);
  });
  formData.append("file", video);

  // ...then upload the video content to the returned URL.
  axios.post(
    response.data.url,
    formData,
    {
      headers: { "Content-Type": "multipart/form-data", "Content-Length": <<video-size-in-bytes>> }
    }
  ).catch(function (error) {
    //...
  });
}).catch(function (error) {
  //...
});
// Node example using the 'form-data' package.
// Assumes a variable 'file' of type Express.Multer.File.
const axios = require("axios");
const FormData = require("form-data");

axios({
  method: 'post',
  url: 'https://api.getguru.ai/videos',
  headers: {
    Authorization: token
  },
  data: {
    filename: 'workout.mp4',
    size: 1234,
    domain: 'weightlifting',
    activity: 'squat',
    repCount: 12,
    source: 'my-service'
  }
}).then(function (response) {
  // Build the multipart form from the signing fields returned by the Create call.
  const formData = new FormData();
  Object.keys(response.data.fields).forEach((key) => {
    formData.append(key, response.data.fields[key]);
  });
  formData.append("file", file.buffer, file.originalname);

  // Compute the form's headers (including content-length) before uploading.
  let headers = formData.getHeaders();
  formData.getLength(function (err, length) {
    headers["content-length"] = length;
    axios.post(
      response.data.url,
      formData,
      {
        headers: headers
      }
    ).then(function (response) {
      //...
    }).catch(function (error) {
      //...
    });
  });
}).catch(function (error) {
  //...
});
import os
import requests


def create(video_path, access_token, domain, activity, rep_count=3):
    return requests.post(
        "https://api.getguru.ai/videos",
        json={
            "filename": os.path.basename(video_path),
            "size": os.path.getsize(video_path),
            "domain": domain,
            "activity": activity,
            "repCount": rep_count,
        },
        headers={
            "Content-Type": "application/json",
            "Authorization": access_token
        }
    )


def upload(video_path, create_response):
    json = create_response.json()
    url = json["url"]
    fields = json["fields"]
    with open(video_path, "rb") as file:
        return requests.post(
            url,
            data=fields,
            files={"file": file},
        )


# access_token = ...
video_path = "path/to/video.mp4"
create_response = create(video_path, access_token, "weightlifting", "squat", 1)
upload_response = upload(video_path, create_response)
POST https://api.getguru.ai/videos
Request
Parameter | Required | Default | Description |
---|---|---|---|
filename | Yes | None | The name of the video file, including extension. |
size | Yes | None | The size of the video file, in bytes. |
source | No | None | The source of the video. If the video was captured by your service then enter your service's name for this field. |
domain | No | None | The category of exercise being performed in the video. See the table below for accepted values. |
activity | No | None | The movement being performed in the video. See the table below for accepted values. |
repCount | No | None | The number of reps expected to be performed in the video. Omit if unknown. |
The currently accepted values for domain and activity are:
Domain | Activity |
---|---|
weightlifting | bench_press, clean_and_jerk, deadlift, snatch, squat |
calisthenics | bodyweight_squat, burpee, chin_up, lunge, push_up, sit_up |
martial_arts | punch, front_kick |
mobility | knee_to_chest |
running | sprint |
yoga | downward_dog |
Response
The response is JSON and contains the following data:
Field | Description |
---|---|
id | Unique identifier for your video. You will use it to make calls to the API to fetch results or perform other operations on the video. |
url | Location to which your video content will be uploaded. This upload must be multipart/form-data encoded. |
fields | The signing fields which must be included in your form when you upload the video. Take a look at the example to see how to combine these fields with your video content. |
Get Video
axios({
  url: 'https://api.getguru.ai/videos/' + videoId,
  headers: {
    Authorization: token
  }
}).then(function (response) {
  //...
});
GET https://api.getguru.ai/videos/{id}?include=j2p,analysis
Request
Parameter | Required | Default | Description |
---|---|---|---|
id | Yes | None | The ID of the video you wish to fetch data for. |
include | No | None | A comma-separated list of additional fields you wish to return. Accepted values are j2p , analysis , and objects . |
Response
The response is JSON and contains the following data:
Field | Description |
---|---|
status | Indicates whether the video has been uploaded to Guru. Possible values are: Pending (if the video has not been uploaded yet), Success, or Failed. |
reason | The reason that the analysis failed. Only present when status is Failed. |
uri | The location from which the raw video can be downloaded. |
overlays | Contains information about the overlays (e.g. wireframes) Guru has built for this video. The object will map the type of overlay to an object that has a status field. If the overlay has been built then it will also contain a uri field that is a link to download the overlayed video. |
fps | The frame rate (in frames per second) of the uploaded video |
analysis | Only present if specified in include . See the Get Analysis endpoint for the structure of this object. Contains a status field to indicate whether processing has completed. |
j2p | Only present if specified in include . See the Get Joint Data endpoint for the structure of this object. Contains a status field to indicate whether processing has completed. |
objects | Only present if specified in include . Contains an array of each object detected in the video. Each object will contain an array of boundingBoxes , showing the location of that object at particular frames in the video. |
The currently supported overlay types are:
Type | Description |
---|---|
skeleton | Contains a wireframe drawing of the joints and major landmarks identified on the person. |
all | Contains all supported overlay elements, including wireframes, rep counting, and analytics about the movement. |
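For example, to download the skeleton overlay once it has been built, you could check the overlays field as in this sketch (the downloadSkeletonOverlay helper is illustrative; it treats the presence of a uri as the signal that the overlay is ready):

const axios = require("axios");
const fs = require("fs");

// Fetch the video and, if the skeleton overlay has been built, stream it to disk.
async function downloadSkeletonOverlay(videoId, token, outputPath) {
  const response = await axios({
    url: "https://api.getguru.ai/videos/" + videoId,
    headers: { Authorization: token },
  });

  const skeleton = response.data.overlays && response.data.overlays.skeleton;
  if (!skeleton || !skeleton.uri) {
    return null; // Overlay not built yet; check its status field and retry later.
  }

  const overlay = await axios({ url: skeleton.uri, responseType: "stream" });
  overlay.data.pipe(fs.createWriteStream(outputPath));
  return outputPath;
}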
Update Video
axios({
  method: 'put',
  url: 'https://api.getguru.ai/videos/' + videoId,
  headers: {
    Authorization: token
  },
  data: {
    repCount: 10,
  }
}).then(function (response) {
  //...
});
PUT https://api.getguru.ai/videos/{id}
Request
The request payload should be in a JSON-encoded body. All of the fields are optional. If a field is omitted, then the existing value will be preserved.
Parameter | Required | Default | Description |
---|---|---|---|
repCount | No | None | The number of reps that were performed in the video. |
domain | No | None | The category of exercise being performed in the video. See the table in Create Video for accepted values. |
activity | No | None | The movement being performed in the video. See the table in Create Video for accepted values. |
Response
The response is JSON and contains the ID of the video.
Get Analysis
axios({
  url: 'https://api.getguru.ai/videos/' + videoId + '/analysis',
  headers: {
    Authorization: token
  }
}).then(function (response) {
  //...
}).catch(function (error) {
  //...
});
A successful response would look something like this:
{
  "status": "Complete",
  "domain": "weightlifting",
  "activity": "squat",
  "reps": [
    {
      "startTimestampMs": 123,
      "midTimestampMs": 456,
      "endTimestampMs": 789,
      "analyses": [
        {
          "analysisType": "HIP_KNEE_ANGLE_DEGREES",
          "analysisScalar": 12.34
        }
      ]
    }
  ]
}
GET https://api.getguru.ai/videos/{id}/analysis
Request
Parameter | Required | Default | Description |
---|---|---|---|
id | Yes | None | The ID of the video you wish to fetch analysis for. |
Response
The response is JSON and contains the following data:
Field | Description |
---|---|
status | Indicates whether analysis was successfully performed. Possible values are: Pending, Complete, or Failed. |
reason | The reason that the analysis failed. Only present when status is Failed. |
domain | The category of exercise being performed in the video. |
activity | The movement being performed in the video. |
reps | An array of objects, each one an individual rep detected in the video. Each rep indicates the timestamp offsets within the video where it can be found (via startTimestampMs , midTimestampMs , and endTimestampMs ). It also specifies an analyses array of objects, which contains the individual insights generated by the analysis. |
Some of the analyses on reps will include an analysisOpinion field. This is the opinion of the Guru platform on the quality of the rep for this particular metric. For example, for the hip/knee angle analysis of a squat, Guru will have a better opinion of a user who sits lower in the squat than of one who sits higher. Possible values for this field are good or bad.
When set, reason will be one of the following values:
Value | Description |
---|---|
LOW_QUALITY_POSE_ESTIMATE | Guru couldn't confidently detect the body's position throughout the video |
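As an illustration, the reps array of a squat analysis can be reduced to per-rep summaries like this sketch (the summarizeSquatReps helper is illustrative and assumes the response shape shown above):

// Summarize hip/knee depth and Guru's opinion for each rep of a squat analysis.
function summarizeSquatReps(analysis) {
  return analysis.reps.map((rep, index) => {
    const hipKnee = rep.analyses.find(
      (a) => a.analysisType === "HIP_KNEE_ANGLE_DEGREES"
    );
    return {
      rep: index + 1,
      durationMs: rep.endTimestampMs - rep.startTimestampMs,
      hipKneeAngleDegrees: hipKnee ? hipKnee.analysisScalar : null,
      // analysisOpinion ("good" or "bad") is only present on some analyses.
      opinion: hipKnee ? hipKnee.analysisOpinion : undefined,
    };
  });
}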
Details - Sprints
For sprints, the analysis field contains some additional fields:
Field | Description |
---|---|
fieldMarkers | Only present if the runner is on an American football field with 5-yard markers or if 10 yards have been marked with start and end cones. See below for a description of the values. |
reps | A list containing one entry for each stride. A "stride" begins when the toe leaves the ground and ends when the same foot contacts the ground. See below for details on the fields in each rep. |
runnerProgress | A list which represents a time-series of the distance the runner has traveled. Each entry in the list contains three fields: distanceFromStart (in meters), frameIndex and timestamp (in milliseconds). Note that distanceFromStart is negative when the runner hasn't yet crossed the starting line (e.g., if the runner is 1 meter behind the starting line 1 second into the video, then distanceFromStart=-1 @ timestamp=1000 ) |
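For example, average running speed after the starting line can be estimated from runnerProgress, as in this sketch (the averageSpeed helper is illustrative):

// Estimate average speed in meters per second from the runnerProgress time series.
function averageSpeed(runnerProgress) {
  // Keep only samples taken after the runner has crossed the starting line.
  const samples = runnerProgress.filter((p) => p.distanceFromStart >= 0);
  if (samples.length < 2) {
    return null;
  }
  const first = samples[0];
  const last = samples[samples.length - 1];
  const meters = last.distanceFromStart - first.distanceFromStart;
  const seconds = (last.timestamp - first.timestamp) / 1000;
  return seconds > 0 ? meters / seconds : null;
}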
Field Markers
The fieldMarkers object has 4 possible keys: startLine, middleLine, finishLine, and cones. The start, middle, and finish lines correspond to 0, 5, and 10 yards on an American football field. The start line is the line that the runner crosses first and the finish line is the line that they cross last.
Field | Description |
---|---|
type | Either YARD_LINE or CONE |
position | Contains fields x1 , y1 , x2 , and y2 . If YARD_LINE , these fields represent the unnormalized coordinates of the line segment. If CONE , these fields represent a bounding-box that circumscribes the cone. |
frame_idx | The frame index of the video corresponding to the detection |
timestamp | The timestamp (in milliseconds) corresponding to the detection |
Reps
Each entry in reps
corresponds to a stride and contains the following fields:
Field | Description |
---|---|
startTimestampMs | The timestamp at which the toe leaves the ground |
midTimestampMs | The timestamp corresponding to the middle of the stride (defined as peak flexion at the hip) |
endTimestampMs | The timestamp at which the foot touches back down onto the ground |
analyses | A list of objects containing two fields: analysisType and analysisScalar. See below for details. |
The possible analysisType values in the analyses list are:
Analysis Type | Description |
---|---|
IS_LEFT_LEG | 1 if the left leg is the leg swinging forward in this stride, 0 if it's the right leg |
STRIDE_AIR_TIME_MS | The time that the runner is in the air during this stride, equal to the duration between toe-off (startTimestampMs of this stride) and the touch-down of the opposite foot (endTimestampMs of the previous stride). This will be null for the first stride in the list since it isn't well defined without a previous stride. |
STRIDE_GROUND_TIME_MS | The time that the runner is on the ground until the next stride begins. Ground time begins at the same instant that the air time ends, at touch-down of the opposite foot. This will also be null for the first stride in the list. |
STRIDE_PEAK_FLEXION_ANGLE | The peak flexion angle of the leg during the stride |
STRIDE_PEAK_FLEXION_ANGLE_TIMESTAMP | The timestamp at which the leg reaches the peak flexion angle |
STRIDE_PEAK_EXTENSION_ANGLE | The peak extension angle of the leg during the stride |
STRIDE_PEAK_EXTENSION_ANGLE_TIMESTAMP | The timestamp at which the leg reaches the peak extension angle |
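These per-stride values can be rolled up into simple summaries. For instance, average ground-contact time per leg could be computed as in this sketch (the averageGroundTimeByLeg helper is illustrative; it skips strides whose ground time is null):

// Compute the average ground-contact time (in ms) for the left and right legs.
function averageGroundTimeByLeg(reps) {
  const totals = { left: { sum: 0, count: 0 }, right: { sum: 0, count: 0 } };
  reps.forEach((rep) => {
    const scalar = (type) => {
      const entry = rep.analyses.find((a) => a.analysisType === type);
      return entry ? entry.analysisScalar : null;
    };
    const groundTime = scalar("STRIDE_GROUND_TIME_MS");
    if (groundTime === null) {
      return; // First stride: not defined without a previous stride.
    }
    const leg = scalar("IS_LEFT_LEG") === 1 ? "left" : "right";
    totals[leg].sum += groundTime;
    totals[leg].count += 1;
  });
  return {
    left: totals.left.count ? totals.left.sum / totals.left.count : null,
    right: totals.right.count ? totals.right.sum / totals.right.count : null,
  };
}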
Get Joint Data
axios({
  url: 'https://api.getguru.ai/videos/' + videoId + '/j2p',
  headers: {
    Authorization: token
  }
}).then(function (response) {
  //...
}).catch(function (error) {
  //...
});
A successful response would look something like this:
{
  "status": "Complete",
  "resolutionHeight": 1280,
  "resolutionWidth": 720,
  "jointToPoints": {
    "leftAnkle": [
      {
        "frame_idx": 0,
        "part": "leftAnkle",
        "position": {
          "x": 0.4,
          "y": 0.7
        },
        "score": 0.8,
        "timestamp": 0
      },
      {
        "frame_idx": 4,
        "part": "leftAnkle",
        "position": {
          "x": 0.5,
          "y": 0.75
        },
        "score": 0.86,
        "timestamp": 0.1
      }
    ],
    "leftElbow": [...],
    "leftEye": [...],
    "leftHand": [...],
    "leftHip": [...],
    "leftKnee": [...],
    "leftShoulder": [...],
    "leftWrist": [...],
    "rightElbow": [...],
    "rightEye": [...],
    "rightHand": [...],
    "rightHip": [...],
    "rightKnee": [...],
    "rightShoulder": [...],
    "rightWrist": [...]
  },
  "analysis": {
    "reps": [
      {
        "startTimestampMs": 123,
        "midTimestampMs": 456,
        "endTimestampMs": 789,
        "analyses": [
          {
            "analysisType": "HIP_KNEE_ANGLE_DEGREES",
            "analysisScalar": 12.34
          }
        ]
      }
    ]
  }
}
GET https://api.getguru.ai/videos/{id}/j2p
Request
Parameter | Required | Default | Description |
---|---|---|---|
id | Yes | None | The ID of the video you wish to fetch joint-to-point (j2p) for. |
Response
The response is JSON and contains the following data:
Field | Description |
---|---|
status | Indicates whether analysis was successfully performed. Possible values are: Pending, Complete, or Failed. |
reason | The reason that the analysis failed. Only present when status is Failed. |
resolutionHeight | The height of the video. |
resolutionWidth | The width of the video. |
analysis | This contains the rep information equivalent to that returned from the Get Analysis endpoint. |
jointToPoints | An object containing one attribute for each joint being tracked. Each joint is an array of JointFrame objects, detailed below. |
The semantics of the reason field are identical to those of the Analysis endpoint.
JointFrame objects define the information for a single joint for a single frame within the video. They have the following structure:
Field | Description |
---|---|
frame_idx | The index of the frame within the video. |
part | The name of the joint. |
position | An object containing an x and y , the 2D location of that joint within the video, relative to the resolution of the video. Each coordinate will be >= 0 and <= 1. |
score | The confidence the model has in its prediction of this joint location within the frame. Value is >= 0 and <= 1. |
timestamp | The timestamp of this frame within the video, measured in seconds. |
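For example, the normalized joint positions can be converted to pixel coordinates and filtered by confidence, as in this sketch (the jointToPixels helper and the 0.5 score threshold are illustrative, and it assumes x is relative to resolutionWidth and y to resolutionHeight):

// Convert one joint's normalized positions to pixel coordinates,
// dropping low-confidence detections.
function jointToPixels(j2pResponse, jointName, minScore = 0.5) {
  const frames = j2pResponse.jointToPoints[jointName] || [];
  return frames
    .filter((frame) => frame.score >= minScore)
    .map((frame) => ({
      frameIdx: frame.frame_idx,
      timestampSeconds: frame.timestamp,
      x: frame.position.x * j2pResponse.resolutionWidth,
      y: frame.position.y * j2pResponse.resolutionHeight,
    }));
}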
Errors
The Guru API uses the following error codes globally across all endpoints:
Error Code | Meaning |
---|---|
400 | Bad Request -- The request did not contain the required fields, or some were invalid. Consult the endpoint's documentation. |
401 | Unauthorized -- The request could not be authenticated. Ensure you are specifying a valid authentication token. |
404 | Not Found -- The resource requested no longer exists. If fetching an analysis then the video may have been deleted. |
405 | Method Not Allowed -- The endpoint you called does not support the HTTP method specified. |
406 | Not Acceptable -- Requested an unsupported format. All endpoints currently serve JSON. |
429 | Too Many Requests -- Your service is making too many calls and has been throttled. Please wait before retrying and lower your call volume. |
500 | Internal Server Error -- An unexpected server-side error. Please reach out to Guru for resolution. |
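In particular, 429 responses should be retried after a delay rather than immediately. A minimal retry helper might look like this sketch (the requestWithRetry helper and its backoff schedule are illustrative):

const axios = require("axios");

// Retry a request with exponential backoff when throttled (HTTP 429).
async function requestWithRetry(config, maxAttempts = 4) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await axios(config);
    } catch (error) {
      const status = error.response && error.response.status;
      if (status !== 429 || attempt === maxAttempts) {
        throw error;
      }
      const delayMs = 1000 * Math.pow(2, attempt - 1); // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}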
Guru.js
Guru.js is the JavaScript framework that allows you to tell the Guru AI platform how to process your video. The same code runs on both the server-side API and on-device, allowing you to write once and run anywhere.
Before you can begin writing Guru.js, you will need to create a Schema.
A Schema is a configuration that holds all the information Guru needs to process a video. You can create a new Schema from the Guru Console. We recommend starting from a template to get up and running faster.
A Schema holds three separate Guru.js functions:
- Process - This function defines what AI operations should be carried out on each frame.
- Render - This function defines what visual rendering should be done on each frame of the video. It can be used to visualize the output of Guru's AI platform. It is optional.
- Analyze - This function runs once on the output of all the frames. It is used to perform final analysis on the video, such as counting the number of repetitions of a movement.
Guru will call your Process function for individual frames within the video. It will then pass the result of the AI operations to your Render function, so that you can modify the display of the output video. Finally, it will pass the collection of Process results to your Analyze function, so that you can perform aggregate analysis of the video.
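Put together, a Schema's Guru.js code is simply these three functions side by side. A minimal skeleton might look like this sketch, which borrows from the examples later on this page:

// Process: run AI operations (here, person detection) on each frame.
async function processFrame(frame) {
  const people = await frame.findObjects("person");
  return { people };
}

// Render (optional): draw onto the output video using the Process result.
function renderFrame(frameCanvas, processResult) {
  processResult.people.forEach((person) => {
    frameCanvas.drawBoundingBox(person, new Color(0, 255, 0));
  });
}

// Analyze: run once over all frame results to produce the video-level output.
async function analyzeVideo(frameResults) {
  return { framesProcessed: frameResults.resultArray().length };
}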
We will now go over each function in more detail.
Process
The Process function allows you to perform AI operations on a frame within the video.
/**
* @param {Frame} frame - The frame to process
* @return {*} - The output of processing for this frame.
*/
async function processFrame(frame) {
return {};
}
class Frame {
/**
* Find objects of a specific type within the video.
*
* @param {(string|Array.<string>)} objectTypes - The type of the object to find.
* Can either be a string, in which case objects of a single type will be found, or an array of strings, in which case multiple object types will be found.
* @param {boolean} keypoints - Flag indicating whether to include keypoints in the results. Defaults to true.
* @return {Array.<FrameObject>} A list of FrameObject instances matching the given criteria.
*/
async findObjects(objectTypes, {keypoints = true} = {});
}
The Process function is responsible for performing AI operations on the frame, such as object detection and pose estimation, and returning the results of those operations. The function accepts a single argument of type Frame. This object provides an easy-to-use interface to the Guru AI Platform. Refer to Appendix A for a definition of the common types used by Frame.
The output of the Process function will then be provided as input to the Render and Analyze functions documented below.
Example implementation that finds all of the people in the frame and outputs their location information
async function processFrame(frame) {
const objects = await frame.findObjects("person");
return {people: objects};
}
Render
The Render function allows you to modify the FrameCanvas with the output of the Process function.
/**
* @param {FrameCanvas} frameCanvas - A canvas for the current frame, on which draw operations can be made.
* @param {*} processResult - The output of the Process function for this frame.
*/
function renderFrame(frameCanvas, processResult) {
}
/**
* The canvas for a frame, onto which draw operations can be made. This is passed as input to renderFrame().
*/
class FrameCanvas {
/**
* Draw a box with the given color around this object's location onto the frame.
*
* @param {FrameObject} object - The object around which the box will be drawn.
* @param {Color} color - The color of the box.
* @param {number} width - The width of the box's border, in pixels. Defaults to 5.
* @return {FrameCanvas} This FrameCanvas, that can be used to chain calls.
*/
drawBoundingBox(object, color, width = 2);
/**
* Draws a circle on the canvas.
*
* @param {Position} position - The position of the center of the circle.
* @param {number} radius - The radius of the circle, in pixels.
* @param {Color} color - The color of the circle.
* @param {boolean} filled - True if the circle should be filled in. Default true.
* @param {number} width - If not filled, then this is the width of the circle boundary in pixels. Default 2.
* @param {number} alpha - Optional, how transparent the circle should be. 0 is invisible, 1 is fully visible. Default is 1.
*/
drawCircle(position, radius, color, {
filled = true,
width = 2,
alpha = 1.0,
} = {});
/**
* Draws a line between two points on the canvas.
*
* @param {Position} from - The position to draw from.
* @param {Position} to - The position to draw to.
* @param {Color} color - The color of the line.
* @param {number} width - Optional, the width of the line in pixels. Default 2.
* @param {number} alpha - Optional, how transparent the line should be. 0 is invisible, 1 is fully visible. Default is 1.
*/
drawLine(from, to, color, {
width = 2,
alpha = 1.0,
} = {});
/**
* Draws a rectangle on the canvas. The rectangle may have a background color, or be transparent.
*
* @param {Position} topLeft - The position of the top-left corner of the rectangle.
* @param {Position} bottomRight - The position of the bottom-right corner of the rectangle.
* @param {Color} borderColor - Optional, the color of the border of the rectangle. Either this or backgroundColor must be present.
* @param {Color} backgroundColor - Optional, the color of the background of the rectangle. If omitted then the background will be transparent. Either this or borderColor must be present.
* @param {number} width - Optional, the width of the border in pixels. Default 2.
* @param {number} alpha - Optional, how transparent the rectangle should be. 0 is invisible, 1 is fully visible. Default is 1.
*/
drawRect(topLeft, bottomRight, {
borderColor = undefined,
backgroundColor = undefined,
width = 2,
alpha = 1.0,
} = {});
/**
* Draw the skeleton for the given object onto the frame. Note that the object must have its keypoints
* inferred in order to draw the skeleton.
*
* @param {FrameObject} object - The object whose skeleton will be drawn onto the canvas.
* @param {Color} lineColor - The color of the lines connecting the keypoints in the skeleton.
* @param {Color} keypointColor - The color of the circles representing the joint keypoints in the skeleton.
* @param {number} lineWidth - The width, in pixels, of the lines connecting the keypoints. Defaults to 5.
* @param {number} keypointRadius - The radius, in pixels, of the circles representing the keypoints. Defaults to 5.
*/
drawSkeleton(object, lineColor, keypointColor, lineWidth = 2, keypointRadius = 5);
/**
* Draws text at a specific location on the canvas.
*
* @param {string} text - The text to draw.
* @param {Position} position - The location to draw at. 0,0 is the top-left corner.
* @param {Color} color - The color of the text.
* @param {number} maxWidth - Optional, the maximum width of the text in pixels, after which it will wrap. Default 1000.
* @param {number} fontSize - Optional, the size of the font. Default 24.
* @param {number} padding - Optional, the amount of padding to apply to the location of the text from its location. Default 0.
* @param {number} alpha - Optional, how transparent the font should be. 0 is invisible, 1 is fully visible. Default is 1.
*/
drawText(text, position, color, {
maxWidth = 1000,
fontSize = 24,
padding = 0,
alpha = 1.0,
} = {});
The Render function receives two arguments: one of type FrameCanvas, defined above, and a second which is the output of the Process function for that frame. Your implementation can use the FrameCanvas object to modify the appearance of the output video, using information from the output of Process. Refer to Appendix A for a definition of the common types used by FrameCanvas.
Example implementation that draws a red box around each object found in the frame
function renderFrame(frameCanvas, processResult) {
const objects = processResult.objects;
objects.forEach(object => {
frameCanvas.drawBoundingBox(object, new Color(255, 0, 0));
});
}
Analyze
The Analyze function allows you to operate on the output of multiple Process functions and return the analysis for the video.
/**
* @param {Array.<FrameResult>} frameResults - An array of the outputs of calls to the Process function.
* @return {*} - The analysis result for this video
*/
async function analyzeVideo(frameResults) {
return {};
}
/**
* @typedef {Object} FrameResult
* @property {number} frameIndex - The index of the frame within the video.
* @property {number} timestampMs - The timestamp of the frame in milliseconds.
* @property {*} returnValue - The value that was returned from the Process function for this frame.
*/
Example implementation that reduces the FrameResults to an array of unique object types found across the video.
async function analyzeVideo(frameResults) {
const frameObjectTypes = frameResults.map((frameResult) => {
return frameResult.returnValue.objects.map((object) => {
return object.objectType;
});
}).flat();
return Array.from(new Set(frameObjectTypes));
}
Example implementation that counts reps.
async function analyzeVideo(frameResults) {
const personId = frameResults.objectIds("person")[0];
const personFrames = frameResults.objectFrames(personId);
const reps = MovementAnalyzer.repsByKeypointDistance(personFrames, Keypoint.rightHip, Keypoint.rightAnkle);
return {
"reps": reps
};
}
The Analyze function receives one argument, an Array of FrameResult objects. This function can perform whatever analysis is required on the per-frame results, enabling complex reasoning across the whole video. It could, for example, count the number of repetitions of a particular movement, or de-duplicate the names of unique objects found across the entire video.
The output of this function will be the analysis result of the video. It will be accessible from the Get Analysis endpoint when using server-side processing, or as the output of the SDK when performing on-device processing.
The Array of FrameResults is augmented by a number of useful methods for performing analysis:
Method | Description |
---|---|
frameResults.objectFrames(objectId) | Given the ID of an object, fetch the FrameObjects that describe its movement throughout the video. |
frameResults.objectIds(objectType) | Given a type of object (e.g. person), fetch the IDs of each instance of that type found in the video. |
frameResults.resultArray() | Return an array of the raw results from each call to your Process code. |
Guru.js also provides a library of domain-specific analysis functions to help perform common operations.
MovementAnalyzer
This Analyzer provides methods related to human movement. It is useful for building fitness or sport-focused apps.
Method | Parameters | Description |
---|---|---|
repsByKeypointDistance | | Given the frames of a person, find repetitions of a movement defined by the movement between two keypoints. For example, you could use the hip and ankle keypoints to count squat reps, as in the rep-counting example above. |
personMostlyFacing | | Given the frames of a person, determine which direction they were mostly facing during the course of the video. Returns a value from the ObjectFacing enum. |
personMostlyStanding | | Given the frames of a person, determine whether they were standing for the majority of the video. Returns a boolean. |
ObjectFacing Enum | Description |
---|---|
Away | Object facing away from the camera. |
Down | Object facing towards the bottom of the frame. |
Left | Object facing the left-side of the frame. |
Right | Object facing the right-side of the frame. |
Toward | Object facing towards the camera. |
Unknown | Object direction unknown. |
Up | Object facing towards the top of the frame. |
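For example, the facing direction can be used to choose which side of the body to track when counting reps, as in this sketch (the exact parameter lists for the MovementAnalyzer methods are not shown above, so the calls below follow the "given the frames of a person" descriptions, and the Keypoint.leftHip / Keypoint.leftAnkle names are assumed by analogy with the right-side keypoints used earlier):

async function analyzeVideo(frameResults) {
  const personId = frameResults.objectIds("person")[0];
  const personFrames = frameResults.objectFrames(personId);

  // Pick the side of the body facing the camera's left or right.
  const facing = MovementAnalyzer.personMostlyFacing(personFrames);
  const hip = facing === ObjectFacing.Left ? Keypoint.leftHip : Keypoint.rightHip;
  const ankle = facing === ObjectFacing.Left ? Keypoint.leftAnkle : Keypoint.rightAnkle;

  return {
    facing: facing,
    standing: MovementAnalyzer.personMostlyStanding(personFrames),
    reps: MovementAnalyzer.repsByKeypointDistance(personFrames, hip, ankle),
  };
}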
Appendix A
/**
* A two-dimensional box, indicating the bounds of something.
*
* @typedef {Object} Box
* @property {Position} top_left - The top-left corner of the box.
* @property {Position} bottom_right - The bottom-right corner of the box.
*/
/**
* A colour, represented as an RGB value. Valid values for each are >= 0 and <= 255.
*
* @typedef {Object} Color
* @property {number} r - The amount of red in the color.
* @property {number} g - The amount of green in the color.
* @property {number} b - The amount of blue in the color.
*/
/**
* A single object present within a particular frame or image.
*
* @typedef {Object} FrameObject
* @property {string} objectType - The type of the object.
* @property {Box} boundary - The bounding box of the object, defining its location within the frame.
* @property {Object.<string, Position>} keypoints - A map of the name of a keypoint, to its location within the frame.
*/
/**
* The two-dimensional coordinates indicating the location of something.
*
* @typedef {Object} Position
* @property {number} x - The x coordinate of the position.
* @property {number} y - The y coordinate of the position.
* @property {number} confidence - The confidence Guru has of the accuracy of this position. 0.0 implies no confidence, 1.0 implies complete confidence.
*/
Documented here are common types that are used across each of the Process, Render, and Analyze functions.