Plask JS API Client
@plask-ai/client is a powerful and easy-to-use JavaScript client for interacting with the Plask AI API. It provides seamless integration with Supabase for authentication and a simple interface for sending and receiving messages through WebSockets.
Features
- Easy Supabase integration for authentication.
- WebSocket communication with the Plask AI API.
- Simplified message sending with structured data.
Installing
Install @plask-ai/client
You can install @plask-ai/client from the terminal. Using npm:
npm install @plask-ai/client
Or, using yarn:
yarn add @plask-ai/client
Initializing
Create a new client for use in the browser.
You can create a new client and authenticate it with your Plask account using the signIn() method.
The Plask client is your entry point to the rest of the Plask functionality and is the easiest way to interact with everything we offer within the Plask ecosystem.
import { Client } from "@plask-ai/client";
const client = new Client();
await client.signIn("example@example.com", "password_goes_here");
Parameters
email (REQUIRED)
The email address you used to sign up for the Plask website.
password (REQUIRED)
The password for your Plask account.
Request Format
To initiate a pose estimation task, the client needs to send a request in the following format:
const task_message = {
task: "pose_estimation_v1",
jobId: "test",
fileName: "shuffle.mp4",
input: {
fileUrl: "some_s3_url/video.mp4",
startTime: 0,
endTime: 3,
multiPerson: false
},
destination: {
type: "client",
}
};
Fields
task - The name of the machine learning task.
- Available tasks: pose_estimation_v1, pose_estimation_v2
- Pose estimation v1 extracts human motion from a stationary camera and produces more precise poses.
- Pose estimation v2 extracts both human motion and camera motion from a moving camera. It extracts natural motion by using methods such as foot locking.
jobId - A string that identifies the request.
input - Parameters for extracting motion.
- fileUrl - A downloadable video URL. Supported formats: mp4, mov, webm, avi.
- startFrame, endFrame - Specify the frame range for motion extraction in the video. startFrame must be 0 or higher; endFrame must not exceed the total number of frames in the video.
- startTime, endTime - Alternatively, specify the time range for motion extraction in seconds. startTime must be 0 or higher; endTime must not exceed the video's total length in seconds.
- multiPerson - Extract multiple people from the video (up to 5).
destination - The destination for the extracted motion.
type - Available types: client
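The request shape above can be wrapped in a small helper. The sketch below is not part of @plask-ai/client; the function name, option defaults, and range checks are assumptions based on the field descriptions:

```javascript
// Hypothetical helper (not part of @plask-ai/client): builds a
// pose-estimation task message in the documented shape and applies
// the documented time-range constraints.
function buildPoseEstimationRequest({
  jobId,
  fileName,
  fileUrl,
  startTime = 0,
  endTime,
  multiPerson = false,
  version = "v1", // "v1" (stationary camera) or "v2" (moving camera)
}) {
  if (startTime < 0) throw new Error("startTime must be 0 or higher");
  if (endTime !== undefined && endTime <= startTime) {
    throw new Error("endTime must be greater than startTime");
  }
  return {
    task: `pose_estimation_${version}`,
    jobId,
    fileName,
    input: { fileUrl, startTime, endTime, multiPerson },
    destination: { type: "client" },
  };
}
```

The returned object can then be passed to client.sendMessage().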
Response Format
Waiting Time Notification
const message = `Your request is in queue. Estimated waiting time: ${waitingTime} seconds.`;
- The estimated waiting time is sent, based on the queue status, after the task message is sent.
Ping
const message = "ping";
- A "ping" message is sent every 30 seconds to maintain the connection.
Output
const message = {
"fileName": "shuffle.mp4",
"output": ..., // motion data
"jobId": "test",
"status": "Done",
};
- Upon task completion, output and status are sent.
fileName - Name of the video file.
jobId - A string that identifies the request.
output - On successful task completion, contains:
- duration - Duration of the extracted motion in seconds.
- fps - Frames per second of the extracted motion.
- data - The extracted motion data (see Motion Data Structure below).
- On failure, an error message is sent instead.
status - Status types: "Done", "Error".
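Because every message arrives over the WebSocket as a string, a callback typically needs to distinguish keep-alive pings, queue notifications, and results. A minimal classifier sketch (classifyMessage is a hypothetical name, not a client API, and the substring checks assume the message formats shown above):

```javascript
// Hypothetical routing helper: classifies raw WebSocket messages
// from the Plask API by their content.
function classifyMessage(message) {
  if (message === "ping") return "ping"; // keep-alive, sent every 30 seconds
  if (message.includes("in queue")) return "queue"; // waiting-time notification
  if (message.includes("Done") || message.includes("Error")) return "result";
  return "unknown";
}
```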
Motion Data Structure
The structure of single motion data is as follows:
[{
dataType: "vector3",
name: "hips",
keyframes: [
{
frame: 0,
value: [0, 0, 0] // x, y, z
},
],
property: "position",
type: "Animation",
}, {
dataType: "quaternion",
name: "hips",
keyframes: [
{
frame: 0,
value: [0, 0, 0, 1] // x, y, z, w
},
],
property: "rotationQuaternion",
type: "Animation",
}, {
dataType: "quaternion",
name: "leftUpLeg",
keyframes: [
{
frame: 0,
value: [0, 0, 0, 1] // x, y, z, w
},
],
property: "rotationQuaternion",
type: "Animation",
},
...]
The structure of multi motion data is as follows:
{"0": [{
dataType: "vector3",
name: "hips",
keyframes: [
{
frame: 0,
value: [0, 0, 0] // x, y, z
},
],
property: "position",
type: "Animation",
}, {
dataType: "quaternion",
name: "hips",
keyframes: [
{
frame: 0,
value: [0, 0, 0, 1] // x, y, z, w
},
],
property: "rotationQuaternion",
type: "Animation",
}, {
dataType: "quaternion",
name: "leftUpLeg",
keyframes: [
{
frame: 0,
value: [0, 0, 0, 1] // x, y, z, w
},
],
property: "rotationQuaternion",
type: "Animation",
},
...],
"1": [...],
...,
}
- List of boneNames:
[ "hips", "leftUpLeg", "leftLeg", "leftFoot", "leftToeBase", "rightUpLeg", "rightLeg", "rightFoot", "rightToeBase", "spine", "spine1", "spine2", "neck", "head", "leftShoulder", "leftArm", "leftForeArm", "leftHand", "leftHandIndex1", "rightShoulder", "rightArm", "rightForeArm", "rightHand", "rightHandIndex1"]
- List of properties: "position", "rotationQuaternion"
- The unit for position is centimeters.
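To pull a single track out of motion data, a couple of small helpers can be handy. These are sketches, not client APIs; they assume the track shape documented above and the fps value reported in the output:

```javascript
// Hypothetical helper: finds the animation track for a given bone and
// property ("position" or "rotationQuaternion") in single motion data.
function findTrack(motionData, boneName, property) {
  return motionData.find((t) => t.name === boneName && t.property === property) ?? null;
}

// Converts a keyframe's frame index to seconds using the fps from the
// output (assumes frame numbering starts at 0).
function frameToSeconds(frame, fps) {
  return frame / fps;
}
```

For multi-person data, apply findTrack to one person's array, e.g. findTrack(multiData["0"], "hips", "position").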
Using the Response
The client has an onMessageReceived method which takes a callback function. Register the work you want to run on the result, like below:
client.onMessageReceived((message: string) =>
some_function(message)
)
uploadAnimation
This feature is only supported for deliverables in v1.
In the onMessageReceived method, upload the resulting motion to the tool as an Animation.
Access the Plask tool with the account you used in the signIn method to view the Animation.
This is done by parsing the message into JSON.
client.onMessageReceived((message) => {
if (message.includes("Done")){
const animation = JSON.parse(message);
client.uploadAnimation(animation);
}
});
applyFilter
This feature is only supported for deliverables in v1.
In the onMessageReceived method, provide a filter to smooth the motion of the result.
The filter corrects for noise in the motion, making it clearer and smoother.
This is done by parsing the message into JSON.
client.onMessageReceived((message) => {
if (message.includes("Done")){
const beforeFilter = JSON.parse(message);
const afterFilter = client.applyFilter(beforeFilter);
// client.uploadAnimation(afterFilter);
}
});
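The internals of applyFilter are not documented here. To illustrate the kind of smoothing such a filter performs, here is a simple centered moving average over keyframe values. This is NOT the client's actual filter, and componentwise averaging is only meaningful for position tracks (quaternion tracks would need normalization or slerp):

```javascript
// Illustrative only: a centered moving average over keyframe values,
// showing the kind of noise smoothing a motion filter performs.
// NOT the implementation behind client.applyFilter().
function movingAverage(keyframes, windowSize = 3) {
  const half = Math.floor(windowSize / 2);
  return keyframes.map((kf, i) => {
    // Clamp the window to the ends of the track.
    const start = Math.max(0, i - half);
    const end = Math.min(keyframes.length - 1, i + half);
    const dims = kf.value.length;
    const sum = new Array(dims).fill(0);
    for (let j = start; j <= end; j++) {
      for (let d = 0; d < dims; d++) sum[d] += keyframes[j].value[d];
    }
    const n = end - start + 1;
    return { frame: kf.frame, value: sum.map((v) => v / n) };
  });
}
```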
Example with React
```tsx
"use client";
import { Client } from "@plask-ai/client";
import { useEffect, useState } from "react";
export default function Home() {
const [client, setClient] = useState<Client | null>(null);
useEffect(() => {
let activeClient: Client | null = null;
async function initializeClient() {
try {
const client = new Client();
await client.signIn("example@some.com", "password1234"); // Use environment variables or another secure method to handle credentials
client.onMessageReceived((message) => {
if (message.includes("Done")){
const beforeFilter = JSON.parse(message);
const afterFilter = client.applyFilter(beforeFilter);
client.uploadAnimation(beforeFilter);
client.uploadAnimation(afterFilter);
} else {
console.log(message);
}
});
activeClient = client;
setClient(client);
} catch (error) {
console.error("Error initializing client:", error);
}
}
initializeClient();
return () => {
// The state value is still null in this effect's closure, so track
// the client locally to close the connection on unmount.
activeClient?.closeConnection();
};
}, []);
const onClick = async () => {
if (client) {
const request = {
task: "pose_estimation_v1",
jobId: "test",
input: {
fileUrl:
"some_video_url/shuffle.mp4",
startTime: 0,
endTime: 3,
},
destination: { type: "client" },
};
client.sendMessage(request);
}
};
return (
<main className="flex min-h-screen flex-col items-center justify-between p-24">
<div className="z-10 max-w-5xl w-full items-center justify-between font-mono text-sm lg:flex">
Hello
</div>
<button
onClick={onClick}
className="z-10 max-w-5xl w-full items-center justify-between font-mono text-sm lg:flex bg-slate-700 text-white rounded-lg p-4"
>
Click Me
</button>
</main>
);
}