Edge Impulse inference server

Model information

Inference server for "sss2022 / ftdDemo" (v2).

Get model info

curl -v -X GET http://localhost:80/api/info

How to run inference

curl -v -X POST -H "Content-Type: application/json" -d '{"features": [5, 10, 15, 20]}' http://localhost:80/api/features

Note: this model expects exactly 600 features, so replace the short example payload above with the full feature array.
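As a sketch of the same request in Python (assuming the server is running on localhost:80; the zero-filled feature array is a placeholder, not real sensor data):

```python
import json

# Build the request body for POST /api/features.
# This model expects exactly 600 features; zeros are placeholders.
features = [0.0] * 600
payload = json.dumps({"features": features})

# To actually send it (requires the running server):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:80/api/features",
#       data=payload.encode("utf-8"),
#       headers={"Content-Type": "application/json"},
#   )
#   print(urllib.request.urlopen(req).read().decode("utf-8"))
print(len(json.loads(payload)["features"]))
```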

Try out inferencing

Enter 600 comma-separated features, for example copied from the "Live classification" page.
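A small helper (hypothetical, not part of the server) can turn such a comma-separated feature string into the JSON body the endpoint expects:

```python
import json

def features_to_body(raw: str) -> str:
    # Parse a comma-separated feature string (as copied from
    # "Live classification") into the JSON request body.
    features = [float(x) for x in raw.split(",")]
    return json.dumps({"features": features})

# Short example input; this model actually requires 600 features.
body = features_to_body("5, 10, 15, 20")
print(body)
```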