OpenAPI Specification
The OpenAPI Specification (OAS) is a widely adopted standard for describing and documenting APIs. It provides a language-agnostic way to define every aspect of an API, including endpoints, operation parameters, authentication methods, and data models. It facilitates clear communication between API providers and the developers who consume their APIs.
This detailed specification enables tooling support for the automated generation of documentation, client libraries, and server stubs, significantly reducing the potential for human error and enhancing interoperability.
From Provider
Let's continue to use the simple demo API as an example. In the Quickstart, we used it to simulate the Inference API you want to submit. You are expected to submit an OpenAPI Spec of a single endpoint with a single method to Xinfer.AI and explain how Xinfer.AI calls your API. The spec is submitted as a YAML file, demo.yaml in this walkthrough.
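To make the expected shape concrete, here is a minimal sketch of such a spec written to demo.yaml. The server URL, endpoint path, parameter names, and schemas below are illustrative assumptions, not the actual demo spec.
# write a minimal, illustrative OpenAPI spec (single endpoint, single method) to demo.yaml
cat > ./demo.yaml <<'EOF'
openapi: 3.0.3
info:
  title: Demo Inference API
  version: 1.0.0
servers:
  - url: https://demo.example.com   # placeholder base URL for the hosted inference API
paths:
  /v1/infer:                        # the single endpoint
    post:                           # the single method
      summary: Run inference on a text prompt
      operationId: runInference
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [prompt]
              properties:
                prompt:
                  type: string
      responses:
        "200":
          description: Inference result
          content:
            application/json:
              schema:
                type: object
                properties:
                  output:
                    type: string
EOF
With demo.yaml in place, submit it to Xinfer.AI with a PUT request: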
XINFERAI_KEY=...
# use the id from the previous response
id=1234
curl -X PUT "https://api.xinfer.ai/utils/src-openapi-specs/v1/$id" \
-H "Accept: application/json" \
-H "Authorization: Bearer $XINFERAI_KEY" \
-H "Content-Type: text/plain" \
--data-binary @"./demo.yaml"
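# Response: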
{"data":{"id":1234}}
OAS Editor
Our OpenAPI Spec Editor is an open-source tool for creating and editing OpenAPI specs. It features a user-friendly interface for writing and visualizing YAML API definitions with real-time validation, syntax highlighting, and error detection. Integrated API testing lets users interact with APIs directly within the editor, making it essential for efficiently designing, testing, and documenting APIs.
You can access it through the list view of your APIs, or through the API submit link when creating a new API.
To Developers
Xinfer.AI provides a public-facing OpenAPI Spec for developers with every inference API. Here is how to retrieve the one that comes with the hosted API from the submission:
XINFERAI_KEY=...
# use the id from the previous response
id=1234
curl -X GET "https://api.xinfer.ai/utils/openapi-specs/v1/$id" \
-H "Accept: Content-Type: text/plain" \
-H "Authorization: Bearer $XINFERAI_KEY"
Here is the link to a generated document based on the API Specification.
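Because the spec is machine-readable, developers can also feed it into standard OpenAPI tooling, for example to generate a client library. The following is a sketch assuming openapi-generator-cli is installed; the generator choice and output directory are arbitrary.
# save the public spec to a local file (reuses $id and $XINFERAI_KEY from above)
curl -X GET "https://api.xinfer.ai/utils/openapi-specs/v1/$id" \
-H "Accept: text/plain" \
-H "Authorization: Bearer $XINFERAI_KEY" \
-o spec.yaml
# generate a TypeScript client from the retrieved spec
openapi-generator-cli generate -i spec.yaml -g typescript-fetch -o ./client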
API Playground
You can experience the power of our inference API directly through our API Playground. This interactive OpenAPI client uses your login credentials to let you set inputs and make API calls in real time, providing a hands-on way to explore and understand the API's capabilities.
The API Playground renders interactive documentation for the endpoints defined in an OpenAPI specification and supports try-it-out functionality, enabling you to execute API calls directly from the UI.
You can access it through the list view of your APIs, or through the API submit link when creating a new API.
What's next?
Once clear communication protocols are established between AI solution providers, Xinfer.AI, and the developers who want to use the solutions, the question "How do users know how to use my inference API?" is answered.
The answer to "How do users find my solution?" is Xinfer.AI's advanced search.