Not using at present
I develop APIs in the Tacker project. I use APIs to query properties/resources.
I used it to integrate the Murano (Application Catalog Service) command-line interface (CLI) with the OpenStack CLI.
Mainly to automate external deployments, e.g. based on Ansible playbooks. But the API is also used by dashboards and sometimes to collect data about customers (for administrative reasons).
I make extensive use of the heat, nova, glance and keystone APIs, writing automation interfaces between higher-level orchestration systems and multiple OpenStack nodes.
To create private cloud.
Work with ceilometer/nova/neutron APIs to consume capacity data
Through Juju, watching nova list to ensure instances are created reasonably. Occasionally directly through the CLI to set up arcane network/storage configurations.
1. CI on our cloud - all user APIs to launch instances/networks. 2. User management: quotas, new users. 3. StackTask - our own REST API that interacts with keystone and other services to add new users, reset passwords, set quotas, etc.
As a developer, I would like to use OCI for development and testing, as well as the heat command line. I also make direct API calls for cases where some functionality can't be done unless you use the API. On the other hand, I use Horizon in other cases (our customers mostly use Horizon).
Easy-to-read documentation; easy-to-use API (not too many parameters, not too many prerequisites, easy-to-understand error codes); error-free APIs; sample code using the API; a community to help with using the API.
The structure is really good.
It helps to integrate the service with any other project easily, and to enhance any project without writing the same code again and again for the same services.
First of all: they exist, and this is amazing because it enables developers and customers to automate and work with OpenStack. Using some modules/scripts which exist, it's possible to roll out an environment in just a few minutes. Another good thing about the APIs is that they enable you to control everything. So if you want, you can develop your own UI or tools using OpenStack as a backend.
The level of detail and the amount of control that the APIs provide is definitely a positive. Also the relative consistency between projects is a plus.
It is very easy to use.
The agility and speed of some APIs is good, thanks to the Python-based development.
None
The REST API object format is intuitive, making it easy to guess what other commands might be - PUT/POST/DELETE. It used to be that there was a separate CLI per project, requiring you to know the details of each; now openstackclient brings those together, requiring less inside knowledge of each service.
Diversity: you can have really rich APIs and do almost everything, which you can't get that much support for in the CLI or UI.
Difficult to understand how it works; too many prerequisites and too many parameters; difficult to test the API; error codes are not easy to understand, with no clear documentation on the returned error codes (reason, ways of rectifying).
Nothing in general. This could be project specific.
APIs are language-specific. We cannot integrate an API written in language A with a project in language B.
It's still hard to track all the API calls for new users. As one request can fire up a dozen others, and they all use IDs for tracking, it's somewhat hard to find out which request belongs to which call. It's possible to make this easier using log tools, but for beginners that's sometimes not easy to achieve.
The biggest negative for me is the number of projects that encode the user's Project ID in the endpoint URL. For automation purposes I really want to be able to provide a token that I created earlier and a static known endpoint derived from either the catalog or querying the higher-level API endpoint. The need to encode project IDs in the endpoint is a major pain point.
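The catalog-driven lookup described above can be sketched as a small helper. This is a minimal illustration using hypothetical sample data shaped like the `catalog` list in a keystone v3 token response; the service types, region, and URLs are made up for the example.

```python
# Hypothetical sample of the "catalog" list from a keystone v3 token response.
SAMPLE_CATALOG = [
    {
        "type": "compute",
        "endpoints": [
            {"interface": "public", "region": "RegionOne",
             "url": "https://cloud.example.com:8774/v2.1"},
        ],
    },
    {
        "type": "image",
        "endpoints": [
            {"interface": "public", "region": "RegionOne",
             "url": "https://cloud.example.com:9292"},
        ],
    },
]

def find_endpoint(catalog, service_type, interface="public"):
    """Return the endpoint URL for a service type, or None if absent."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] == interface:
                return ep["url"]
    return None

print(find_endpoint(SAMPLE_CATALOG, "compute"))
# → https://cloud.example.com:8774/v2.1
```

Deriving endpoints from the catalog like this, instead of hardcoding them, means a changed or relocated service only requires a new token, not a configuration change.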
No consistency between projects, e.g. UUID format (hyphenated or not), date format.
Documentation, slow to release, sometimes hard to understand and incomplete
They tend to be complex and unpredictable.
Accessing the keystone catalog without a token could be useful (but I understand why you can't!) Use case: Our custom password reset in Horizon requires us to get and interact with more than just the hardcoded keystone URL. Updating quota requires interacting with many different service APIs and many REST calls, and knowledge of what each one limits or provides.
When using the API, you have to deal with authentication by yourself, which becomes a big hurdle for new developers. Another one is OpenStack's underlying problem: if something goes wrong, the API call will sometimes return a message that's not relevant at all (for example, a user won't get much of the Neutron exception through a Nova API call).
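The "deal with authentication yourself" step mentioned above amounts to building a keystone v3 auth request. A minimal sketch, with placeholder credentials and project names:

```python
import json

def password_auth_body(username, password, project_name, domain="Default"):
    """Build a keystone v3 password-scoped auth request body."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project_name,
                    "domain": {"name": domain},
                }
            },
        }
    }

# Placeholder credentials for illustration only.
body = password_auth_body("demo", "s3cret", "demo-project")
print(json.dumps(body, indent=2))
# POST this to <keystone>/v3/auth/tokens; note the token comes back in the
# X-Subject-Token response header, not in the response body - a common
# stumbling block for new developers.
```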
Yes
Yes, this seems interesting. Isn't this like calling Heat API which in turn calls Nova API to spawn a VM. If yes, we already do this in Tacker.
Sorry, I do not know about this concept.
Yes, it could be very helpful, as it might be an even easier entry point to using the APIs. I'd like to compare it to Horizon: it offers most of the functionality needed for all the basic stuff, but if you want to go deeper, you sometimes need to use the CLI/API. The same would be at least a good start for an orchestrator API.
Yes - that would be great
No, cloud management platforms will eventually handle this.
Very much
Yes - we have somewhat built this ourselves in StackTask, setting quota on all projects at once. (though it also handles registration, password reset etc)
Yes, mostly I'd like to see what orchestration can be achieved here.
Consistency in documentation Consistency in error codes
Very important. This makes it easy to understand APIs from different projects and saves a lot of time. It also encourages developers to contribute to other projects' APIs which they are not comfortable with.
Yes, it is highly recommended. Code consistency and version consistency matter.
Yes, it is. It's easier to work on the APIs if they use the same function names and the same options whenever possible.
It is REALLY important that APIs are consistent. Method operations should always be consistent, meaning a POST/PUT/DELETE/etc should always be handled the same way on all projects. Handling of non-standard REST cases should be standardized as much as possible (i.e. invoking actions, creating linkages between objects, or other operations that aren't inherently RESTful). Consistency of attribute/property naming would also be a big help.
It's important for remembering the API.
Cross project consistency matters, can't stand when two APIs are called differently for similar data.
Yes. I should be able to ask for a "new" whatever the service provides in the same way for every service.
Yes, very much so. It is important to be able to learn the conventions once (ie. REST and object types) and be able to apply that same knowledge to a new service - learning new workarounds or quirks per project is cumbersome and does not promote mastery of APIs.
Yes, a little like why OCI exists: consistency, consistency, consistency.
Read the details of the response; look up the documentation; look up communities or any other internet resources for help.
Yes, a lot. I google to decode the error codes. Better to be very specific about the 'error reason' rather than showing a common 'error code' for 2-3 error reasons.
I generally google and work on the error accordingly.
I try to figure out what went wrong, as this is what APIs are for. If it's an error which could be due to resources still building, it should be "soft", so that a retry might be worth it. But in any case, the message is important. And if it's clearly documented, as with HTTP return codes for example, it's easy to work with them.
Details are really important when debugging. The more information available in an error the better, including a stack trace if available.
I read document and dump API from Horizon.
Yes, detail is very important, should be adjustable.
Yes; the response to errors differs depending on context. Wrapped into other tools, checking responses and potentially retrying takes the lead; by hand, tweaking the request is the most common need :)
Yes, depending on what I was trying to do at the time. If it's a new feature, I'll usually read the message, try again and then RTFM. If it's an old feature that used to work - it's important to me as a sysadmin that I understand if it is a user error or an actual internal error with our OpenStack cloud that has gone unnoticed.
Yes, when making API calls the most important part is to really observe the detail of the API request and response. When getting an error response, I will dig into other API calls to make sure what's inside that API call is functioning well. For example, when we create a server with an image, flavor, and other arguments, we can check all those arguments first. Most error messages can tell you clearly what's wrong, but in some cases they only show you "something went wrong"!?
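Several answers above distinguish "soft" errors (a resource still building) that are worth retrying from hard failures. A minimal sketch of that retry pattern, using a hypothetical stand-in callable instead of a real HTTP client; the choice of retryable status codes is an assumption for illustration:

```python
import time

# Assumed "soft" statuses for this sketch: conflict while a resource is
# still building, or a temporarily unavailable service.
RETRYABLE = {409, 503}

def call_with_retry(request, attempts=3, delay=0.0):
    """Retry a callable returning (status, body) on soft error codes."""
    for _ in range(attempts):
        status, body = request()
        if status not in RETRYABLE:
            return status, body
        time.sleep(delay)  # back off before retrying
    return status, body  # give up: return the last response

# Hypothetical stand-in for a real API call: fails twice, then succeeds.
responses = iter([(409, "building"), (409, "building"), (200, "ACTIVE")])
status, body = call_with_retry(lambda: next(responses))
print(status, body)  # → 200 ACTIVE
```

In real automation the callable would wrap an HTTP request, and the delay would grow between attempts (exponential backoff) rather than stay constant.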
No
Yes, versioning of APIs helps a lot.
No, I am not.
I haven't used them, but I know about them.
We have not started using microversions yet but know that this will be needed in the relatively near future.
No
No, I install the specific toolchain versions I need onto a clean container to talk to a given cloud.
Aware of microversions and what they provide, but don't utilize them much other than selecting the latest that we have running.
No and Yes, aware of it, but not sure how OpenStack define it.
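For respondents who "select the latest" microversion, the mechanics are just an extra request header. A minimal sketch, with a placeholder token; the microversion value shown is an arbitrary example:

```python
def nova_headers(token, microversion=None):
    """Build headers for a nova request, optionally pinning a microversion."""
    headers = {"X-Auth-Token": token, "Accept": "application/json"}
    if microversion:
        # Modern cross-project form; nova also accepts the older
        # X-OpenStack-Nova-API-Version header.
        headers["OpenStack-API-Version"] = "compute %s" % microversion
    return headers

# Placeholder token; "2.37" is just an example microversion.
h = nova_headers("example-token", "2.37")
print(h["OpenStack-API-Version"])  # → compute 2.37
```

Omitting the header makes the server fall back to its minimum microversion, which is why code written against a newer feature silently misbehaves on clouds where the header was never sent.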
Many times too little detail
Too little, when just error codes are displayed in responses. Also, the same error codes are used for different error reasons. (Can't remember a specific scenario currently.) Giving a reason instead of a code eases the life of non-web-developers.
Nope, the responses contain all the required useful information.
Both. But this might be because of the different levels of information they convey. Some errors would leak too much information about what went wrong, which, for example, a customer shouldn't know about. For admins, it would sometimes be more helpful to get a bit more detail; otherwise they need to dig into the logs again.
Generally the amount of detail is good. Every once in a while it will be necessary to dig into the server logs or debug the code to figure out a specific issue.
I feel too little for Nova API "GET /v2/servers". So I call "GET /v2/servers/detail" after that.
Too little; if there's too much, I can parse out what I need.
Too little, but not by much.
Too little
Not sure
I figure out the UUID by querying "GET" for the resource.
By doing queries that should give them. Sometimes I simply pick them from Horizon; sometimes I already use the API to get them. But sometimes it can get messy, as you might end up with same-named resources, and that makes it very hard to find the right one.
Generally we query by name or other parameters that will identify the resource we are interested in.
I use the jq command to search for the UUID in the list, e.g. with curl.
Query keystone API
nova (for example) list and then look through the response
GET the objects in the resource list, this shows all UUIDs of the objects.
Call a list API then filter it out
Not applicable
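The "list then filter" pattern most respondents above describe can be sketched in a few lines. The server data here is a hypothetical sample shaped like a nova `GET /servers` response; the names and UUIDs are invented:

```python
# Hypothetical sample shaped like a nova "GET /servers" listing.
SAMPLE_SERVERS = {
    "servers": [
        {"id": "b1946ac9-2a67-4c83-9d3e-111111111111", "name": "web-1"},
        {"id": "c2a57bd0-3b78-4d94-8e4f-222222222222", "name": "web-2"},
        {"id": "d3b68ce1-4c89-4ea5-9f50-333333333333", "name": "web-1"},
    ]
}

def uuids_by_name(listing, name):
    """Return all UUIDs matching a name; duplicates are possible!"""
    return [s["id"] for s in listing["servers"] if s["name"] == name]

matches = uuids_by_name(SAMPLE_SERVERS, "web-1")
print(matches)  # two results: name alone is ambiguous here
```

This also shows the messiness one respondent mentions: two servers named `web-1` yield two UUIDs, so the caller still has to disambiguate. The jq equivalent over the raw JSON would be along the lines of `.servers[] | select(.name=="web-1") | .id`.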
No. The catalog, in most cases, is too big. Also it is not very convenient to view it in the logs.
Yes
Yes
Yes - I'm planning to derive a lot more of my logic from the service catalog.
Yes
N/A
Yes - It seems like a sensible convention to configure a single API endpoint(keystone) and then use that to gather the others based on that cloud/user configuration. Hardcoding endpoints(like we need to do for unauth password resets) leads to duplication of this information and configuration pain if this needs to change.
No, I don't do much inspection.
Documentation is really good. I always refer to the API guide at api.openstack.org. The uniformity in documenting APIs for almost all the projects is cool. I actually have a scraper to populate an excel sheet with the API info, and it works for most of the projects.
Documentation is the resource from which anyone can learn about a thing. The OpenStack API docs should contain a working example for each API with sample output, so that anyone can easily learn it.
Sometimes a few examples would be helpful, but it has become much better in the last releases. API documentation is quite good.
Most of the API documentation provides a lot of good detail and information. Sometimes there are features added in a release and the API doc does not specify the release that it was added in, so when you try to use it in an earlier release it obviously doesn't work. It would be nice to have swagger generated API docs available on the node so that you could query for information from the specific code running on the node you are working with.
See above
Most of the docs are hard to find, making it easier to search for the solution on Google rather than through the documentation. One of the biggest annoyances that I see is on pages like http://docs.openstack.org/user-guide/cli-launch-instances.html. That page starts the user off using the openstack commands, before switching to nova commands at the end.
I think they are good - I like that there is example JSON for each request and that all methods are detailed. The error codes could be expanded upon to detail in what situations each can be raised, but perhaps that's the point at which I should just look at the code :)
I like the detail of the API arguments, but there are too few examples for combinations of usage, like how we can do a live migration of a server. And where are the user stories?
Maybe the APIs for all projects should be at the same version. This might be a bit difficult to achieve with new projects coming in, so maybe a common version for all the projects announced under the "big tent" during that OpenStack release.
To make developers aware of how to contribute to OpenStack, there should be video tutorials as well, from beginner to advanced level. It would help beginners a lot.
More information about API accessibility, like which endpoints should be protected, and why, and which should not. I just noticed that this is not easy to find for some people, and it led to some problems where there was a setup in which not all endpoints were reachable from any point - sometimes only the internal ones, sometimes the public ones, but not both. And this caused errors with tempest runs, for example. But I might have just missed the right doc, so apologies if so.
API discovery is becoming a big deal - the ability to query an endpoint for the details of the API interface and the data model is going to be really important.
There needs to be a working group to oversee all projects and interoperability. Also better documentation.
Mostly discussed