Maybe I am playing with more of these environments than the average API consumer is, because of what I do for a living, but I have to say, I am getting tired of "importing" my API definitions. Don't get me wrong. I am stoked that all these tools support importing machine readable API definitions like OADF and API Blueprint, but I cannot help always looking to what should be next, and I want to be able to just run each API in the HTTP client of my choice.
For all of my own APIs, I provide a Postman icon, and a link to a Postman Collection. It just gives you quick access to the machine readable currency that all the services I depend on speak: OADF, API Blueprint, and Postman Collections. However, I still have to import it into Postman, or whichever other HTTP API client or service I will be using. While this is a good start, and is something I recommend other API providers do, I think we can still do better.
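To make the currency concrete, here is a minimal sketch of a Postman Collection assembled in Python — the API name, endpoint, and fields shown are hypothetical placeholders, just enough to show the kind of file you would link from that Postman icon:

```python
import json

# A minimal Postman Collection sketch -- the API name and URL below are
# hypothetical placeholders, not a real service.
collection = {
    "name": "Example API",
    "requests": [
        {
            "name": "List notes",
            "method": "GET",
            "url": "http://api.example.com/notes",
        }
    ],
}

# Serialize it so it can be linked from an API portal next to a Postman icon.
print(json.dumps(collection, indent=2))
```

Anyone who downloads this file can import it into their HTTP client and start making calls, without hand-entering every endpoint.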
Think of the Twitter and Facebook share buttons, but for API integration, where the currency is OADF and API Blueprint definitions. I appreciate all you HTTP API client providers considering my crazy requests. If done right, I think it could result in some potential new users, depending on how you'd handle the process for users who clicked, but didn't actually have an account with your platform yet--anyway, food for thought. #onward
I am spending some time taking another look at my "client research", which started out as just Postman and PAW, but now contains ten separate services I'm bundling into this area of research. As with all my research areas, these project repos shift, evolve, split, and merge over time, as the API space changes, and my awareness of it grows.
I completely understand the term "client" doesn't provide an adequate label for this bucket of research, but for now, it will have to do. As I add a couple of new services to the bucket, and make my way through some of the existing ones, I wanted to step back and look at what they were offering, but more importantly the messaging these companies use to quantify what they are offering.
When it comes to what I call "lines along the API lifecycle", I saw these areas represented.
This is where the API client line potentially intersects with all of these other API life-cycle lines. However, when you start to analyze the features or building blocks offered by these service providers, you begin to see each stop along the API client line, which becomes pretty critical to other areas of the API life-cycle.
I know that what I am saying might not be completely clear--it isn't entirely clear for me either. That is why I tell stories: to try and find the patterns, and learn how to articulate all the moving parts. I'm still trying to figure out what to call my research, alongside all of these API service providers working to define exactly what it is they are selling as well.
The more time I spend with my API client research, the more all of this comes into focus. The problem is that these companies are rapidly adding new features in response to what their customers need, which keeps me on my toes, and increases the overlap with other lines I track along the API life-cycle.
I just wanted to take a moment, update my research, and take another look at the companies, and tooling at play.
Shortly after the Zypr voice API came on to the scene in 2011, I launched my research into voice APIs. Like many other areas of the API universe, voice has come in and out of focus for me--something I think will take much longer to unfold than any of us could have ever imagined. Zypr quickly ran out of steam, and other similar solutions have come and gone over the last couple of years as well, leaving my research pretty scattered across many different concepts of how voice and APIs are colliding--lacking any real coherency.
I took a moment last week to take a fresh look at my voice API research, because of a comment by Steven Willmott (@njyx), the CEO of 3Scale. It's not an exact quote, but Steve spoke about how voice is the future of API consumption, after he attended AWS re:Invent in Las Vegas. I agree with him. Voice APIs are a topic that has been significantly stimulated by the introduction of the Amazon Echo platform, but I also feel it coincides with a critical mass of available API driven resources that will deliver some of the value these platforms are promising users.
Voice recognition has always been something that leaves a lot to be desired--think Siri. Even with these challenges there are many dimensions to the voice API discussion, and with the amount of resources now available via simple APIs in 2015, I feel we are reaching a more fertile, and friendly time for voice solutions to return the value end-users desire. We now have a rich playing field of weather, news, stocks, image, video, podcast, and other data, content, rich media, and programmatic resources, which can be linked to specific voice commands--something we didn't have before.
While there is still so much work to be done, I agree with Steve's vision that voice will play an increasingly significant role as an API client. I would add that like mobile, or the recent wave of wearables, voice will have special constraints when it comes to API design, further requiring API providers to keep their APIs simple, and reflect how users will experience them--not just a SELECT * FROM table WHERE q = 'search' with a URL bound to it.
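To illustrate the constraint, a voice client really needs a flat mapping from a spoken intent to a single, purpose-built API call, rather than an open-ended query interface. A rough sketch, where the intents and endpoint paths are entirely hypothetical:

```python
# Map spoken intents to simple, purpose-built API endpoints -- the intents
# and paths here are hypothetical, for illustration only.
INTENT_ROUTES = {
    "weather today": "/weather/today",
    "latest news": "/news/latest",
    "my schedule": "/calendar/today",
}

def route_intent(utterance):
    """Resolve a recognized voice utterance to an API path, if we know it."""
    return INTENT_ROUTES.get(utterance.strip().lower())

# A flat lookup reflecting how a user talks, not a SQL-style query.
print(route_intent("Weather Today"))
```

If an API's surface area already reads like these simple, human phrases, wiring it up to a voice platform is a much shorter trip.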
I think the API providers who are further along in their journey will get a boost as voice evolves as an API client, and voice enabled app developers are able to easily integrate valuable API driven resources into their solutions. Even with my new found optimism about voice APIs, I still think we are years away from voice solutions actually living up to even a small portion of the hype they have gotten over the years. Regardless, I'll be working to keep a closer eye on things, and will be sharing via my voice API research.
I've been tagging companies that I come across in my research, and stories that I find with the term "orchestration" for some time now. Some of this overlaps with what we know as cloud-centric orchestration using Puppet or Chef, but I am specifically looking for how we orchestrate across the API lifecycle which I feel overlaps with cloud orchestration, but pushes into some new realms.
Now that I'm thinking in terms of orchestration--something that seems to be a core characteristic of these new API hubs, work spaces, or garages--I'm seeing a possibly new vision of the API life-cycle. I'm going to organize these new hubs, work spaces, and garages under my IDE research. I am starting to believe that these new work spaces are just the next generation IDE, meant to span the entire API life-cycle--we will see how this thought evolves.
This new approach to API IDEs gives us design and development capabilities, but also allows us to mock and deploy APIs. You can generate API documentation and SDKs, and I'm seeing hints of orchestration using Github and Docker. I'm seeing popular clients like Postman evolve to be more like an API life-cycle IDE, and I'm also seeing API design tooling like Restlet Studio invest in HTTP clients to expand beyond just design, adding live client interaction, testing, and other vital life-cycle elements.
None of my research is absolute. It is meant to help me make sense of the space, and give me a way to put news I curate, companies I discover, and open source tooling into meaningful buckets that might also help you define a meaningful version of your own API life-cycle. I apologize if this post is a little incoherent, but it is how I work through my thoughts around the API space, how things are expanding and evolving in real-time--something I hope will come into better focus in coming weeks.
I was given an introduction to the Microsoft Graph, a concept being applied to Office 365 APIs, other Microsoft APIs, and potentially beyond, to map out segments of users and everyday objects. As I learn more about this unifying graph API effort, I will write more, but this particular story is about how we communicate around the first steps developers take when integrating with any API. As an API provider, how you talk about integration, and craft your on-boarding resources, can significantly impact how developers view your resources--something I think will always need some work across the space.
After being introduced to the Microsoft Graph APIs, we were given a list of code resources we could use to hack against the API. The API integration overview had all the modern elements of API integration, with C#, Java, PHP, Node.js, Ruby, and other "coming soon" libraries. The resource toolkit even had a sandbox account we could use, helping us on-board with less friction. While this approach is very progressive for the Microsoft world I've known, evolving us beyond the endless sea of C# focused WSDLs we have all seen historically, I would like to point out what I think should be the next step in our evolution.
It makes me happy that we now speak in multiple programming languages, and provide sandbox or simulation environments. +1 What I'd like to see next is that we also speak more HTTP, not just language specific clients. I'd like to see these types of API on-boarding toolkits start providing a Postman Collection for the API, or even better, a Swagger or API Blueprint definition that allows me to on-board using the HTTP client of my choice, like Postman, PAW, or Insomnia REST. I agree that we should be speaking the native language of the developers we are courting, but I like to nudge things forward, and encourage speaking the more generic language of HTTP, for those of us who program in many different languages.
Just like being multi-lingual with APIs has moved us out of our web service silos, I'm hopeful that if more developers speak HTTP, it will help move us into a future where API developers are more HTTP literate, and are really leveraging the strengths of HTTP, or even better, HTTP/2, in their everyday worlds. I have started including Postman Collections, along with my Swagger definitions, for my APIs. I'm also working to include API Blueprint, and other API definition formats, something that will allow potential API consumers to on-board using my language specific libraries, or the HTTP client of their choice.
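For reference, speaking "generic HTTP" can be as small as this — a minimal Swagger 2.0 definition built as a Python dictionary, where the title, host, and notes endpoint are hypothetical placeholders for a real API:

```python
import json

# A minimal Swagger 2.0 definition -- the title, host, and path are
# hypothetical placeholders for illustration.
swagger = {
    "swagger": "2.0",
    "info": {"title": "Example Notes API", "version": "1.0.0"},
    "host": "api.example.com",
    "basePath": "/",
    "paths": {
        "/notes": {
            "get": {
                "summary": "List notes",
                "responses": {"200": {"description": "A list of notes"}},
            }
        }
    },
}

# Publish this file alongside your language specific SDKs, and any HTTP
# client that understands the format can import it.
print(json.dumps(swagger, indent=2))
```

A file like this is client-agnostic: Postman, PAW, Insomnia REST, or any future tool can consume it, while the language specific SDKs keep serving the developers who want them.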
As part of this work, both teams are working hard to evolve our tooling for working with, and validating API definitions. I mentioned a couple weeks back, when I shared client SDK research conducted by APIMATIC, that quality SDK generation is kind of the high water mark for measuring API definition completion--meaning if your API definition isn't complete enough to generate a functional SDK, you need to spend more time in your API design editor, until it is more complete.
The one thing I've learned in five years as the API Evangelist is that we technologists and developers don't always see the world like everyone else. We focus on the perfection of the technology, our own desires for the future, and often miss the mark on what end-users actually need. One of the hallmark successes of APIs over SOA is that, almost by accident, APIs jumped out of the SOA petri dish (thanks Daniel Jacobson - @daniel_jacobson), and were used to solve everyday problems that end-users face, using the technology that is readily available (aka HTTP).
I think we API folks have done a great job of delivering valuable resources to mobile applications, and a decent enough job of delivering the same resources to web applications, and I guess we are figuring out the whole device thing? maybe? maybe not? Regardless, one area where we have failed a major aspect of the business world is delivering valuable API resources to the #2 client in the world—the spreadsheet.
Then, using a new spreadsheet, I clicked on add-ons > Blockspring, and logged into my account. After giving Blockspring access to the Google Spreadsheet via my Google account OAuth, I was given an API console in the right hand sidebar of my spreadsheet interface. The API options I’m given aren't the usual geek buffet--they are everyday use scenarios that would attract the average spreadsheet user.
I selected the IMDB movie search, and once chosen, I was given the option to populate my spreadsheet with results, providing me with API driven resources right in my worksheets. The best part is that it uses one cell as a search term, allowing me to customize my IMDB search.
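Conceptually, a Blockspring-style spreadsheet function boils down to something like this sketch — one cell's value drives the search, and the return value is rows that spill into the surrounding cells. The function name and canned data here are hypothetical stand-ins for the live IMDB-backed call:

```python
# Hypothetical sketch of a spreadsheet function: one input cell drives the
# search, and the return value is a list of rows to fill the sheet.
SAMPLE_RESULTS = {
    "star": [
        ["Title", "Year"],
        ["Star Wars", 1977],
        ["A Star Is Born", 1954],
    ],
}

def movie_search(cell_value):
    """Return rows of results for the search term found in a single cell."""
    return SAMPLE_RESULTS.get(cell_value.strip().lower(), [["Title", "Year"]])

for row in movie_search("Star"):
    print(row)
```

Change the search cell, and the rows refresh — which is exactly the experience that makes an API feel native to a spreadsheet user.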
Using Blockspring, I’m given easy to use, API driven resources, that anyone can implement, like visualizing the recent news:
Or possibly evaluate stock volatility clustering, using stock market data APIs (cause you know we all do a lot of this):
Blockspring gives me over 1000 API driven functions that I can use in my Google Spreadsheet—kicking everyone’s asses when it comes to potential API client delivery. While we technologists argue over whether or not we can automatically generate Swagger driven SDKs, and the importance of hypermedia APIs when deploying the next generation of clients, someone like Blockspring comes along and pipes APIs into the #2 client in the world—the spreadsheet. #winning
Now the game will be about getting the attention of Google Spreadsheet users, developing comparable Microsoft Excel tooling, and getting mainstream Excel users’ attention as well. The rest of you will have to get the attention of Blockspring, and make sure your API resources have simple, meaningful endpoints that can be piped in as Blockspring Google Spreadsheet functions. Spreadsheet driven business units should not have to learn about APIs and go looking for them at each individual API portal—API providers should find and educate business users about their resources, via one of the most ubiquitous tools in business.
The majority of the world's data is locked up in spreadsheets, and CSV files. Something I learned during my short time in Washington DC, is that the API community is going to have to court the legions of data stewards who spend their days in spreadsheets at the companies, and government agencies around the world, if we are going to be successful. The tooling for deploying APIs from spreadsheets has emerged, but we have a lot of work ahead to make them simpler and easier to use.
With the majority of the world's data locked up in spreadsheets, many business decision makers also have their heads in a spreadsheet on a daily basis, depending on the data, calculations, and visualizations that influence their daily decision-making. I’m seeing only light efforts around delivering API driven services in the spreadsheet--something that is going to have to grow significantly before the API industry can reach the scale we would like.
I know that the spreadsheet does not excite API providers and API integrators, but it is a comfortable tool for many of the business ranks, and if we are going to get them to buy into the API economy, and play nicely, we are going to have to accommodate their world. When thinking of spreadsheets and APIs, don't just think about delivering content and data to APIs, but also how APIs can deliver vital content and data back to spreadsheet users—acknowledging that the ubiquitous tool can provide huge benefits as an API client, as well as a data source.
This post has been open for almost two weeks now in Evernote. It began as a simple story about the possibility of generating code samples and libraries using Swagger. The longer it stays open, the wider the definition becomes, so I have to post something, just to draw a line in the sand. I’m not talking about generating code that runs on the server--this post is all about everything on the API consumption side of things.
Shortly after Wordnik launched the machine readable API definition format Swagger, they launched a library for generating client side code samples in a variety of languages. This was something that was evolved upon by Apiary, with the launch of their API design platform, and introduction of API Blueprint. Even with these advances forward, there were still many shortcomings, and debate around what you could actually auto-generate on the client-side using a machine readable API definition continued. I can’t tell you how many random Tweets I get from people saying, “Oh is auto-generation of code cool again?” or “I thought you couldn’t auto-generate client code or SDKs ;-)"
Amidst the debate about what is really possible, and the jokes about our SOA past, new players have emerged like Apimatic that are looking to raise the bar when it comes to generating not just simple code samples, libraries, or stubs, but sophisticated API SDKs. I am sure the jokes about automating client code will still occur, but there is no denying that the overall conversation is moving forward.
As I’m exploring my own limitations of what is possible when generating client-side code with Swagger, I also come across new players like Lucybot, who are moving the conversation forward with API cookbooks, and Single Page Apps (SPAs) generated from Swagger definitions. I’m not in denial that there is a lot of work ahead, but in the two weeks that I’ve been crafting this post, I’d say I have gotten a glimpse of what is next. When you bundle the latest movements in virtualization and containerization with using API definitions like Swagger and API Blueprint to auto-generate client side code, I feel like the potential is unlimited, and things are just heating up.
When you start talking about generating server or client side code for APIs, using machine readable API definition formats like Swagger or API Blueprint, many technologists feel compelled to let you know that at some point you will hit a wall. There is only so far you can go when using your API definition as a guide for generating server-side or client-side code, but in my experience you can definitely save significant time and energy by auto-generating code using Swagger definitions.
I just finished re-designing 15 APIs that support the core of API Evangelist, and to support the work I wrote four separate code generation tools:
PHP Server - Generate a Slim PHP framework for my API, based upon a Swagger definition.
PHP Client - Assemble a custom PHP client of my design, using a Swagger definition as a guide.
JS Client - Generate a simple JavaScript client for the API, using a Swagger definition as a guide.
MySQL Database - Generate a MySQL script based upon the data models available in a Swagger definition.
Using Swagger, I can get 90-100% of the way there for most of the common portions of the APIs I design. When writing a simple CRUD API for notes or links, I can auto-generate the PHP server, a JS client, and the underlying MySQL table structure, which in the end runs perfectly with no changes.
Once I need more custom functionality, and have more unique API calls to make, I have to get my hands dirty and begin manually working in the code. However, auto-generation sure gets me a long way down the road, saving me time on the really mundane heavy lifting of creating the skeleton code structures I need to get up and running with any new API.
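As a rough sketch of the MySQL generation step, here is how the data models in a Swagger definition can be walked to emit a CREATE TABLE script. The "note" model and the type mapping below are simplified assumptions for illustration, not my actual tool:

```python
# Simplified sketch: turn a Swagger data model into a MySQL CREATE TABLE
# statement. The "note" model and type mapping below are illustrative only.
SWAGGER_TYPE_TO_MYSQL = {
    "integer": "INT",
    "string": "VARCHAR(255)",
    "boolean": "TINYINT(1)",
}

definition = {
    "definitions": {
        "note": {
            "properties": {
                "id": {"type": "integer"},
                "title": {"type": "string"},
                "body": {"type": "string"},
            }
        }
    }
}

def model_to_sql(name, model):
    """Emit a CREATE TABLE statement for one Swagger data model."""
    cols = ", ".join(
        "%s %s" % (prop, SWAGGER_TYPE_TO_MYSQL.get(spec["type"], "TEXT"))
        for prop, spec in model["properties"].items()
    )
    return "CREATE TABLE %s (%s);" % (name, cols)

for name, model in definition["definitions"].items():
    print(model_to_sql(name, model))
```

The same walk over the definition's paths and models is what drives the server and client generation — the Swagger file is the single source of truth for all of it.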
I’m also exploring using APIs.json, complete with Swagger references, and Docker image references to further bridge this gap. In my opinion, a Swagger definition for any API, can act as a fingerprint for which interfaces a docker image supports. I will write about this more in the future, as I produce better examples, but I'm finding that using APIs.json to bind a Swagger definition, with one or many Docker images, opens up a whole new view of how you can automate API deployment, management, and integration.
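A rough sketch of what that binding could look like as an APIs.json entry — the API name, Swagger URL, and Docker image reference are hypothetical placeholders, and the "X-Docker-Image" property type is my own convention for illustration, not part of the APIs.json spec:

```python
import json

# Hypothetical APIs.json entry binding a Swagger definition to a Docker
# image -- names, URLs, and the X-Docker-Image property type are
# placeholder conventions for illustration.
apis_json = {
    "name": "Example API Stack",
    "apis": [
        {
            "name": "Notes API",
            "baseURL": "http://api.example.com/notes",
            "properties": [
                {"type": "Swagger", "url": "http://example.com/notes-swagger.json"},
                {"type": "X-Docker-Image", "url": "example/notes-api:latest"},
            ],
        }
    ],
}

print(json.dumps(apis_json, indent=2))
```

With the interface fingerprint (Swagger) and the runtime (Docker image) listed side by side, tooling could discover, deploy, and integrate with an API from a single index file.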
I’ve been tracking the usage of spreadsheets in conjunction with APIs for several years now. Spreadsheets are everywhere--they are the number one data management tool in the world, and whether API developers like it or not, spreadsheets will continue to collide with the API space, as both API providers and consumers try to get things done using APIs.
APIs are all about getting access to the resources you need, and spreadsheets are being used by both API providers and consumers to accomplish these goals. It makes complete sense to me that business users would be looking for solutions via spreadsheets, as they are one potential doorway to hacking for the average person—through the macros, calculations, and other dynamic features people execute within a spreadsheet.
I know IT would like to think their central SQL, MySQL, Postgres, Oracle, and other databases are where the valuable data and content assets are stored at a company, but in reality the most valuable data resources are often stored in spreadsheets across an organization. When it comes time to deploy APIs, this is the first place you should look for your data sources, resulting in Microsoft Excel and Google Spreadsheet to API solutions like we’ve seen from API Spark.
I’m seeing spreadsheets used by companies to deploy APIs in some of the following ways:
Microsoft Excel - Turning Microsoft Excel spreadsheets directly into APIs. Taking a spreadsheet and generating an API is the fastest way to go from a closed data resource to an API anyone can access, even without programming experience.
Google Spreadsheet - Mounting public and private Google Spreadsheets is an increasingly popular way to publish smaller datasets as APIs. Since Google Spreadsheets is web-based, it becomes very easy to use the Google Spreadsheet API to access any Spreadsheet in a Google account, then generate a web API interface that can allow for reading or writing to a spreadsheet data source via a public, or privately secured API.
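The heart of either deployment approach is one small transformation — a header row plus data rows become JSON records that a web API can serve. A minimal sketch, using hypothetical data in place of a live spreadsheet fetch:

```python
import json

# Minimal sketch of spreadsheet-to-API: the first row is the header, and
# each following row becomes a JSON record the API would serve. The data
# below is a hypothetical stand-in for rows pulled from a live sheet.
rows = [
    ["city", "population"],
    ["Portland", 609456],
    ["Eugene", 156185],
]

def rows_to_records(sheet_rows):
    """Zip the header row against each data row to build JSON-ready records."""
    header = sheet_rows[0]
    return [dict(zip(header, row)) for row in sheet_rows[1:]]

print(json.dumps(rows_to_records(rows), indent=2))
```

Wrap that output in an HTTP endpoint, and the spreadsheet's steward has published an API without ever leaving their comfort zone.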
Beyond deploying APIs I’m seeing API providers provide some innovative ways for users to connect spreadsheets to their APIs:
Spreadsheet as Client - Electronic parts search API Octopart has been providing a bill of materials (BOM) solution via Microsoft Excel, and now Google Spreadsheets for their customers--providing a distributed parts catalog in a spreadsheet, that is kept up to date via public API.
Spreadsheet as Cache - I’ve talked with U.S. Census and other data providers about how they provide Microsoft Excel and Google Spreadsheet caches of API driven data, allowing users to browse, search and establish some sort of subset of data, then save as a spreadsheet cache for offline use.
Moving out of the realm of what API providers can do for their API consumers with spreadsheets, and into the world of what API consumers can do for themselves, you start to see endless opportunities for API integration with spreadsheets using reciprocity providers:
Zapier - There are five pages of recipes on the popular API reciprocity provider Zapier that allow you to work with Google Docs, and 57 pages that are dealing directly with Google Drive, providing a wealth of tools that non-developers (or developers) can use when connecting common APIs up to Google Spreadsheets.
I’ve seen enough movement in the area of Microsoft Excel and Google Spreadsheets being used with APIs to warrant closer monitoring. To support this I've started publishing most of my research to an API Evangelist spreadsheet research site, which will allow me to better track, curate, tag, and tell stories around spreadsheets and APIs.
As I do with my 60+ API research projects, I will update this site when I have time, publishing anything I've read and written, and companies I think are doing interesting things with spreadsheets and APIs. I'm pretty convinced that spreadsheets will be another one of those bridge tools we use to connect where we are going with APIs, with the reality of where the everyday person is, just trying to get their job done.
Disclosure: API Spark is an API Evangelist partner.
As I’m working to add yet another API example to my growing list of hypermedia APIs in the wild, I can't help but think about the long evolution of hypermedia, and how it will eventually become part of the mainstream API consciousness.
I first started following the often heated discussions around hypermedia a couple years ago as leading API technologists began discussing this significant evolution in API design. Hypermedia has numerous benefits and features, but one you often hear in discussions is that if we use hypermedia we can stop designing custom clients that consume APIs.
The logic is that if every API call comes bundled with URLs for discovering and navigating the resources that are made available via an API, clients can just use this as a blueprint for any app to API interactions. This became a fairly large argument between hypermedia architects and hypermedia haters, something that I think turned a lot of people off to the concept, forcing us to stick with many of the bad habits we already knew.
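To make that idea concrete, here is a sketch of a client navigating by the links bundled in each response, rather than hard-coded URLs. The resources, link relations, and canned "responses" below are hypothetical stand-ins for live HTTP calls:

```python
# Sketch of hypermedia navigation: each response carries the links a client
# needs next, so nothing is hard-coded. These dictionaries are canned
# stand-ins for live HTTP responses.
RESPONSES = {
    "/": {"links": {"notes": "/notes"}},
    "/notes": {"items": ["first note"], "links": {"next": "/notes?page=2"}},
    "/notes?page=2": {"items": ["second note"], "links": {}},
}

def follow(start, rel_path):
    """Walk link relations from a starting resource, e.g. ["notes", "next"]."""
    resource = RESPONSES[start]
    for rel in rel_path:
        resource = RESPONSES[resource["links"][rel]]
    return resource

# The client only knows the entry point and the link relations -- the
# server is free to change the actual URLs at any time.
print(follow("/", ["notes", "next"])["items"])
```

This is the property that makes the "no custom clients" argument possible: the client binds to relations, not to URL structures.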
As I review these new hypermedia APIs, few of them are perfect by any hypermedia measurement, but they use the sensible portions of hypermedia discovery and navigation to deliver a better API experience for developers. I don't think API providers are doing it because of the perfect hypermedia vision we've heard articulated in the past, they are borrowing the pieces that make sense to them and that meet their goals.
Someday we may achieve a world where API clients aren't custom, with every application automatically knowing how to discover, interact with, and navigate any API resource it needs. However, I think in the current reality, we will see hypermedia being adopted because it just makes sense as a next step in sensible API design, and this is how we should communicate it to curious API designers looking to understand exactly what this concept called hypermedia is.
One of the predictions that caught my eye was that "server mash-ups will increase but client mash-ups will decline"--he clarifies it with:
The increasing popularity of languages like Node.js, Erlang and Clojure will make implementing server-side mash-ups more efficient and easier to maintain than doing the same work within a client application; especially for the mobile platform. This will reduce the “chattiness” of client-side applications and increase the security and flexibility of server-side implementations. The result will be a perceived increase in responsiveness and a reduced use of battery power on mobile apps.
My prediction is we’ll see many more returns from server side API mashups than we did during the client side mashup gold rush days! Especially if providers open source these stacks, while also offering them as a service.
We all have our own approaches to API design and development, many of which will never see the light of day. In the API space we hear a lot about API management and API success stories, but not much about the process of designing, developing and initial deployment of APIs. I just had a little taste of how the Wordnik team approaches it, using Swagger.
Often when you hear about Swagger in the industry, you hear about the UI portion. You know, the sexy interactive documentation that is fast becoming a standard with APIs, but that is just the tip of the iceberg--there is a whole lot more power to Swagger than just interactive docs.
“The heart of Swagger is the specification, and from that, cool shit can get done!”, says Tony Tam, Wordnik CEO and technical co-founder.
To demonstrate, Tony walked me through Wordnik’s approach to designing, developing, and deploying a new API driven iPad app, using a team of three:
One person driving an editor writing JSON files, which are the Swagger spec for the needed API
Then all three discussed the operations and parameters, while adding them to the JSON, and re-running the Swagger spec validator after each meaningful change
When they were happy with the specs, they loaded the JSON files into the UI through Apache installed on a local machine
After they inspected each API and operation again, they wrote the models in the spec files, and reviewed again to make sure everything was good
Then they ran the Swagger Codegen and generated a Scalatra (Scala) server from the spec files
Then they ran the Swagger Codegen and generated an Objective-C client from the spec files
The server developer went off and wired the server to the business logic
The front-end guy went and wired the UI to the Objective-C library
The process took 2.5 hours in total, from API to interface--a technique they call interface-driven development, which focuses on modeling the perfect interface for the problem they are trying to solve using an API.
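The validation step in that loop can be sketched as a simple check that the spec files carry the pieces codegen will depend on — a hypothetical, pared-down stand-in for the real Swagger spec validator:

```python
# Pared-down sketch of a Swagger spec validation step -- it checks only a
# few of the fields codegen depends on, as a hypothetical stand-in for the
# real Swagger spec validator.
def validate_spec(spec):
    """Return a list of problems found in a Swagger-style spec dictionary."""
    errors = []
    if "apiVersion" not in spec and "swagger" not in spec:
        errors.append("missing version field")
    if not spec.get("paths") and not spec.get("apis"):
        errors.append("no operations defined")
    return errors

good = {"swagger": "2.0", "paths": {"/words": {"get": {}}}}
bad = {}

print(validate_spec(good))
print(validate_spec(bad))
```

Re-running a check like this after every change is what keeps the spec trustworthy enough to generate the server, the client, and the docs from the same files.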
The Wordnik approach to API design and development using Swagger is interesting. For me, it demonstrates that a clean API spec should not be an afterthought--a means by which you generate interactive API documentation, or something you reach for only when API discovery becomes an issue. Your entire design, development, and management process should center around a meaningful API spec, which will then allow you to deploy your API server, interactive documentation, and client code, while also providing API discovery.
"They're trying to control those eyeballs on their apps, they're an ad-based company, they make money that way,” says Twitpic founder Noah Everett, according to Buzzfeed.
One thing is clear. Twitter is serious in its effort to take control over its ecosystem. It has a plan, and it's systematically rolling it out, taking control over each area it needs to maximize "promoted" revenue.
15 Apr 2009
I am constantly looking for better tools to manage my cloud environments. I have been using Amazon S3 since it first came out. I really saw the potential Amazon S3 had early on, as storage for all my web applications.
I also store all my data on Amazon S3 for personal and business use. I have used the Firefox add-on S3 Fox to upload and download all my data. It has worked well enough to meet my needs, except it fails a lot. It doesn't handle large files very well, and doesn't recover from poor connection failures. It's crude, but gets the job done.
For the most part I rely on writing custom code to upload files to Amazon S3.
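That custom code usually amounts to splitting large files into parts, so a dropped connection only costs you one part instead of the whole transfer. A sketch of the part math, with the actual upload call left as a comment since it needs live AWS credentials (and is shown with the modern boto3 client, which postdates this post):

```python
import math

# Part-size math for a resumable, multipart-style upload: a failed part
# can be retried without restarting the whole transfer.
PART_SIZE = 5 * 1024 * 1024  # 5 MB parts

def part_count(total_bytes):
    """How many parts a file of total_bytes splits into."""
    return max(1, math.ceil(total_bytes / PART_SIZE))

print(part_count(2_100_000_000))  # a 2.1 GB archive, like the one below

# With credentials configured, the upload itself is a single call in the
# modern boto3 library, which handles multipart and retries internally:
#   import boto3
#   boto3.client("s3").upload_file("images.zip", "my-bucket", "images.zip")
```

Tools that bake this retry-per-part behavior in are exactly what make large uploads survivable over a flaky connection.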
I came across a new tool that I would put into my professional or enterprise quality tool set for managing my Amazon S3 data. It is called CloudBerry.
It works like most familiar FTP clients, with your local files on the left...and your remote files on the right. It is definitely the best tool out there for the average person to manage their data in the cloud.
It opens up the Amazon S3 storage cloud for use beyond just developers. I am uploading a 2.1 GB zipped folder of images right now. It is a little slower than S3 Fox or code I would write, but it's slow and steady. I have to pack up my computer and go upstairs, so I will hit pause...and resume once I set back up.
I highly recommend it for those of you who wish to start using the cloud for your everyday storage needs.