These are the news items I have curated in my monitoring of the API space that have some relevance to the API client conversation, and that I wanted to include in my research. I am using all of these links to better understand how the space is testing their APIs, going beyond just monitoring, and understanding the details of each request and response.

08 Aug 2017
I have my own URL shortener for API Evangelist called apis.how. I use it to track the click through rates for some of my research projects, and partner sponsorships. I’ve had the URL shortener in operation for about two years now, and I still do not have any type of UI for it, relying 100% on Postman for adding, searching, and managing the URLs I am shortening, and tracking on.
My URL shortener just hasn't risen to a level of priority where I'll invest any time into an administrative interface or dashboard for it. I used Bitly and Google for a while, but I really just needed a simple shortener with basic counts, nothing more. When I bought the domain I launched a handful of API endpoints to support it, allowing me to add, update, search, and remove URLs, as well as track the click throughs, and query how many clicks a link received for each month. I can easily accomplish all of this through the Postman interface, making basic calls to my simple API--no over-engineering necessary.
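To make the shape of this concrete, here is a minimal sketch of the kind of logic behind those endpoints. The payload fields and click record format are hypothetical, illustrative stand-ins, not the actual apis.how schema:

```python
from collections import Counter
from datetime import date

# Hypothetical payload for an "add URL" endpoint -- field names are
# illustrative, not the actual apis.how schema.
def shorten_payload(long_url, slug):
    return {"url": long_url, "slug": slug}

# Roll raw click records up into per-month counts, mirroring the
# "how many clicks did a link get each month" query.
def clicks_per_month(clicks):
    return Counter(c["date"].strftime("%Y-%m") for c in clicks)

clicks = [
    {"link": "research", "date": date(2017, 6, 2)},
    {"link": "research", "date": date(2017, 6, 17)},
    {"link": "research", "date": date(2017, 7, 4)},
]
print(clicks_per_month(clicks))  # Counter({'2017-06': 2, '2017-07': 1})
```

Simple add and query operations like these are all a Postman-driven workflow needs--the interface does the rest.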
I was reading the post from Runscope on copying environments using their new API. I was looking through the request and response structure for their API, and it looks like a pretty good start when it comes to what I'd call API environment portability. I'm talking about allowing us to define, share, replicate, and reuse the definitions for our API environments across the services and tools we are depending on.
If our API environment definitions shared a common schema, and an API like the one Runscope provides, I could take my Runscope environment settings and use them in Stoplight, Restlet Client, Postman, and other API services and tooling. It would also help me templatize and standardize my development, staging, production, and other environments across the services I use, assisting me in keeping my environment house in order, and giving me something I can use to audit and turn over my environments to help out with security.
It is just a thought. An API environment API, possessing an evolving but common schema, just seems like one of those things that would make the entire API space work a little smoother. Making our API environments exportable, importable, and portable just seems like it would help us think things through when it comes to setting up, configuring, managing, and evolving our API environments--who knows, maybe someday we'll have API service providers who help us manage our API environments, dictating how they are used across the growing number of API services we are depending on.
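As a rough sketch of what portability could look like, here is a hypothetical common environment definition being converted into something resembling Postman's environment export shape. The "portable" schema is entirely my own invention, and the Postman structure is an approximation, not the official format:

```python
# A portable environment definition under a hypothetical common schema --
# just a name, a stage, and a bag of key/value settings.
portable_env = {
    "name": "API Evangelist",
    "stage": "staging",
    "settings": {"base_url": "https://staging.example.com", "api_key": "xxx"},
}

# Convert the portable definition into an approximation of Postman's
# environment export shape (a name plus a list of key/value entries).
def to_postman_environment(env):
    return {
        "name": "{0} ({1})".format(env["name"], env["stage"]),
        "values": [
            {"key": k, "value": v, "enabled": True}
            for k, v in sorted(env["settings"].items())
        ],
    }

print(to_postman_environment(portable_env)["name"])  # API Evangelist (staging)
```

A matching converter for each service would give us the import, export, and audit story described above.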
Disclosure: Runscope and Restlet are API Evangelist partners.
The Postman team has been hard at work lately, releasing their API data editor, as well as introducing variable highlighting and tooltips. The new autocomplete menu contains a list of all the variables in the current environment, followed by global variables, making your API environment setups more accessible from the Postman interface. Introducing a pretty significant time saver, once you have your environments setup properly.
This is a pretty interesting feature, but what makes me most optimistic is when this approach becomes available for parameters, headers, and some of the data management features we are seeing emerge with the new Postman data editor. It all feels like the UI equivalent of what we've seen emerge in the latest OpenAPI 3.0 release, helping us better manage and reuse the schema, data, and other bits we put to use across all of our APIs.
Imagine when you can design and mock your API in Postman, crafting your API using a common vocabulary. Reusing environment variables, API path resources, parameters, headers, and other common elements already in use across operations. Imagine when I get tooltips suggesting that I use the Schema.org vocabulary, or possibly even RFCs for date, currency, and other common definitions. Anyways, I'm liking the features coming out of Postman, and I'm also liking that they are regularly blogging about this stuff, so I can keep up to speed on what is going on, eventually cover it here on the blog, and include it in my research.
In 2017 I think that getting our act together when it comes to our data schema will prove to be just as important as getting it together when it comes to our API definitions and design. This is one reason I'm such a big fan of using OpenAPI to define our APIs because it allows us to better organize the schema of the data included as part of the API request and response structure. So I am happy to see Postman announce their new data editor, something I'm hoping will help us make sense of the schema we are using throughout our API operations.
The Postman data editor provides us with some pretty slick data management UI features including drag and drop, a wealth of useful keyboard shortcuts, bulk actions, and other timesaving features. Postman has gone a long way to inject awareness into how we are using APIs over the last couple of years, and the data editor will only continue developing this awareness when it comes to the data we are passing back and forth. Lord knows we need all the help we can get when it comes to getting our data backends in order.
The Postman data editor makes me happy, but I'm most optimistic about what it will enable, and what Postman has planned as part of their roadmap. They end their announcement with "we have a LOT of new feature releases planned to build on top of this editor, capabilities inspired by things you already do using spreadsheets". For me, this points to some features that would directly map to the most ubiquitous data tool out there--the spreadsheet. With a significant portion of business in the world done via spreadsheets, it makes the concept of integration into the API toolchain a pretty compelling thing.
I was reviewing the latest changes with Visual Studio 2017 and came across the section introducing connected services, providing a glimpse of Microsoft APIs baked into the integrated development environment (IDE). I've been pushing for more API availability in IDEs for some time now. It is not a new concept, with Google and Salesforce having done it for a while, but it is something I haven't seen any significant movement on in a while.
I have talked about delivering APIs in Atom using APIs.json, and have long hoped Microsoft would move forward with this in Visual Studio. All APIs should be discoverable from within any IDE--it just makes sense as a frontline for API discovery, especially when we are talking about developers. Microsoft's approach focuses on connecting developers of mobile applications, with "the first Connected Service we are providing for mobile developers enables you to connect your app to an Azure App Service backend, providing easy access to authentication, push notifications, and data storage with online/offline sync".
In the picture, you can see Office 365 APIs, but since I don't have Visual Studio I can't explore this any further. If you have any insight into these new connected services features in the IDE, please let me know your thoughts and experiences. If Microsoft was smart, all their APIs would be seamlessly integrated into Visual Studio, as well as allow developers to easily import any other API using OpenAPI, or Postman Collections.
While I think that IDEs are still relevant to the API development life cycle, I feel like maybe there is a reason IDEs haven't caught up in this area. It feels like a need that API lifecycle tooling like Postman, Restlet Client, and Stoplight are stepping up to serve. Regardless, I will keep an eye on it. It seems like a no-brainer for Microsoft to make their APIs available via their own IDE products, but maybe we are headed for a different future where a new breed of tools helps us more easily integrate APIs into our applications--no code necessary.
After I wrote a piece on guidance from the USGS around writing fault-resistant code when putting their API to use, my friend Darrel Miller expanded on this by suggesting I include "change resilience" as part of the definition.
@kinlane I would like to see that guidance expanded to include writing change resilient client code.— Darrel Miller (@darrel_miller) September 9, 2016
It is something that has sat in my notebook for a couple of weeks, and keeps floating up as a concept I'd like to explore further. I have some initial thoughts on what this means, but it is something I need to write about before I can grasp it better. Hopefully, it will bring out more suggestions about what change resilient code means to other people.
Ok, so off the top of my head, what elements would I consider when thinking about producing change resilient client code:
- Status Codes - Making sure clients read, and pay attention to HTTP status codes used by API providers.
- Hypermedia - Links are fragile, and avoiding baking them into clients makes a whole lotta sense.
- Plan B API - Have a backup API identified, that can be used when the plan A API provider goes away.
- Circuit Breaker - Build a circuit breaker into code that responds to specific status codes and events.
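The elements above can be sketched in code. This is a minimal, hypothetical illustration of the status code, circuit breaker, and plan B ideas combined--the threshold, status handling, and callable shape are all my own assumptions, not a production pattern:

```python
class CircuitBreaker:
    """Trip after `threshold` consecutive failures, then stop calling."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def record(self, ok):
        self.failures = 0 if ok else self.failures + 1

def resilient_get(path, primary, backup, breaker):
    """Call the primary API, honor status codes, fall back to plan B.

    `primary` and `backup` are callables taking a path and returning a
    (status_code, body) tuple -- stand-ins for real HTTP calls.
    """
    if not breaker.open:
        status, body = primary(path)
        breaker.record(status < 500)
        if status == 200:
            return body
        if status == 410:  # Gone -- the provider retired this resource
            breaker.failures = breaker.threshold  # stop hammering it
    # Plan B: a backup provider for the same resource
    status, body = backup(path)
    return body if status == 200 else None

breaker = CircuitBreaker(threshold=2)
flaky = lambda path: (500, None)    # plan A is down
stable = lambda path: (200, "plan b")
print(resilient_get("/notes", flaky, stable, breaker))  # plan b
```

After enough failures the breaker opens and the client stops hitting the broken provider entirely, rather than freaking out when change happens.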
Now that I'm exploring, I have to ask, whose responsibility is it to build change resilience into the clients? Provider or consumer? It seems like there is a healthy responsibility on both parties. IDK. I guess we should all just be honest about how fragile the API space is, and providers should be honest with consumers when it comes to thinking about change resiliency, but ultimately API consumers have to begin thinking more deeply and investing more when it comes to planning for change--not just freaking out when it happens.
I have to admit that the code I have written as part of my API monitoring system, which integrates with over 30 APIs, isn't very fault or change resistant. When things break, they break. As the only user, this isn't a showstopper for me, but thinking about change is something I'm going to be considering as I kick the tires on my client. While these APIs have been incredibly stable for me, I can't help but listen to Darrel and want to be asking more questions when it comes to dealing with change across my API integrations.
There are more HTTP client tools out there than I can shake a stick at (I've reached that point, I'm shaking sticks at things), and in 2016 I predict there will be even more entrants into the space. I'd say Postman was a pioneering force in the evolution of the HTTP client when it comes to the web API space, but it is beginning to collide with API design tooling from Apiary, as well as being morphed by new players like Stoplight.io.
Maybe I am playing with more of these environments than the average API consumer, because of what I do for a living, but I have to say, I am getting tired of "importing" my API definitions. Don't get me wrong. I am stoked that all the tools support the importing of machine readable API definitions like OADF and API Blueprint, but I cannot help always looking to what should be next, and I want to be able to just run each API in the HTTP client of my choice.
For all of my own APIs, I provide a Postman icon, and link to a Postman Collection. It gives you quick access to the machine readable currency that all the services I depend on speak: OADF, API Blueprint, and Postman Collections. However, I still have to import it into Postman, or whatever other HTTP API client or service I will be using. While this is a good start, and is something I recommend other API providers do, I think we can still do better.
If you are operating one of the HTTP API clients, or planning one of the next generation API clients, tools, garages, hubs, playgrounds, studios, workbenches, or builders, can you please provide a "run in XXXX" embeddable button? I would like to see pretty little icons throughout API portals, and the service providers we depend on across the API space, that empower me to run any API via the client I depend on every day.
Think the Twitter and Facebook share buttons, but for API integration, where the currency is OADF and API Blueprint definitions. I appreciate all you HTTP API client providers considering my crazy requests. If done right, I think it could result in some potential new users, depending on how you'd handle the process for users who clicked, but didn't actually have an account with your platform yet--anyway, food for thought. #onward
I am spending some time taking another look at my "client research", which started out as just being about Postman and PAW, but now contains ten separate services I'm bundling into this area of research. As with all my research areas, these project repos shift, evolve, split, and merge with time, as the API space changes, and my awareness of it grows.
I completely understand the term "client" doesn't provide an adequate label for this bucket of research, but for now, it will have to do. As I added a couple of new services to the bucket, and made my way through some of the existing ones I had, I wanted to step back and look at what they were offering, but more importantly the messaging that goes into quantifying what these companies are offering.
When it comes to what I call "lines along the API lifecycle", I saw these areas represented.
This is where the API client line potentially intersects with all of these other API life-cycle lines. However, when you start to analyze the features or building blocks offered by these service providers, you begin to see each stop along the API client line, which becomes pretty critical to other areas of the API lifecycle.
I know that what I am saying might not be completely clear, it isn't for me either. That is why I tell stories, to try and find the patterns, and learn how to articulate all the moving parts. I'm still trying to figure out what to call my research, alongside all of these API service providers working to define just exactly what it is they are selling as well.
The more time I spend with my API client research, the more all of this comes into focus. The problem is that these companies are rapidly adding new features, in response to what their customers need, which keeps me on my toes, as well as increases the overlap with other lines that I track along the API life-cycle.
I just wanted to take a moment, update my research, and take another look at the companies, and tooling at play.
Shortly after the Zypr voice API came on to the scene in 2011, I launched my research into voice APIs. Like many other areas of the API universe, voice has come in and out of focus for me, something I think will take much longer to unfold, than any of us could have ever imagined. Zypr quickly ran out of steam, and other similar solutions have come and gone over the last couple years as well, leaving my research pretty scattered across many different concepts of how voice and APIs are colliding--lacking any real coherency.
I took a moment last week to take a fresh look at my voice API research, because of a comment by Steven Willmott (@njyx), the CEO of 3Scale. It's not an exact quote, but Steve spoke about how voice is the future of API consumption, after he had attended AWS re:Invent in Las Vegas. I agree with him. Voice APIs are a topic that has been significantly stimulated by the introduction of the Amazon Echo platform, but I feel it also coincides with a critical mass of available API driven resources that will deliver some of the value these platforms are promising users.
Voice recognition has always been something that leaves a lot to be desired--think Siri. Even with these challenges there are many dimensions to the voice API discussion, and with the amount of resources now available via simple APIs in 2015, I feel we are reaching a more fertile, and friendly time for voice solutions to return the value end-users desire. We now have a rich playing field of weather, news, stocks, image, video, podcast, and other data, content, rich media, and programmatic resources, which can be linked to specific voice commands--something we didn't have before.
While there is still so much work to be done, I agree with Steve's vision that voice will play an increasingly significant role as an API client. I would add that like mobile, or the recent wave of wearables, voice will have special constraints when it comes to API design, further requiring API providers to keep their APIs simple, and reflect how users will experience them, not just being a SELECT * FROM table WHERE q = 'search' with a URL bound to it.
I think the API providers who are further along in their journey will get a boost as voice evolves as an API client, and voice enabled app developers are able to easily integrate valuable API driven resources into their solutions. Even with my newfound optimism about voice APIs, I still think we are years away from voice solutions actually living up to even a small portion of the hype they have gotten over the years. Regardless, I'll be working to keep a closer eye on things, and will be sharing via my voice API research.
I've been tagging companies that I come across in my research, and stories that I find with the term "orchestration" for some time now. Some of this overlaps with what we know as cloud-centric orchestration using Puppet or Chef, but I am specifically looking for how we orchestrate across the API lifecycle which I feel overlaps with cloud orchestration, but pushes into some new realms.
As I'm carving off my orchestration research, I am also spending time reviewing a newer breed of what I'm calling API hubs, workspaces, or garages. Over the last year, I've broken out IDE research from my overall API Discovery research, and SDK from my API Management research, and client from my API Integration research. In parallel with an API-centric way of life, I want all my research to be as modular as possible, allowing me to link it together into meaningful ways that help me better understand how the space works, or could work.
Now that I'm thinking in terms of orchestration, something that seems to be a core characteristic of these new API hubs, workspaces, or garages--I'm seeing a possibly new vision of the API life-cycle. I'm going to organize these new hubs, workspaces, and garages under my IDE research. I am starting to believe that these new workspaces are just the next generation IDE, meant to span the entire API life-cycle--we will see how this thought evolves.
This new approach to API IDEs gives us design and development capabilities, but also allows us to mock and deploy APIs. You can generate API documentation and SDKs, and I'm seeing hints of orchestration using Github and Docker. I'm seeing popular clients like Postman evolve to be more like an API life-cycle IDE, and I'm also seeing API design tooling like Restlet Studio invest in HTTP clients to expand beyond just design, adding live client interaction, testing, and other vital life-cycle elements.
None of my research is absolute. It is meant to help me make sense of the space, and give me a way to put news I curate, companies I discover, and open source tooling into meaningful buckets that might also help you define a meaningful version of your own API life-cycle. I apologize if this post is a little incoherent, but it is how I work through my thoughts around the API space, how things are expanding and evolving in real-time--something I hope will come into better focus in coming weeks.
Thinking Beyond Just Language Specific Clients and Also Speaking the Formats Popular HTTP Clients Are Using

28 Aug 2015
I was given an introduction to the Microsoft Graph, a concept being applied to Office 365 APIs, other Microsoft APIs, and potentially beyond, to map out segments of users and everyday objects. As I learn more about this unifying graph API effort, I will write more, but this particular story is about how we communicate around the first steps taken by developers when integrating with any API. As an API provider, how you talk about integration, and craft your on-boarding resources, can significantly impact how developers view your resources, something that I think will always need some work across the space.
After being introduced to the Microsoft Graph APIs, we were given a list of code resources that we could use to hack against the API. The API integration overview had all the modern elements of API integration, with C#, Java, PHP, Node.js, Ruby, and other "coming soon" libraries. The resource toolkit even had a sandbox account we could use, helping us on-board with less friction. While this approach is very progressive for the Microsoft world I've known, evolving us beyond the endless sea of C# focused WSDLs we have all seen historically, I would like to point out what I think should be the next step in our evolution.
It makes me happy that we now speak in multiple programming languages, and provide sandbox or simulation environments. +1 What I'd like to see next is that we also speak more HTTP than just language specific clients. I'd like to see these types of API on-boarding toolkits start providing a Postman Collection for the API, or even better, a Swagger or API Blueprint definition that allows me to on-board using the HTTP client of my choice, like Postman, PAW, or Insomnia REST. I agree that we should be speaking the native language of the developers we are courting, but I like to nudge things forward, and encourage speaking the more generic language of HTTP, for those of us who program in many different languages.
Just like being multi-lingual with APIs has moved us out of our web service silos, I'm hopeful that if more developers speak HTTP, it will help move us into a future where API developers are more HTTP literate, and are really leveraging the strengths of HTTP, or even better, HTTP/2, in their everyday worlds. I have started including Postman Collections, along with my Swagger definitions, for my APIs. I'm also working to include API Blueprint, and other API definition formats, something that will allow potential API consumers to onboard using my language specific libraries, or the HTTP client of their choice.
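The translation between these formats is mostly mechanical. Here is a hedged sketch of turning a minimal Swagger 2.0 definition into a rough Postman Collection shape--the collection structure shown is an approximation for illustration, not a full implementation of either specification:

```python
# Convert a minimal Swagger 2.0 definition into a rough Postman
# Collection shape -- enough to show the idea, not the full spec.
def swagger_to_collection(swagger):
    items = []
    for path, operations in sorted(swagger["paths"].items()):
        for method, op in sorted(operations.items()):
            items.append({
                "name": op.get("summary", method.upper() + " " + path),
                "request": {
                    "method": method.upper(),
                    "url": swagger["host"] + swagger.get("basePath", "") + path,
                },
            })
    return {"info": {"name": swagger["info"]["title"]}, "item": items}

swagger = {
    "info": {"title": "Notes API"},
    "host": "api.example.com",
    "basePath": "/v1",
    "paths": {"/notes": {"get": {"summary": "List notes"},
                         "post": {"summary": "Add a note"}}},
}
collection = swagger_to_collection(swagger)
print(len(collection["item"]))  # 2
```

If a provider publishes the Swagger definition, the Postman Collection, API Blueprint, and every other client format can be derived, so consumers get to pick their own on-boarding path.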
I am working hard to establish a complete set of APIs for my own API stack which includes establishing complete Swagger definitions for the 25 APIs that I personally operate. These Swagger definitions are then used to generate Postman Collections, APIMATIC SDKs, and API Science monitors. I am also working hard to establish complete Swagger definitions for the 1000 companies in my API Stack, something I am partnering with APIMATIC on.
As part of this work, both teams are working hard to evolve our tooling for working with, and validating API definitions. I mentioned a couple weeks back, when I shared client SDK research conducted by APIMATIC, that quality SDK generation is kind of the high water mark for measuring API definition completion--meaning if your API definition isn't complete enough to generate a functional SDK, you need to spend more time in your API design editor, until it is more complete.
As I've been working to establish my own definition of what a complete Swagger definition is, APIMATIC has been hard at work doing the same, but applying it to WADL, Swagger, RAML, API Blueprint, IODocs, and Google Discovery. At the same time as I'm building my questions API to help me automate the validation of Swagger definitions, APIMATIC has been working on their own validation API, which they just added to their existing API client code generation API.
If you are generating machine readable API definitions in WADL, Swagger, RAML, API Blueprint, IODocs, or Google Discovery, then you should be validating your API designs using APIMATIC. Once you know your API definitions are solid, then you should also generate all your SDKs with APIMATIC too--I am only about 20% the way through doing this for my API Stack.
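To give a feel for what "complete enough to generate a functional SDK" validation might check, here is a small sketch. The rules below are my own guesses at a starting point, not APIMATIC's actual criteria or API:

```python
# A hedged sketch of "complete enough to generate an SDK" checks --
# the rules here are illustrative, not APIMATIC's actual criteria.
def validate_swagger(swagger):
    problems = []
    for path, operations in swagger.get("paths", {}).items():
        for method, op in operations.items():
            where = method.upper() + " " + path
            if not op.get("operationId"):
                problems.append(where + ": missing operationId for SDK method names")
            if not op.get("responses"):
                problems.append(where + ": no responses defined")
            for param in op.get("parameters", []):
                if "type" not in param and "schema" not in param:
                    problems.append(where + ": untyped parameter " + param.get("name", "?"))
    return problems

incomplete = {"paths": {"/notes": {
    "get": {"responses": {"200": {"description": "ok"}}},
    "post": {"operationId": "addNote"},
}}}
problems = validate_swagger(incomplete)
print(len(problems))  # 2
```

Until a definition passes checks like these, back into the API design editor it goes.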
Nice work APIMATIC team. I predict by the end of the year we will have a full stack of APIs, and tooling, that helps us through almost every step of the API definition driven, API lifecycle.
P.S. They just added Node.js and Go SDKs to their stack this week!
P.P.S. APIMATIC is using Apiary.io.
The one thing I've learned in five years as the API Evangelist is that us technologists and developers don't always see the world like everyone else. We focus on the perfection of the technology, and our own desires for the future, and often miss the mark on what end-users actually need. This is one of the hallmark successes of APIs over SOA--by accident, APIs jumped out of the SOA petri dish (thanks Daniel Jacobson - @daniel_jacobson), and were used to solve everyday problems that end-users face, using the technology that is readily available (aka HTTP).
While I think us API folks have done a great job of delivering valuable resources to mobile applications, and a decent enough job of delivering the same resources to web applications--and I guess we are figuring out the whole device thing? maybe? maybe not?--one area where we have failed to serve a major aspect of the business world is delivering valuable API resources to the #2 client in the world--the spreadsheet.
We have done a decent job of providing resources to data stewards, helping them deploy APIs from spreadsheets using services like API Spark, but other than a handful of innovative implementations from companies like Octopart and Twilio, there are no solid API consumption resources that target the spreadsheet environment. Meaning there is no easy way for mainstream spreadsheet users to put common API driven resources to work for them within the spreadsheets they live in daily--that is, until today, with Blockspring launching their Google Spreadsheets Add-On.
Yeah I know, making APIs work in spreadsheets has been done for a while, via Google Spreadsheets and Excel Spreadsheets, but nobody has standardized it like Blockspring just did. So let’s take a quick look at the implementation. I went to the Google Chrome App Store, and downloaded the add-on.
Then, using a new spreadsheet, I clicked on add-ons > Blockspring, and logged into my account. After giving Blockspring access to the Google Spreadsheet via my Google Account OAuth, I was given an API console in the right hand sidebar of my spreadsheet interface. The API options I'm given aren't the usual geek buffet, they are everyday use scenarios that would attract the average spreadsheet user.
I selected the IMDB movie search, which, once chosen, gave me the option to populate my spreadsheet with results, providing me with API driven resources right in my worksheets. The best part is it is complete with one cell as a search term, allowing me to customize my IMDB search.
Using Blockspring, I’m given easy to use, API driven resources, that anyone can implement, like visualizing the recent news:
Or possibly evaluate stock volatility clustering, using stock market data APIs (cause you know we all do a lot of this):
Blockspring gives me over 1000 API driven functions that I can use in my Google Spreadsheet--kicking everyone's asses when it comes to potential API client delivery. While us technologists are arguing over whether or not we can automatically generate Swagger driven SDKs, and the importance of hypermedia APIs when deploying the next generation of clients, someone like Blockspring comes along and pipes APIs into the #2 client in the world--the spreadsheet. #winning
Now the game will be about getting the attention of Google Spreadsheet users, developing comparable Microsoft Excel tooling, and getting mainstream Excel users' attention as well. The rest of you will have to get the attention of Blockspring, and make sure your API resources have simple, meaningful endpoints that can be piped in as Blockspring Google Spreadsheet functions. Spreadsheet driven business units should not have to learn about APIs and go look for them at each individual API portal--API providers should find and educate business users about their resources, via one of the most ubiquitous tools in business.
Nice work Blockspring, in helping ensure the space moves beyond Excel as just a data source for API deployment, and focusing on it as an API client, delivering vital API resources to the business users who can potentially benefit the most, and are willing and able to pay for API access, in my opinion.
P.S. As soon as I finished this I remembered this story from last week's API.Report - Free Federal Energy and Economic Information Delivered Straight to Your Spreadsheet - not a standardized approach, but definitely an important implementation to showcase.
I've said it before, and I will say it again — Excel and spreadsheets will continue to be super critical for the growth of the API industry. There are an increasing number of solutions like APISpark for deploying and managing APIs using spreadsheets, something that will get easier over time, but so far I'm not seeing equal acknowledgment of the potential of Microsoft Excel as an API client.
The majority of the world's data is locked up in spreadsheets, and CSV files. Something I learned during my short time in Washington DC, is that the API community is going to have to court the legions of data stewards who spend their days in spreadsheets at the companies, and government agencies around the world, if we are going to be successful. The tooling for deploying APIs from spreadsheets has emerged, but we have a lot of work ahead to make them simpler and easier to use.
With the majority of the world's data locked up in spreadsheets, this also means many of the business decision makers have their heads in the spreadsheet on a daily basis, depending on the data, calculations, and visualizations that influence their daily decision-making. I'm seeing only light efforts around delivering API driven services in the spreadsheet, something that is going to have to grow significantly before the API industry can reach the scale we would like.
I know that the spreadsheet does not excite API providers, and API integrators, but they are a comfortable tool for many of the business ranks, and if we are going to get them to buy into API economy, and play nicely, we are going to have to accommodate their world. When thinking of spreadsheets and APIs, don't just think delivering content and data to APIs, but also how APIs can deliver vital content and data back to spreadsheets users—acknowledging the ubiquitous tool can provide huge benefits as an API client, as well as data source.
This post has been open for almost two weeks now in Evernote. It began as a simple story about the possibility for generating code samples and libraries using Swagger. The longer it stays open, the wider the definition becomes, so I have to post something, just to draw a line in the sand. I’m not talking about generating code that runs on the server, this post is all about everything on the API consumption side of things.
Shortly after Wordnik launched the machine readable API definition format Swagger, they launched a library for generating client side code samples in a variety of languages. This was something that was evolved upon by Apiary, with the launch of their API design platform and introduction of API Blueprint. Even with these advances, there were still many shortcomings, and debate continued around what you could actually auto-generate on the client-side using a machine readable API definition. I can't tell you how many random Tweets I get from people saying, "Oh is auto-generation of code cool again?" or "I thought you couldn't auto-generate client code or SDKs ;-)"
Amidst the debate about what is really possible, and the jokes about our SOA past, new players have emerged like Apimatic that are looking to raise the bar when it comes to the generation of not just simple code samples, libraries, or stubs, but sophisticated API SDKs. I am sure the jokes about automating client code will still occur, but there is no denying that the overall conversation is moving forward.
As I explore my own limitations of what is possible when generating client-side code with Swagger, I have also come across new players like Lucybot, who are moving the conversation forward with API cookbooks, and Single Page Apps (SPAs) generated from Swagger definitions. I’m not in denial that there is a lot of work ahead, but in the two weeks that I’ve been crafting this post, I’d say I have gotten a glimpse of what is next. When you bundle the latest movements in virtualization and containerization with using API definitions like Swagger and API Blueprint to auto-generate client side code, I feel like the current potential is unlimited, and things are just heating up.
When you start talking about generating server or client side code for APIs, using machine readable API definition formats like Swagger or API Blueprint, many technologists feel compelled to let you know that at some point you will hit a wall. There is only so far you can go when using your API definition as a guide for generating server-side or client-side code, but in my experience you can definitely save some significant time and energy by auto-generating code using Swagger definitions.
I just finished re-designing 15 APIs that support the core of API Evangelist, and to support the work I wrote four separate code generation tools:
- PHP Server - Generating a Slim PHP framework for my API, based upon a Swagger definition.
- PHP Client - Assemble a custom PHP client of my design, using a Swagger definition as a guide.
- JavaScript Client - Generate a simple JS client for the API, using a Swagger definition as a guide.
- MySQL Database - Generate a MySQL script based upon the data models available in a Swagger definition.
Using Swagger, I can get myself 90-100% of the way there for most of the common portions of the APIs I design. When writing a simple CRUD API for notes or links, I can auto-generate the PHP server, a JS client, and the underlying MySQL table structure, which in the end runs perfectly with no changes.
Once I need more custom functionality, and have more unique API calls to make, I have to get my hands dirty and begin manually working in the code. However, auto-generation of code sure gets me a long way down the road, saving me time doing the really mundane, heavy lifting of creating the skeleton code structures I need to get up and running with any new API.
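To make the MySQL piece of this more concrete, here is a rough sketch of the kind of generation I am describing, using a made-up Swagger-style model fragment and a deliberately simplified type mapping. My actual tooling is PHP, so treat this purely as an illustration of the idea, not my implementation:

```python
# Sketch: generate a MySQL CREATE TABLE script from the data models in a
# Swagger-style API definition. The spec fragment, the auto-added id
# column, and the type map are simplified assumptions for illustration.

TYPE_MAP = {"string": "VARCHAR(255)", "integer": "INT", "boolean": "TINYINT(1)"}

def models_to_sql(spec):
    """Turn each model in the definition into a CREATE TABLE statement."""
    statements = []
    for name, model in spec.get("models", {}).items():
        columns = ["  `id` INT AUTO_INCREMENT PRIMARY KEY"]
        for prop, details in model["properties"].items():
            sql_type = TYPE_MAP.get(details["type"], "TEXT")
            columns.append(f"  `{prop}` {sql_type}")
        statements.append(
            f"CREATE TABLE `{name.lower()}` (\n" + ",\n".join(columns) + "\n);"
        )
    return "\n\n".join(statements)

# A tiny, made-up model like my notes API might use
spec = {
    "models": {
        "Note": {"properties": {"title": {"type": "string"},
                                "body": {"type": "string"}}}
    }
}
print(models_to_sql(spec))
```

The same walk over the `models` section is what drives the server and client generation too; only the templates being filled in change.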
I’m also exploring using APIs.json, complete with Swagger references, and Docker image references to further bridge this gap. In my opinion, a Swagger definition for any API can act as a fingerprint for which interfaces a Docker image supports. I will write about this more in the future, as I produce better examples, but I'm finding that using APIs.json to bind a Swagger definition with one or many Docker images opens up a whole new view of how you can automate API deployment, management, and integration.
I’ve been tracking on the usage of spreadsheets in conjunction with APIs for several years now. Spreadsheets are everywhere, they are the number one data management tool in the world, and whether API developers like it or not, spreadsheets will continue to collide with the API space, as both API providers and consumers try to get things done using APIs.
APIs are all about getting access to the resources you need, and spreadsheets are being used by both API providers and consumers to accomplish these goals. It makes complete sense to me that business users would be looking for solutions via spreadsheets, as spreadsheets are one potential doorway to hacking for the average person, who can write macros, calculations, and other dynamic features right within the spreadsheet.
I know IT would like to think their central SQL, MySQL, Postgres, Oracle, and other databases are where the valuable data and content assets are stored at a company, but in reality the most valuable data resources are often stored in spreadsheets across an organization. When it comes time to deploy APIs, this is the first place you should look for your data sources, resulting in Microsoft Excel and Google Spreadsheet to API solutions like we’ve seen from API Spark.
I’m seeing spreadsheets used by companies to deploy APIs in some of the following ways:
- Microsoft Excel - Turning Microsoft Excel spreadsheets directly into APIs. Taking a spreadsheet and generating an API is the fastest way to go from a closed data resource to an API anyone can access, even without programming experience.
- Google Spreadsheet - Mounting public and private Google Spreadsheets is an increasingly popular way to publish smaller datasets as APIs. Since Google Spreadsheets is web-based, it becomes very easy to use the Google Spreadsheet API to access any spreadsheet in a Google account, then generate a web API interface that allows for reading or writing to a spreadsheet data source via a public or privately secured API.
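To illustrate the basic pattern behind both approaches, here is a quick sketch of turning spreadsheet rows, like those you might pull down via the Google Spreadsheet API or a CSV export, into JSON records that a simple read-only API could serve. The sample data and field names are made up:

```python
import csv
import io
import json

# Sketch: a spreadsheet export (first row is the header) becomes a list
# of JSON records, the shape a simple spreadsheet-to-API service serves.
csv_export = """city,population
Portland,609456
Eugene,156185
"""

def rows_to_records(csv_text):
    """Treat the first row as field names, and each following row as a record."""
    return list(csv.DictReader(io.StringIO(csv_text)))

records = rows_to_records(csv_export)
print(json.dumps(records, indent=2))
```

In a real deployment the CSV string would be replaced by a call to the spreadsheet provider, and the JSON would be returned from an HTTP endpoint, but the core translation is this simple.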
Beyond deploying APIs I’m seeing API providers provide some innovative ways for users to connect spreadsheets to their APIs:
- Spreadsheet as Client - Electronic parts search API Octopart has been providing a bill of materials (BOM) solution via Microsoft Excel, and now Google Spreadsheets for their customers--providing a distributed parts catalog in a spreadsheet, that is kept up to date via public API.
- Spreadsheet as Cache - I’ve talked with the U.S. Census and other data providers about how they provide Microsoft Excel and Google Spreadsheet caches of API driven data, allowing users to browse, search, and establish some sort of subset of data, then save it as a spreadsheet cache for offline use.
- Spreadsheet as Catch-All - Spreadsheets aren’t always just about data: Twilio stores SMS messages in them, and NPR uses them as a crowdsourcing engine, making spreadsheets a nice bucket for catching just about anything an API can input or output.
Moving out of the realm of what API providers can do for their API consumers with spreadsheets, and into the world of what API consumers can do for themselves, you start to see endless opportunities for API integration with spreadsheets using reciprocity providers:
- Zapier - There are five pages of recipes on the popular API reciprocity provider Zapier that allow you to work with Google Docs, and 57 pages that are dealing directly with Google Drive, providing a wealth of tools that non-developers (or developers) can use when connecting common APIs up to Google Spreadsheets.
I’ve seen enough movement in the area of Microsoft Excel and Google Spreadsheets being used with APIs to warrant closer monitoring. To support this I've started publishing most of my research to an API Evangelist spreadsheet research site, which will allow me to better track, curate, tag, and tell stories around spreadsheets and APIs.
As I do with my 60+ API research projects, I will update this site when I have time, publishing anything I've read or written, along with companies I think are doing interesting things with spreadsheets and APIs. I'm pretty convinced that spreadsheets will be another one of those bridge tools we use to connect where we are going with APIs with the reality of where the everyday person is, just trying to get their job done.
Disclosure: API Spark is an API Evangelist partner.
As I’m working to add yet another API example to my growing list of hypermedia APIs in the wild, I can't help but think about the long evolution of hypermedia, and how it will eventually become part of the mainstream API consciousness.
I first started following the often heated discussions around hypermedia a couple years ago as leading API technologists began discussing this significant evolution in API design. Hypermedia has numerous benefits and features, but one you often hear in discussions is that if we use hypermedia we can stop designing custom clients that consume APIs.
The logic is that if every API call comes bundled with URLs for discovering and navigating the resources that are made available via an API, clients can just use this as a blueprint for any app to API interactions. This became a fairly large argument between hypermedia architects and hypermedia haters, something that I think turned a lot of people off to the concept, forcing us to stick with many of the bad habits we already knew.
As I review these new hypermedia APIs, few of them are perfect by any hypermedia measurement, but they use the sensible portions of hypermedia discovery and navigation to deliver a better API experience for developers. I don't think API providers are doing it because of the perfect hypermedia vision we've heard articulated in the past, they are borrowing the pieces that make sense to them and that meet their goals.
Someday we may achieve a world where API clients aren't custom, with every application automatically knowing how to discover, interact, and navigate any API resource it will need. However I think in the current reality, we will see hypermedia adopted because it just makes sense as a next step for sensible API design, and this is how we should communicate it to curious API designers looking to understand exactly what this concept called hypermedia is.
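To show the sensible portion of hypermedia I keep coming back to, here is a small sketch of a client navigating an API purely by following the links each response carries, rather than hard-coding URLs. The `responses` dict stands in for a live HTTP API, and the link relations and paths are illustrative only:

```python
# Sketch: every response bundles links for discovery and navigation, so
# the client walks the API by following link relations like "next".
# The responses dict below is a stand-in for live HTTP calls.

responses = {
    "/orders": {
        "items": ["order-1", "order-2"],
        "links": {"self": "/orders", "next": "/orders?page=2"},
    },
    "/orders?page=2": {
        "items": ["order-3"],
        "links": {"self": "/orders?page=2"},
    },
}

def follow(start, rel="next"):
    """Collect items by following a link relation until it runs out."""
    items, url = [], start
    while url:
        resource = responses[url]
        items.extend(resource["items"])
        url = resource["links"].get(rel)
    return items

print(follow("/orders"))
```

Nothing in the client knows the pagination URLs ahead of time; the API's own responses drive the navigation, which is the piece of hypermedia providers are borrowing because it makes sense.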
I've added PHP client libraries for the Free Application for Federal Student Aid (FAFSA) API.
- Get all applications (GET)
- Get single application (GET)
- Add application (POST)
- Update application (PUT)
- Delete application (DELETE)
You can find all PHP libraries available under the master branch in this Github repository under /client/php.
Each is also listed below as a Github Gist.
FAFSA API - GET (All)- PHP
FAFSA API - POST - PHP
FAFSA API - GET (Single)- PHP
FAFSA API - DELETE - PHP
FAFSA API - PUT - PHP
One of the predictions that caught my eye was that "server mash-ups will increase but client mash-ups will decline"--he clarifies it with:
The increasing popularity of languages like Node.js, Erlang and Clojure will make implementing server-side mash-ups more efficient and easier to maintain than doing the same work within a client application; especially for the mobile platform. This will reduce the “chattiness” of client-side applications and increase the security and flexibility of server-side implementations. The result will be a perceived increase in responsiveness and a reduced use of battery power on mobile apps.
As with my earlier post, individual API deployments will get smaller and more numerous, I agree 100%. This is where I’m going with my post this week on virtual API stacks. With so many individual resources available on the web, in the coming years we’ll see increased “mashing up” or “virtualization” of new stacks that are meaningful to a particular app or group of developers.
An example of this in the wild is Singly, with their API aggregation or mashup, which is available as a service, but is also available as open source on Github as Hallway. Another recent example is potentially the OpenKit Gaming BaaS, which promised to be an open stack designed for game developers.
My prediction is we’ll see many more returns from server-side API mashups than we did during the client-side mashup gold rush days! Especially if providers open source these stacks, while also offering them as a service.
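To make the server-side mashup pattern concrete, here is a bare-bones sketch of a server aggregating multiple upstream resources into a single response, which is what cuts the client "chattiness" described above. The fetch functions and their payloads are made-up stand-ins for real HTTP calls to upstream APIs:

```python
# Sketch: the server makes several upstream API calls and merges the
# results, so the client gets one response instead of making many
# round-trips itself. The fetchers below are illustrative stand-ins.

def fetch_profile(user_id):
    """Stand-in for a call to an upstream profile API."""
    return {"id": user_id, "name": "Jane"}

def fetch_activity(user_id):
    """Stand-in for a call to an upstream activity API."""
    return {"events": 12}

def mashup(user_id):
    """Aggregate multiple upstream resources into a single payload."""
    return {
        "user": fetch_profile(user_id),
        "activity": fetch_activity(user_id),
    }

print(mashup(42))
```

A mobile app hitting this single endpoint makes one request where a client-side mashup would have made three, which is exactly the responsiveness and battery argument in the quoted prediction.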
We all have our own approaches to API design and development, many of which will never see the light of day. In the API space we hear a lot about API management and API success stories, but not much about the process of designing, developing and initial deployment of APIs. I just had a little taste of how the Wordnik team approaches it, using Swagger.
Often when you hear about Swagger in the industry, you hear about the UI portion. You know the sexy interactive documentation that is fast becoming a standard with APIs, but it’s just the tip of the iceberg--there is a whole lot more power to Swagger than just interactive docs.
“The heart of Swagger is the specification, and from that, cool shit can get done!”, says Tony Tam, Wordnik CEO and technical co-founder.
To demonstrate, Tony walked me through Wordnik’s approach to designing, developing, and deploying a new API-driven iPad app, using a team of three:
- One person driving an editor writing JSON files, which are the Swagger spec for the needed API
- All three discussed the operations and parameters while adding them to the JSON, re-running the Swagger spec validator after each meaningful change
- When they were happy with the specs, they loaded the JSON files into the UI through Apache installed on a local machine
- After they inspected each API and operation again, they wrote the models in the spec files, and reviewed again to make sure everything was good
- Then they ran the Swagger Codegen and generated a Scalatra (Scala) server from the spec files
- Then they ran the Swagger Codegen and generated an Objective-C client from the spec files
- The server developer went off and wired the server to the business logic
- The front-end guy went and wired the UI to the Objective-C library
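To give a feel for the validator step in this workflow, here is a rough sketch of the kind of sanity check such a step performs against a tiny Swagger-style spec fragment. This is illustrative only, not the actual Swagger spec validator, and the spec fragment, field names, and rules are made up:

```python
# Sketch: a minimal Swagger-style API declaration, and a toy check of
# the kind a spec validator runs after each edit. Field names here
# (apis, operations, httpMethod, nickname) are illustrative.

spec = {
    "apis": [
        {
            "path": "/words/{word}",
            "operations": [
                {"httpMethod": "GET", "nickname": "getWord",
                 "parameters": [{"name": "word", "paramType": "path"}]}
            ],
        }
    ]
}

def validate(spec):
    """Return a list of problems found in the spec (empty means valid)."""
    problems = []
    for api in spec.get("apis", []):
        for op in api.get("operations", []):
            for field in ("httpMethod", "nickname"):
                if field not in op:
                    problems.append(f"{api['path']}: missing {field}")
    return problems

print(validate(spec))
```

The point of running a check like this after every meaningful change is that the spec stays clean enough to feed straight into the UI and the Codegen steps that follow.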
The process took 2.5 hours in total, from API to interface--a technique they call interface-driven development, which focuses on modeling the perfect interface for the problem they are trying to solve using an API.
The Wordnik approach to API design and development using Swagger is interesting. For me, it demonstrates that a clean API spec should not be an afterthought, or just a means to generate interactive API documentation, or something you reach for when API discovery becomes an issue. Your entire design, development, and management process should center around a meaningful API spec, which will then allow you to deploy your API server, interactive documentation, and client code, while also providing API discovery.
I was just reading a post via Buzzfeed reporting that Twitter is going to remove third-party image services from its apps. “According to a person who was briefed on the company's plans,” the changes will come in the next updates to the Twitter client(s).
"They're trying to control those eyeballs on their apps, they're an ad-based company, they make money that way,” says Twitpic founder Noah Everett, according to Buzzfeed.
One thing is clear. Twitter is serious in its effort to take control over its ecosystem. It has a plan, and it's systematically rolling it out, taking control over each area it needs to maximize "promoted" revenue.
If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.