Reflections on the Closure of Yahoo Pipes

Last night I popped up a quick post relaying the announcement of the impending closure of Yahoo Pipes, recalling my first post on Yahoo Pipes, and rediscovering a manifesto I put together around the rallying cry We Ignore RSS at Our Peril.

When Yahoo Pipes first came out, the web was full of the spirit of Web 2.0 mashup goodness. At the time, the big web companies were opening up all manner of “open” web APIs – Amazon, Google, and perhaps more than any other, Yahoo – with Google and Yahoo in particular seeming to invest in developer evangelism events.

One of the reasons I became so evangelical about Yahoo Pipes, particularly in working with library communities, was that it enabled non-coders to engage in programming the web. And more than that: it allowed non-coders to use web based programming tools to build out additional functionality for the web.

Looking back, it seems to me now that the whole mashup thing arose from the idea of the web as a creative medium, one which the core developers (the coders) were keen to make accessible to a wider community. Folk wanted to share, and folk wanted other folk to build on their services in interoperation with other services. It was an optimistic time for the tinkerers among us.

The web companies produced APIs that did useful things and used simple, standard representations (RSS, and then Atom, as simple protocols for communicating lists of content items, for example; then, later, JSON as a friendlier, more lightweight alternative to scary XML, which also reduced the need for casual web tinkerers to try to make sense of XMLHttpRequests), and seemed happy enough to support interoperability.
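Those representations really were simple enough to sketch in a few lines. Here's a rough illustration – a toy feed of my own devising, not any real API's output – of how an RSS list of content items maps onto the friendlier JSON view, using nothing but the Python standard library:

```python
import json
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed: the simple "list of content items"
# representation the mashup era was built on.
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <item>
      <title>First post</title>
      <link>http://example.com/1</link>
    </item>
    <item>
      <title>Second post</title>
      <link>http://example.com/2</link>
    </item>
  </channel>
</rss>"""

# Pull each <item> out into a plain dict...
items = [
    {"title": item.findtext("title"), "link": item.findtext("link")}
    for item in ET.fromstring(rss).iter("item")
]

# ...and the JSON rendering is exactly the same list of items,
# just in the lighter-weight notation.
print(json.dumps(items, indent=2))
```

Same data either way – which was rather the point: tools like Pipes could wire lists like this together without anyone needing to care much about the angle brackets.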

When Yahoo Pipes came online (and, for a brief time, Microsoft’s Popfly mashup tool), the graphical drag-and-drop, wire-it-together, flow-based programming model allowed non-coders to start developing, publishing, sharing and building on top of each other’s real web applications. You could inspect the internals of other people’s pipes, clone those pipes so you could extend or modify them yourself, and put pipes inside pipes, fostering reuse and the notion of building stuff on top of, and out of, stuff you’ve learned how to do before.

And it all seemed so hopeful…

And then the web companies started locking things down a bit more. First my Amazon Pipes started to break, and then my Twitter Pipes, as authentication was introduced to access the feeds published by those companies. It started to seem as if those companies didn’t want their content flows rewired, reflowed and repurposed. And so Yahoo Pipes started to become less useful to me. And a little bit of the spirit of a web as a place where the web companies allowed whosoever, coders and non-coders alike, to build a better web using their stuff started to die.

And perhaps with it, the openness and engagement of the core web developers – the coders – started to close off a little too. True, there are repeated initiatives about learning to code, and whilst I’ve fallen into that camp myself over the last few years – especially the last two, having discovered IPython notebooks and the notion of coding one line at a time – I think we are complicit in closing off opportunities that help people build out the web using bits of the web.

Perhaps the web is too complicated now. Perhaps the vested interests are too vested. Perhaps the barrage of content, and the peck, peck, click, click, Like, addiction-feeding, pigeon-rat, behaviourist-conditioning, screen-based, crack-like business model, has blinded us to the idea that we can use the web to build our own useful tools.

(I also posted yesterday about a planning application map I helped my local hyperlocal – OnTheWight – publish. If the Isle of Wight Council published current applications as an RSS feed, it would have been trivial to use Yahoo Pipes to construct the map. It would have been a five minute hack. As it is, the process we used required building a scraper (in code) and hacking together some code to generate the map.)
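For illustration, here's roughly what that five minute hack might have looked like if such a feed existed. Everything here is hypothetical – the feed, its use of `georss:point`, and the field names are my invention, not anything the council actually publishes:

```python
import json
import xml.etree.ElementTree as ET

# An imagined planning-applications feed, geotagged with georss:point
# (lat lon). This is a sketch of what *could* have been published.
rss = """<?xml version="1.0"?>
<rss version="2.0" xmlns:georss="http://www.georss.org/georss">
  <channel>
    <item>
      <title>TCP/12345 - Single storey extension</title>
      <georss:point>50.693 -1.304</georss:point>
    </item>
  </channel>
</rss>"""

GEORSS_POINT = "{http://www.georss.org/georss}point"

features = []
for item in ET.fromstring(rss).iter("item"):
    lat, lon = map(float, item.findtext(GEORSS_POINT).split())
    features.append({
        "type": "Feature",
        # GeoJSON wants [longitude, latitude] order
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"title": item.findtext("title")},
    })

# A GeoJSON FeatureCollection, ready to drop onto a web map.
geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson))
```

That's the whole job: feed in, map-ready GeoJSON out. Instead, we had to spend the time writing and maintaining a scraper before any of this could even start.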

There are still tools out there that help you build stuff on the web, for the web. CartoDB makes map creation relatively straightforward, and things like Mozilla Popcorn allow you to build your own apps around content containers (I think? It’s been a long time since I looked at it).

Taking time out to reflect on this, it seems as if the web companies have become too inward looking. Rather than encouraging wider communities to engage in building out the web, the companies get to a size where their systems become ever more complex yet have to maintain their own coherence; a cell wall goes up to contain that activity, and authentication starts to be used to limit access further.

At the same time as the data flows become more controlled, the only way to access them comes through code. Non-coders are disenfranchised, and the lightweight, open protocols that non-coding programming tools work most effectively with become harder to justify.

When Pipes first appeared, it seemed as if the geeks were interested in building tools that increased opportunities to engage in programming the web, using the web.

And now we have Facebook. Tap, tap, peck, peck, click, click, Like. Ooh shiny… Tap, tap, peck, peck…

22 comments

  1. francesbell

    You capture my own feelings about the trends in web development, but so much more articulately than I could manage. I have been struggling recently to learn how to visualise data obtained (indirectly) from Twitter and Facebook, then fed into Gephi, to help with a research study. It occurs to me (as an ex-programmer many moons ago) that programs are always to some extent a black box that the programmer can look into (but of course there are more black boxes within the black box – compilers) – the user begs for more understanding. As a programmer, I remember how tedious in-program and user-focused documentation were to produce, but I always thought them important. In my recent work, it has occurred to me that the black-boxing Facebook enacts with its vagueness over algorithms is probably motivated more by profit than by the resource costs of documentation. Thanks Tony.

    • Tony Hirst

      @Frances There are several problems associated with visualising data:

      1) getting it;
      2) getting it into the thing you want to use to generate the visualisation.

      Getting the data into the visualisation tool often introduces gotchas – in the file format, in the way the data is presented in the file (spreadsheets with blank rows, headings all over the place, etc.; CSV files with file encodings, various delimiters and quotation styles, UTF grief, etc.). Getting the visualisation tool to produce the chart you want often requires reshaping the data somehow (in code, using R to ddply, melt or dcast the data, or equivalents in Python/pandas; in not-code? Does Tableau support that?)
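      By way of illustration, here's a minimal, made-up example of the kind of reshaping step I mean, using pandas' melt (the data is invented):

```python
import pandas as pd

# A "wide" table of the sort a spreadsheet export often produces:
# one row per area, one column per year.
wide = pd.DataFrame({
    "area": ["Newport", "Ryde"],
    "2013": [120, 95],
    "2014": [130, 90],
})

# melt() reshapes it to the "long" form most charting tools expect:
# one row per (area, year) observation.
long = wide.melt(id_vars="area", var_name="year", value_name="count")
print(long)
```

      Trivial once you know the incantation – but it's exactly the sort of step that trips up folk coming to a visualisation tool from a spreadsheet.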

      At the end of the day, GUI tools just produce code. And they produce code that is perhaps less able to do exactly what you want to the standard you require. But they’re a starting point – and the more tools there are at the level of programming-not-coding, the more likely folk are (?! or maybe not!) to produce stuff that works with those tools.

      • francesbell

        Mine was more a case of working with the data from extraction processes defined by someone else :) As the Twitter data was from Martin Hawksey’s TAGS Explorer, and he kindly provided a lot of information about how he did that and what the data meant, I was OK there, but the Facebook data was another can of worms. Even with the Twitter data, I have been very restrained in my choice of graphs/representations, limiting myself to those whose assumptions (I think) I understand. Anyway, I have decided in which simple way I will use the graphs (mainly as part of the research process) and will write up more fully once we get the damn paper finished. At that point, I may ask for yours and Martin’s insights if you feel so disposed.

  2. mhawksey

    Where to begin? The news of Pipes closing isn’t surprising – in fact it’s amazing it’s lasted so long. Authentication is the main barrier, and right now it appears there is no middle ground: you either need to code, or it all gets wrapped up into something shiny like IFTTT. For me, Pipes was the perfect entry point to better understand coding concepts, but more importantly to appreciate the underlying data that forms the web. When people interested in taking more ownership of the web, coding and hacking it to solve their problems, ask me where to start, all I’ve got is programming languages and APIs, which are very empowering but a more daunting start.

    I’m grateful I got to tinker with the web in that energised moment and grateful that I was able to witness you doing it.

    • Tony Hirst

      @martin We spent the afternoon in the back of the Wight yesterday, and walking into Brighstone through a kids’ playground noticed a child’s roundabout that was set into the ground – no gap between the roundabout and the ground to get your foot caught in. Playgrounds seem to be completely managed environments now, rather than places where you could muck around and try stuff out, (learn how to) take a few risks, and maybe break something….

      I think we both joined the web at a time when there were tools available that let us play with it as a medium, at the wiring layer rather than just the content production layer. And that stepping stone helped us get the skills and confidence to move on to the next step. As I think someone else mentioned, either here or in the Twitter stream, and as you alluded to in your comment: where’s the first step now?

  3. Theo

    I have a very different interpretation of the demise of Yahoo! Pipes, and of graphical programming in general. I have seen two dozen graphical programming approaches come and go over the past three decades, and the result is always the same: the paradigm lets anybody, novice and expert, build something trivial in no time flat, but building something non-trivial is an exercise in logic and domain expertise, and no graphical paradigm will help you with that. What is fascinating to me is that the graphical programming model still gets resurrected two or three times a decade. For example, all the BPM tools are now graphical drag-and-drop. Works great if you do something super-simple, but if you need a custom transformation because the data you receive is not 100% perfect, all the nice abstractions go out of the window. And that is the crux of the matter: in general, analytics is difficult to automate. You don’t know the error models of the streams you are consuming until you discover and characterize them, and then, more often than not, you need some fancy footwork to correct those errors. 90% of the time is spent on that, and that requires commitment, domain expertise, programming skills and, typically, money. Once you solve these problems, the world shifts around and you have to start all over.

    That reality does not map well on graphical drag-and-drop interfaces.

    • Tony Hirst

      @theo I’m not claiming the GUI tools do all that code lets you do. But that’s the point. At the OU, we used to have in-house software developers who would produce applications built specifically to support teaching and learning. (For example, my colleague Jon Rosewell developed an application called RobotLab that we used to teach simple robot programming using Lego Mindstorms – and we’ll be using it again at residential school later this year. It’s a GUI environment that lets you point-click-and-edit short text-based programmes that run in a simple 2D simulator or on an old Lego Mindstorms (RCX) robot.)

      As my coding skills have improved, and I’ve found friendlier environments to work in, I’ve started working solely in code. But as a consequence, I’ve lost the opportunity to help people onto the first rung of the ladder: now folk have to get to grips with programming ideas and coding syntax. And that doesn’t work well in a half-hour demo session.

      Another benefit of GUI tools is that you can use them to generate code that can act as a starting point for coding. So for example, I used an online graphical ggplot tool to get started learning how to write ggplot code, and I use the Nomis point-and-click interface to help me knock up Nomis API URLs I can then work around.

      There’s also a difference between good enough code and production code. None of my code is production quality, but chunks of it are good enough for what I need to do.

      • deepanalytics

        @Tony The goal of my reply was to add a piece of information synthesized by a dozen experiences with graphical programming languages: the point you make about the communication efficiency for the novice is perfect. I would add this is also the reason why these environments keep getting bought by enterprises: the managers with the budget are connecting to the paradigm without being able to extrapolate the productivity of the poor team that needs to solve business problems with it.

        Productivity is definitely the overriding quest for non-production and prototyping code. The management trick is to figure out a priori when code is just for a proof of concept, and when it will be integrated into a production system. As I have never seen a graphical system stand up to the rigors of production tooling, this might actually be a great feature in disguise. There are plenty of examples of DSLs that are very productive for prototyping but that get translated into different forms for production. MATLAB/Simulink and R are the examples that come to mind.

        • Tony Hirst

          @theo agreed… And also maybe worth adding that just because something is hacked together in code doesn’t mean that it is of production quality… I tinker with code all the time and keep trying to tell people that I wouldn’t trust it in a production environment. It works well enough for what I want it to do (some of the time!;-) and that’s about the extent of it. Which is why I talk about /sketches/ all the time. Everything I produce is a /sketch/…

  4. francesbell

    Thanks Theo – that gives me great insight. I have struggled for weeks on visualising some Facebook data, and it took a lot of trial and error with the data and the network graphs produced (I realise that’s not the same as the graphical programming interface you are referring to) before I began to have some vague idea of what the resulting graph actually meant. Fortunately for me, the graphs are mainly a support for exploring and understanding qualitative data, so I have some means of cross-checking different meanings; otherwise I wouldn’t feel confident to make any interpretation of the graph.

  5. Peter Murray

    Your second to last paragraph got me thinking about linked data:

    When Pipes first appeared, it seemed as if the geeks were interested in building tools that increased opportunities to engage in programming the web, using the web.

    Is there an analogy to be made about what will happen with semantic markup — RDFa, microformats, and Schema.org properties — on web pages? It seems like the use of schema.org properties is on the rise and people are starting to make good use of the data that is published there. In a decade, will we be wondering what happened to the promise of linked data in a dystopian world where such data is locked behind HTTP authentication headers?

    • Tony Hirst

      @Peter I think a big problem with Linked Data is the way it is evangelised: talk quickly descends into angle bracket nightmare hell… there is no easy on-ramp that helps make sense of it all for spreadsheet users, and reasoning is a really niche subject area compared to general skill levels when it comes to even writing effective web queries, for example…
      Each time I do a data talk, I try to make a bit more sense of the Linked Data thing to myself to try to find ways to pitch it to others, but often hear myself saying “triples” when I should really have found a more approachable way…

      • Peter Murray

        Very true. And if there was a platform like Yahoo Pipes that made it easy to query and mash up linked data from various web resources, it might get closer to being meaningful to end users. I know of tools like the Structured Data Linter that can make visible the data that is encoded on a web page (including, of course, this very blog post), but I’m not aware of tools that can actually do processing.

  6. ekoner

    Get out of my head, you! I see a rise in tools like Import.io, IFTTT, Zapier and the like as (not quite) filling the space of Yahoo Pipes.

    R.I.P web 2.0

  7. Pingback: Emergent Code Chronicles 1.17 - Rewiring our ways
  8. Pingback: Block chain: “the only workable, distributed key value store in existence” | bavatuesdays
  9. Pingback: The drawn out death of Yahoo! Pipes and the steady rise of IFTTT | Online Journalism Blog
  10. Pingback: This week in API land #11 | Restlet - We Know About APIs
  11. Pingback: Yahoo! Pipes 😭 | 91 Percent Crud
  12. Pingback: Emergent Code » 1.17 Rewiring our ways