Via Jim Groom (Ghost in a Shell) and Tim Owens (Beyond LAMP), I note Cloudron.io, a cPanel/Installatron-like application (as far as the user is concerned) for launching dockerised applications from a digital application shelf:
The experience is a bit like having a hosted version of Kitematic that lets you launch containers on that host, or a revamped version of Sandstorm (Personal Application Hosting, Dreams of a Docker AppStore, and an Incoming Sandstorm?).
The applications themselves look as if they’re defined from a git repo containing a Dockerfile plus some Cloudron config info (examples).
To a certain extent, this simplifies the rigmarole of launching containers on a remote host. If you use something like Docker Cloud, you need to go into Docker Cloud, launch a server, and then get the container running on that server (old example – Tutum became Docker Cloud; the process remains much the same).
The Docker Cloud route also allows you to launch either a single container or a stack of containers, which is to say, a set of linked containers run via Docker Compose. (For the use cases I’m interested in, we might call such configurations linked applications.)
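As a minimal sketch of what such a linked application stack might look like as a docker-compose file – the image names, port and password here are illustrative placeholders, not a recipe from Docker Cloud – consider a notebook front end linked to a database:

```yaml
# Hypothetical "linked application": a notebook front end linked to a database.
# Image names, port and password are illustrative placeholders.
app:
  image: jupyter/scipy-notebook
  ports:
    - "8888:8888"    # only the front end is exposed to the user
  links:
    - db             # the database is reachable from the app container as "db"
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: example
```

Running `docker-compose up` against a file like this brings the whole set of linked containers up (and `docker-compose down` tears them down) as a single unit.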
I can see how Jim is excited by the idea of Cloudron as a way of extending the hosting service opportunities offered by Reclaim Hosting: it opens up the possibility of allowing users to host applications defined via Dockerfiles, rather than just the applications configured for use via cPanel.
But this is still not exactly what I am interested in.
Cloudron (and cPanel) provide UIs that allow “mortals” to self-host web-based applications that they start once and use thereafter. For example, you can use Cloudron to self-host your own version of WordPress. Every so often you go to your (self-hosted) WordPress blog and write a blog post, but the rest of the time it just sits there, running, and serving blog post web pages to your loyal readers or passing web search traffic.
But what I am interested in are applications that I start when I want to use them, use them, then quit them (a start-use-quit model).
For example, consider something like Microsoft Word, as used to create or edit a text document. There are various ways of doing this:
- Using my desktop version of Word, I would probably: start the application, create the document, save the document, close the application.
- Using Office 365, a permanently running Word editor in the cloud, I would login to Office 365 via my browser, create the document, save it to my Office 365 online file area, and then close the browser tab (Office 365 is still running in the cloud).
But what if I wanted to have my own version of Word that I wanted to run in the cloud, much as I run my own copy of the Word application on my desktop?
If I was to run it permanently, as Office 365 runs permanently, as a self-hosted application like WordPress runs permanently, I would be paying for server costs permanently. I would also need to have some sort of authentication layer to stop other people using “my” version of Word online, and seeing my files stored there.
Instead, I want an environment that lets me start an application in the cloud, do whatever task I want in the application (create or edit a document), save the document, then close the application. I would only be hosting (in the sense of serving) the application as I used it, and then I would destroy it. Ideally, I would save the document I created somewhere persistent so that I could re-edit it using a newly started version of the editor at a later date.
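With plain Docker, the start-use-quit pattern described above might be sketched as something like the following (a sketch only: the image name, port and mount path are my assumptions, and this presumes Docker is already installed on the remote host):

```shell
# Start-use-quit, sketched with plain docker commands (names/paths illustrative).

# Persistent storage that outlives any one application container:
docker volume create mydocs

# Start: run the application, mounting the persistent volume into it.
docker run -d --name editor -p 8888:8888 \
    -v mydocs:/home/jovyan/work jupyter/scipy-notebook

# ...use the application via the browser, saving work into /home/jovyan/work...

# Quit: destroy the container; the mydocs volume (and its files) survives,
# ready to be mounted into a freshly started container next time.
docker rm -f editor
```

The point is that the server only carries the application container while it is actually in use; the named volume is the cheap, persistent bit that bridges one session and the next.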
In terms of resource usage, this is how I see the differences between the traditional self-hosted application, a personal desktop application, and what we might term a personal (hosted) application (which might also be a personal self-hosted application):
In addition, I would expect to have privileged (authenticated) access to my personal applications. Unlike WordPress or Ghost, which run permanently and serve pages to the public as well as providing authenticated access to one or more (invited) users allowing them to edit posts, I would want to deny access to the site to anyone but me.
This means that either the personal hosted application should be visible to the user from their dashboard, or via an authenticated URL (with some ports perhaps open to the public). Something like this maybe?
Also, the public page might actually be an app specific authentication page (for example, a Jupyter notebook login page).
Unlike permanently running self-hosted apps, the personal apps are temporary, and only run when the user wants to use them. The linked storage is, however, persistent.
The above architecture itself defines a generic self-hosted workbench environment, where the user can run applications on their workbench as personal applications as and when they need them (and hence only consume the resources required to run them while they are actually using them).
One possible way of gaining insight into why this is useful is to consider the following: a domain of one’s own gives you a presence you own (for some definition of “own”…) somewhere on the web; a server of one’s own lets you easily run your own services (which can often be a b*****d for a novice to install), which may include permanently running services that populate your domain. A personal application server of your own (or maybe a workbench of your own?) lets you easily run software applications for personal use that can be a b*****d to install if you have to build and install them from scratch yourself (as is the case with a lot of scientific software applications). In addition, the workbench of your own makes it easy to launch linked applications (e.g. a stats analysis application linked to a database server) using things like pre-prepared docker compose scripts.