For the two new first year computing and IT courses in production (due out October 2017), I’ve been given the newly created slacker role of “course visionary” (or something like that?!). My original hope for this was that I might be able to chip in some ideas about current trends and possibilities for developing our technology enhanced learning that would have some legs when the courses start in October 2017, and remain viable for the several years of course presentation, but I suspect the reality will be something different…
However it turns out, I thought that one of the things I’d use fragments of the time for would be to explore different possible warp threads through the courses. For example, one thread might be to take a “View Source” stance towards various technologies that would show students something of the anatomy of the computing related stuff that populates our daily lives. This is very much in the spirit of the Relevant Knowledge short courses we used to run, where one of the motivating ideas was to help learners make sense of the technological world around them. (Relevant Knowledge courses typically also tried to explore the social, political and economic context of the technology under consideration.)
So as a quick starter for ten, here are some of the things that could be explored in a tech anatomy strand.
The Anatomy of a URL
Learning to read a URL is a really handy skill to have, for several reasons. In the first place, it lets you hack the URL directly to find resources, rather than having to navigate or search the website through its HTML UI. In the second, it can make you a better web searcher: some understanding of URL structure allows you to make more effective use of advanced search limits (such as site:, inurl:, filetype:, and so on). Third, it can give you clues as to how the backend works, or what backend is in place: if you can recognise a WordPress installation as such, you can use knowledge about how its URLs are put together to interact with the installation more knowledgeably. For example, add ?feed=rss2&withoutcomments=1 to the end of a WordPress blog post URL (such as this one) and you’ll get a single item RSS version of the page content.
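By way of illustration, here’s a minimal sketch in Python (using the standard library’s urllib.parse, and a made-up WordPress-style URL) showing how a URL decomposes into named parts you can read off programmatically:

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical WordPress post URL, for illustration only.
url = "https://example.wordpress.com/2016/04/14/some-post/?feed=rss2&withoutcomments=1"

parts = urlparse(url)
print(parts.scheme)           # https
print(parts.netloc)           # example.wordpress.com
print(parts.path)             # /2016/04/14/some-post/  (WordPress date-based permalink)
print(parse_qs(parts.query))  # {'feed': ['rss2'], 'withoutcomments': ['1']}
```

Once you can see the pieces like this, “hacking the URL” is just editing the path or the query string and re-assembling.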
The Anatomy of a Web Page
The web was built on the principle of View Source, a principle still respected by today’s desktop web browsers at least, which lets you view the HTML, JavaScript and CSS source that makes a web page what it is. Browsers also tend to come replete with developer tools that let you explore how the page works in even more detail. For example, I frequently use the Chrome developer tools to look up particular elements in a web page when I’m building a scraper:
(If you click the mobile phone icon, you can see what the page looks like on a selected class of mobile device.)
I also often look at the resources that have been loaded into the page:
Again, additional tools allow you to set the bandwidth rate (so you can feel how the page loads on a slower network connection) as well as recording a series of screenshots that show what the page looks like at various stages of its loading.
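For a programmatic flavour of the same idea, here’s a quick sketch using Python’s standard html.parser module, run over a made-up HTML fragment, that pulls out the external resources a page asks the browser to load (the sort of list the developer tools’ network panel shows you):

```python
from html.parser import HTMLParser

class ResourceLister(HTMLParser):
    """Collect the external resources (scripts, stylesheets, images) a page pulls in."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and "href" in attrs:
            self.resources.append(attrs["href"])

# An invented page fragment, standing in for real View Source output.
html = """<html><head>
<link rel="stylesheet" href="style.css">
<script src="app.js"></script>
</head><body><img src="logo.png"></body></html>"""

lister = ResourceLister()
lister.feed(html)
print(lister.resources)  # ['style.css', 'app.js', 'logo.png']
```

Each of those filenames corresponds to a separate request the browser makes, which is exactly what the bandwidth-throttling view lets you feel the cost of.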
The Anatomy of a Tweet
As well as looking at how something like a tweet is rendered in a web page, it can also be instructive to see how a tweet is represented in machine terms by looking at what gets returned if you request the resource from the Twitter API. So for example, below is just part of what comes back when I ask the Twitter API for a single tweet:
You’ll see there’s quite a lot more information in there than just the tweet, including sender information.
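To give a flavour of that structure, here’s a heavily trimmed, hand-made sketch of a tweet in the style of the (v1.1) Twitter REST API’s JSON; the field names shown are real ones, but the values are invented and a genuine response carries many more fields:

```python
import json

# An illustrative, cut-down tweet: real API responses include dozens
# of further fields (entities, geo, language, and so on).
raw = """{
  "id_str": "123456789",
  "created_at": "Thu Apr 14 10:00:00 +0000 2016",
  "text": "Just a tweet",
  "retweet_count": 3,
  "user": {
    "screen_name": "example_user",
    "followers_count": 42,
    "location": "Isle of Wight"
  }
}"""

tweet = json.loads(raw)
print(tweet["text"])                    # Just a tweet
print(tweet["user"]["screen_name"])     # example_user
print(tweet["user"]["followers_count"]) # 42
```

Even in this cut-down form you can see the point: the “tweet” is as much a bundle of sender metadata as it is a message.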
The Anatomy of an Email Message
How does an email message get from the sender to the receiver? One thing you can do is to View Source on the header:
Again, part of the reason for looking at the actual email “data” is so you can see what your email client is revealing to you, and what it’s hiding…
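Python’s standard email module will happily parse a raw message for you. Here’s a sketch over a made-up header block (all the addresses and hostnames are invented) that walks the Received: lines to trace the route a message took, oldest hop first:

```python
from email import message_from_string

# An invented raw message. Real headers from your inbox will show one
# "Received:" line per mail server the message passed through.
raw = """\
Received: from mail.example.org (mail.example.org [192.0.2.1])
\tby mx.example.com; Thu, 14 Apr 2016 10:00:00 +0000
Received: from sender-laptop (unknown [198.51.100.7])
\tby mail.example.org; Thu, 14 Apr 2016 09:59:58 +0000
From: Alice <alice@example.org>
To: Bob <bob@example.com>
Subject: Hello

Message body here.
"""

msg = message_from_string(raw)

# Received headers are stacked newest-first, so reverse them to follow the route.
for hop in reversed(msg.get_all("Received")):
    print(hop.split(";")[0])

print(msg["Subject"])  # Hello
```

Note how much of this your mail client normally hides from you, which is rather the point.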
The Anatomy of a Powerpoint File
Filetypes like .xlsx (Microsoft Excel file), .docx (Microsoft Word file) and .pptx (Microsoft Powerpoint file) are actually compressed zip files. Change the suffix (e.g. .pptx to .zip) and you can unzip it:
Once you’re inside, you have access to the individual image files, or other media resources, included in the document, as well as the rest of the “source” material for the document.
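We can peek inside programmatically too. Here’s a sketch using Python’s standard zipfile module; rather than assume a real .pptx is to hand, it builds a toy zip in memory with the same sort of internal layout (XML parts plus a media folder of embedded images):

```python
import io
import zipfile

# Build a toy in-memory zip standing in for a .pptx; a real one has the
# same shape, just with many more parts.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("[Content_Types].xml", "<Types/>")
    zf.writestr("ppt/slides/slide1.xml", "<sld/>")
    zf.writestr("ppt/media/image1.png", b"...image bytes...")

# Renaming file.pptx to file.zip and unzipping is just this, done by hand:
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    for name in names:
        print(name)
```

The ppt/media/ folder is where the embedded images of a real presentation live, which is why the rename-and-unzip trick is such a quick way of extracting them.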
The Anatomy of an Image File
Image files are packed with metadata, as this peek inside a photo on John Naughton’s blog shows:
We can also poke around with the actual image data, filtering the image in a variety of ways, changing the compression rate, and so on. We can even edit the image data directly…
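To see what poking around with the actual image data can look like at the byte level, here’s a sketch that builds a minimal 1×1 PNG in memory (using only Python’s standard library) and then walks its chunks, including a tEXt metadata chunk of the sort that carries comments:

```python
import struct
import zlib

def png_chunks(data):
    """Yield (chunk_type, chunk_data) pairs from raw PNG bytes."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype.decode("ascii"), data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)

def chunk(ctype, payload):
    """Assemble one PNG chunk: length, type, payload, CRC."""
    return (struct.pack(">I", len(payload)) + ctype + payload +
            struct.pack(">I", zlib.crc32(ctype + payload)))

# Hand-assemble a valid 1x1 greyscale PNG with an embedded text comment.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit greyscale
idat = zlib.compress(b"\x00\x00")                     # filter byte + one pixel
text = b"Comment\x00made by hand"                     # a tEXt metadata chunk
png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr) +
       chunk(b"tEXt", text) + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

for ctype, data in png_chunks(png):
    print(ctype, len(data))
```

A real photo is the same structure writ large, with the metadata (camera model, GPS coordinates and so on) tucked into chunks alongside the pixel data.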
Summary
Showing people how to poke around inside a resource has several benefits: it gives you a strategy for exploring your curiosity about what makes a particular resource work (and perhaps also demonstrates that such things are worth being curious about); it shows you how to start looking inside a resource (how to go about dissecting it by doing the “View Source” thing); and it shows you how to start reading the entrails of the thing.
In so doing, it helps foster a sense of curiosity about how stuff works, as well as helping develop some of the skills that allow you to actually take things apart (and maybe put them back together again!). The detail also hooks you into the wider systemic considerations – why does a file need to record this or that field, for example, and how does the rest of the system make use of that information? (As MPs have recently been debating the Investigatory Powers Bill, I wonder how many of them have any clue about what sort of information can be gleaned from communications (meta)data, let alone what it looks like and how communications systems generate, collect and use it.)
PS Hmmm, thinks.. this could perhaps make sense as a series of OpenLearn posts?
I would be a total fan of this series- BTW thanks for the note about using Chrome inspector to get mobile views; I go into the inspector several times a day and I have never seen that! You could do many sections on how the inspector can be used.
There are the “look and see” type anatomy explorations and then the kind where you use that info to alter behavior or get at something the web page provider does not provide. I have a bunch of these I used with flickr. One is getting the flickr page from the file name of a static flickr URL/filename (if you want to find the source page of a flickr image someone is using on the web as an img src=”….”)
http://cogdogblog.com/2015/10/flickr-trickr/
And I do another trick because when I search on flickr images, sometimes I am doing it to find something adjacent in time- the little navigation icons you get from a search result let you paginate through other search results, whereas I want to locate that image in time, and find what is nearby in the timeline
http://cogdogblog.com/2014/09/from-javier-to-norbert/
Plus some hidden flickr search parameters
http://cogdogblog.com/2015/05/monkeying-around/
Also for poking around APIs, using consoles like
https://dev.twitter.com/rest/tools/console
or the bottom of every entry in the flickr API docs lets you experiment with calls
https://www.flickr.com/services/api/flickr.photos.search.html
Hi Alan
Thanks for those comments – I guess I should have linked to the Twitter console (I keep forgetting it’s there – I tend to use code rather than playgrounds to access Twitter API nowadays…)
Re: the flickr stuff – interesting – I like that time neighbourhood approach:-)
Re: trying to do a series – will ponder it and look for opportunities…; got a lot of other playing lined up atm though ;-)
Another one to play with – looking at DNS records, eg https://rud.is/b/2016/04/11/clandestine-dns-lookups-with-gdns/