The Obligatory Google Chrome Post – Sort Of…

Okay, so I’m a few days behind the rest of the web posting on this (though I tweeted it early;-), and I have to admit I still haven’t tried the Google Chrome browser out yet (it’s such a chore booting into Windows…), so here are some thoughts based on a reading of the comic book and a viewing of the launch announcement.

Why Chrome? And how might it play out? (I’m not suggesting things were planned this way…) Here’s where you get to see how dazed and confused I am, and how very wrong I can be about stuff ;-)

First up – Chrome is for mobile devices, right? It may not have been designed for that purpose, but the tab view looks pretty odd to me, going against O/S UI style guides for pretty much everything. Each tab in its own process makes sense for mobile devices, where multiple parallel applications may be running at any time, but only one is in view. Rumblings around the web suggest Chrome for Android is on its way in a future Android release…

Secondly, Google Chrome draws heavily on Google Gears. Google Gears provides the browser with its own database, so the browser can store lots of state locally. (Does Gears also provide a lite, local webserver?) Google Gears lets you use web apps offline, and store lots of state without making a call on the host computer’s o/s…

So I’m guessing that Chrome would work well as a virtual appliance…? That is, it’s something that can be popped into a Jumpbox appliance, for example, and run…. anywhere…like from a live CD or bootable USB key (a “live USB”)? That is, run it as a “live virtual appliance”. So you don’t need a host operating system, just a boot manager? Or if all your apps are in the cloud, you just need a machine that runs Chrome (maybe with Flash and Silverlight plugins too).

Chrome lets you create standalone “desktop web apps” in the form of “single application browsers” – a preloaded tab that “runs” Gmail or Google docs, for example, (or equally, I guess, Zoho web applications), just as if they were any other desktop application. The browser becomes a container for applications. If you can run the browser, you can run the app. If you can run the browser in a virtual appliance (or on a mobile device – UI issues aside), you can run the app…

Chrome makes use of open source components – the layout engine, Javascript engine, Gears and so on. Open source presumably makes anti-trust claims harder to put together if the browser starts to take market share; if other browser developers use the code, it legitimises it, as well as increasing the developer community.

On the usability side, the major thing that jumped out at me was that there’s a single search’n’address “omnibox” within each tab. Compare that to current browsers, where the address bar and search box are separate and above the line of selectable tabs.

It’s worth noting here that many people don’t really understand the address bar and the browser search box – they just get to Google any way they can and type stuff into the Google search box: keywords, URLs, brandnames, cut’n’pasted natural language text, anything and everything…

What the omnibox appears to do is to provide a blend of Google Suggest, browser history suggest/URL autocompletion, (and maybe ultimately Google personal browsing history?) and automagically acquired site specific opensearch helpers within a single user entry text box. (I love psychic/ESP searchboxes… I even experimented with using one on searchfeedr, I think?) I guess it also makes migration of the browser easier to a mobile device – each tab satisfies most of the common UI requirements of a single window browser?
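Just to spell out the sort of blending I mean – a toy sketch, not a claim about how Chrome’s omnibox actually works: take whatever the user has typed so far, and merge prefix matches from the local history with suggestions from a remote suggest service into a single ranked list. (The history entries and canned suggestions below are entirely made up.)

```python
# Toy illustration of blending suggestion sources behind a single entry box.
# Not how Chrome actually does it - just the general idea.

history = [
    "http://ouseful.info/",
    "http://www.google.com/googlebooks/chrome/",
    "http://ouseful.open.ac.uk/blog/",
]

def suggest_from_search(prefix):
    # stand-in for a call to a remote suggest service (e.g. Google Suggest)
    canned = {"ch": ["chrome browser", "chrome comic"], "ou": ["open university"]}
    return canned.get(prefix[:2], [])

def omnibox(prefix):
    history_hits = [url for url in history if prefix in url]
    search_hits = suggest_from_search(prefix)
    # naive blend: local history matches first, then search suggestions
    return history_hits + search_hits

print(omnibox("ch"))
```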

A couple of other things that struck me while pondering the above:
– what’s with the URL for the comicbook? http://www.google.com/googlebooks/chrome/ What else can we expect to appear on http://www.google.com/googlebooks/?
– has Google taken an interest in any of the virtual appliance players – Parallels, VMware, Jumpbox etc etc?

Link Love for Martin – “I Heart Twitter” Video

Just released, a follow up to Martin’s Edupunk response to my Changing Expectations video:

A Twitter Love Song (Weller)

Now available on Youtube…

PS (Weller)? Hmmm – any relation?!

PPS So it’ll be my turn again… hmm – you’ve been practising with Camtasia, haven’t you Martin? Maybe I’ll have to see what I can do with Jing and Jumpcut or Jaycut?

Orange Broadband ISP Hijacks Error Pages

An article in the FT last week (referenced here: British ISP Orange Shuns Phorm) described how ISP Orange have decided not to go with the behavioural advertising service Phorm, which profiles users’ internet activity in order to serve them with relevant ads.

But one thing I have noticed them doing over the last week is hijacking certain “domain not found” pages:

Orange broadband intercepting (some) page not found pages...

…which means that Orange must be watching my traffic for certain HTTP error codes, or hijacking the DNS lookup when a domain doesn’t resolve?

Now I wonder if anyone from Orange Customer services or the Orange Press Office would like to comment on whether this is reasonable or not, and/or how they are doing it?
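(A quick and dirty way of checking whether it’s the DNS route – just a sketch, not a claim about how Orange actually do it: if a domain that shouldn’t exist still resolves to an IP address, the ISP’s DNS servers must be substituting their own answer for the “no such domain” error.)

```python
# Quick check for DNS-level "domain not found" hijacking.
# If a deliberately nonsensical domain resolves to an IP address,
# the resolver is substituting its own answer for an NXDOMAIN error.
import socket
import uuid

# a domain that (almost certainly) doesn't exist
bogus = "%s.com" % uuid.uuid4().hex

try:
    ip = socket.gethostbyname(bogus)
    print("Resolved %s to %s - looks like NXDOMAIN hijacking" % (bogus, ip))
except socket.gaierror:
    print("Lookup failed as expected - no DNS hijacking for %s" % bogus)
```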

Just by the by, I found the Orange Customer Services web page interesting – not just the links to all the premium rate phone lines, more the font size;-) (click through for the full size image):

http://www.orange.co.uk/contact/internet/default.htm?&article=contactussplitterwanadoo – check out the font size

I’ve also noticed what appear to be a few geo-targeted ads coming at me through my browser, so wonder if Orange is revealing my approximate location data to online ad targeting services (I’ll try to remember to grab a screenshot next time I see one). The reason I suspect it’s Orange is because I ran a test using a cookie blocking browser…

PS note to self: try to find out how ad services like NebuAd, Tacoda and of course Phorm make use of user data, and see just how far their reach goes…

PPS Hmmm… so just like there is a “junk mail opt out”, “unaddressed mail opt out” and “junk phone call opt out” in the UK, it seems like there is a (cookie based….?!) initiative for opting out of online ad targeting from the Network Advertising Initiative. Does anyone know anything about this? Is it legitimate, or a gateway to yet more unwanted ads? I’d maybe trust it more if it was linked to from mydm, which I trust because it was linked to from the Royal Mail…

Innovation in Online Higher Education

In an article in the Guardian a couple of days ago – UK universities should take online lead, it was reported that “UK universities should push to become world leaders in online higher education”, with universities secretary, John Denham, “likely to call” for the development of a “global Open University in the UK”. (Can you imagine how well that call went down here?;-)

Anyway, the article gave me a heads-up about the imminent publication of a set of reports to feed into a Debate on the Future of Higher Education being run out of the Department for Innovation, Universities and Skills.

The reports cover

The “World leader in elearning” report, (properly titled “On-line Innovation in Higher Education“), by Professor Sir Ron Cooke is the only one I’ve had a chance to skim through so far, so here are some of the highlights from it for me…

HE and the research funding bodies should continue to support and promote a
world class ICT infrastructure and do more to encourage the innovative
exploitation of this infrastructure through … a new approach to virtual education based on a corpus of open learning content

Agreed – but just making more content available under an open license won’t necessarily mean that anyone will use this stuff… free content works when there’s an ecosystem around it capable of consuming that content, which means that confusion about rights, personal attitudes towards the reuse of third party material, and ways of delivering and consuming that material all need to be worked on.

The OERs “[need] to be supported by national centres of excellence to provide quality control, essential updating, skills training, and research and development in educational technology, e-pedagogy and educational psychology”.

“National Centres of Excellence”? Hmmm… I’d rather that networked communities had a chance of taking this role on. Another centre of excellence is another place to not read the reports from… Distributed (or Disaggregated) Centres of Excellence I could maybe live with… The distributed/disaggregated model is where the quality – and resilience – comes in. The noise the distributed centre would have to cope with because it is distributed, and because its “nodes” are subject to different local constraints, means that the good will out. Another centralised enclave (black hole, money sink, dev/null) is just another silo…

“[R]evitalised investment into e-infrastructures” – JISC wants more money…

[D]evelopment of institutional information strategies: HEIs should be encouraged and supported to develop integrated information strategies against their individual missions, which should include a more visionary and innovative use of ICT in management and administration

I think there’s a lot of valuable data locked up in HEIs, and not just research data; data about achievement, intent and successful learning pathways, for example. Google has just announced a service where it can track flu trends, which is “just the first launch in what we hope will be several public service applications of Google Trends in the future”. Google extracts value from search data and delivers services built on mining that data. So in a related vein, I’ve been thinking for a bit now about how HEIs should be helping alumni extract ongoing value from their relationship with their university, rather than just giving them 3 years of content, then tapping them every so often with a request to “donate us a fiver, guv?” or “remember us? We made you who you are… So don’t forget us in your will”. (I once had a chat with some university fundraisers who try to pull in bequests… vultures, all of ’em ;-)

“It is however essential that central expenditure on ICT infrastructure (both at the national level through JISC and within institutions in the form of ICT services and libraries) are maintained.” – JISC needs more cash. etc etc. I won’t mention any more of these – needless to say, similar statements appear every page or two… ;-)

“The education and research sectors are not short of strategies but a visionary thrust across the UK is lacking” – that’s because people like to do their own thing, in their own place, in their own way. And retain “ownership” of their ideas. And they aren’t lazy enough…;-) I’d like to see people trying to mash-up and lash-up the projects that are already out there…

the library as an institutional strategic player is often overlooked because the changes and new capabilities in library services over the past 15 years are not sufficiently recognised

Academic Teaching Library 2.0 = Teaching University 2.0 – discuss… The librarians need to get over their hang-ups about information (the networked, free text search environment is different – get over it, move on, and make the most of it…;-) and the academics need to get their heads round the fact that the content that was hard to access even 20 years ago is now googleable; academics are no longer the only gateways to esoteric academic content – get over it, move on, and make the most of it…;-)

Growth in UK HE can come from professional development, adult learning etc. but might be critically dependent on providing attractive educational offerings to this international market.

A different model would be to encourage some HEIs to make virtual education offerings aimed at the largely untapped market of national and overseas students who cannot find (or do not feel comfortable finding) places in traditional universities. This approach can exploit open educational resources but it would be naïve to expect all HEIs to contribute open education resources if only a few exploit the potential offered. All HEIs should be enabled to provide virtual education but a few exemplar universities should be encouraged (the OU is an obvious candidate).

Because growth in business is good, right? (err….) and HE is a business, right? (err….) And is that a recommendation that the OU become a global online education provider?

A step change is required. To exploit ICT it follows that UK HEIs must be flexible, innovative and imaginative.

Flexible… innovative… imaginative…

ICT has greatly increased and simplified access by students to learning materials on the Internet. Where, as is nearly universal in HE, this is coupled with a Virtual Learning Environment to manage the learning process and to provide access to quality materials there has been significant advances in distance and flexible learning.

But there is reason to believe this ready access to content is not matched by training in the traditional skills of finding and using information and in “learning how to learn” in a technology, information and network-rich world. This is reducing the level of scholarship (e.g. the increase in plagiarism, and lack of critical judgement in assessing the quality of online material). The Google and Facebook generation are at ease with the Internet and the world wide web, but they do not use it well: they search shallowly and are easily content with their “finds”. It is also the case that many staff are not well skilled in using the Internet, are pushed beyond their comfort zones and do not fully exploit the potential of Virtual Learning Environments; and they are often not able to impart new skills to students.

The use of Web 2.0 technologies is greatly improving the student learning experience and many HEIs are enhancing their teaching practices as a result. A large majority of young people use online tools and environments to support social interaction and their own learning represents an important context for thinking about new models of delivery.

It’s all very well talking about networked learners, but how does the traditional teacher and mode of delivery and assessment fit into that world? I’m starting to think the educator role might well be fulfilled by the educator as “go to person” for a topic, but what we’re trying to achieve with assessment still confuses the hell out of me…

Open learning content has already proved popular…

A greater focus is needed on understanding how such content can be effectively used. Necessary academic skills and the associated online tutoring and support skills need to be fostered in exploiting open learning content to add value to the higher education experience. It is taken for granted in the research process that one builds on the work of others; the same culture can usefully be encouraged in creating learning materials.

Maybe if the materials were co-created, they would be more use? We’re already starting to see people reusing slides from presentations that people they know and converse with (either actively, by chatting, or passively, by ‘just’ following) have posted to Slideshare. It’d be interesting to know just how the rate of content reuse on Slideshare compares with the rate of reuse in the many learning object repositories? Or how image reuse from flickr compares with reuse from learning object repositories? Or how video reuse from Youtube compares with reuse from learning object repositories? Or how resource reuse from tweeting a link or sharing a bookmark compares with reuse from learning object repositories?

…”further research”… yawn… (and b******s;-) More playing with, certainly ;-) Question: do you need a “research question” if you or your students have an itch you can scratch…? We need a more playful attitude, not more research… What was that catchphrase again? “Flexible… innovative… imaginative…”

A comprehensive national resource of freely available open learning content should be established to provide an “infrastructure” for broadly based virtual education provision across the community. This needs to be curated and organised, based on common standards, to ensure coherence, comprehensive coverage and high quality.

Yay – another repository… lots of standards… maybe a bit of SOAP? Sigh…

There is also growing pressure for student data transfer between institutions across the whole educational system, requiring compliance with data specifications and the need for interoperable business systems.

HEIs should consider how to exploit strategically the world class ICT infrastructure they enjoy, particularly by taking an holistic approach to information management and considering how to use ICT more effectively in the management of their institution and in outreach and employer engagement activities.

There’s huge amount of work that needs doing there, and there may even be some interesting business opportunities. But I’m not allowed to talk about that…

ICT is also an important component in an institution’s outreach and business and community engagement activities. This is not appreciated by many HEIs. Small and medium enterprise (SME) managers need good ICT resources to help them deliver their learning needs. Online resources and e-learning are massively beneficial to work based learning. Too little is being done to exploit ICT in HE in this area although progress is being made.

I’ve started trying to argue – based on some of the traffic coming into my email inbox – that OUseful.info actually serves a useful purpose in IT skills development in the “IT consultancy” sector. OUseful.info is a bit of a hard read at times, and I’m not necessarily trying to show SMEs how to solve their problems – this blog is my notebook, right? – though I do try to reach the people who go into SMEs, and hopefully give them a few ideas that they can make (re)use of in particular business contexts.

Okay – that was a bit longer and a bit more rambling than I’d anticipated… if you want to read the report, it’s at On-line Innovation in Higher Education. There’s also a discussion blog available at The future of Higher Education: On-Line Higher Education Learning.

Just by the by, here are a couple more reports I haven’t linked to before on related matters:

It’s just a shame there’s no time to read any of this stuff ;-) Far easier to participate in the debate in a conversational way, either by commenting on, or tracking back to, The future of Higher Education: On-Line Higher Education Learning.

PS here’s another report, just in… Macarthur Study: “Living and Learning with New Media: Summary of Findings from the Digital Youth Project”

Speedmash and Mashalong

Last week I attended the very enjoyable Mashed Library event, which was pulled together by Owen Stephens (review here).

My own contribution was in part a follow on to the APIs session I attended at CETIS08 – a quick demo of how to use Yahoo Pipes and Google spreadsheets as rapid mashing tools. I had intended to script what I was going to do quite carefully, but an extended dinner at Sagar (which I can heartily recommend:-) put paid to that, and the “script” I did put together just got left by the wayside…

However, I’ve started thinking that a proper demo session, lasting one to two hours, with 2-4 hrs playtime to follow, might be a Good Thing to do… (The timings would make for either a half day or full day session, with breaks etc.)

So just to scribble down a few thoughts and neologisms that cropped up last week, here’s what such an event might involve, drawing on cookery programmes to help guide the format:

Owen’s observation that the flavour of the Mashed Library hackathon was heavily influenced by the “presentations” was well made; that’s maybe why it makes sense to build a programme around pushing a certain small set of tools and APIs, effectively offering “micro-training” in them to start with, and then exploring their potential use in the hands-on sessions? It might also mean we could get the tools’n’API providers to offer a bit of sponsorship, e.g. in terms of covering the catering costs?

So, whaddya think? Worth a try in the New Year? If you think it might work, prove your commitment by coming up with a T-shirt design for the event, were it to take place ;-)

PS hmm, all these cookery references remind me of the How Do I Cook? custom search engine. Have you tried searching it yet?

PPS I guess I should also point out the JISC Developer Happiness Days event that is booked for early next year. Have you signed up yet?;-)

Interactive Photos from Obama’s Inauguration

Now the dust has settled from last week’s US Presidential inauguration, I thought I’d have a look around for interactive photo exhibits that recorded the event. (I’ll maintain a list here if and when I find anything else to add.)

So here’s what I found…

Time Lapse photo (Washington Post)

Satellite Image of the National Mall (Washington Post)

A half-metre resolution satellite image over Washington taken around the time of the inauguration.

You can also see this GeoEye image in Google Earth.

Gigapixel Photo of the Inauguration (David Bergman)

Read more about how this photo was taken here: How I Made a 1,474-Megapixel Photo During President Obama’s Inaugural Address.

Interactive Panorama From the Crowds (New York Times)

PhotoSynth collage (CNN)

I suppose the next thing to consider is this: what sort of mashup is possible using these different sources?!;-)

[PS If you find any more interactive photo exhibits with a similar grandeur of scale, please add a link in a comment to this post:-)]

Subscriptions Not Courses? Idling Around Lifelong Learning

As yet more tales of woe appear around failing business models (it’s not just the newspapers that are struggling: it appears Youtube is onto a loser too…), I thought I’d take a coffee break out of course writing to jot down a cynical thought or two about lifelong learning

…because along with the widening participation agenda and blah, blah, blah, blah, lifelong learning is one of those things that we hear about every so often as being key to something or other.

Now I’d probably consider myself to be a lifelong learner: it’s been a good few years since I took a formal course, and yet every day I try to learn how to do something I didn’t know how to do when I got up that morning; and I try to be open to new academic ideas too (sometimes…;-)

But who, I wonder, is supposed to be delivering this lifelong learning stuff, whatever it is? Because there’s money to be made, surely? If lifelong learning is something that people are going to buy into, then who owns, or is trying to grab that business? Just imagine it: having acquired a punter, you may have them for thirty, forty, fifty years?

I guess one class of major providers is the professional institutions? You give them an annual fee, and by osmosis you keep your credentials current (you can trust me, I’m a chartered widget fixer, etc.).

So here’s my plan: offering students an undergrad first degree is the loss leader. Just like the banks (used to?) give away loads of freebies to students in freshers week, knowing that if they took out an account they’d both run up a short term profitable debt, and then start to bring in a decent salary (allegedly), in all likelihood staying with the same bank for a lifetime, so too could the universities see 3 years of undergrad degree as the recruitment period for a beautiful lifetime relationship.

Because alumni aren’t just a source of funds once a year and when the bequest is due…

Instead, imagine this: when you start your degree, you sign up to the 100 quid a year subscription plan (maybe with subscription waiver while you’re still an undergrad). When you leave, you get a monthly degree top-up. Nothing too onerous, just a current awareness news bundle made up from content related to the undergrad courses you took. This content could be produced as a side effect of keeping currently taught courses current: as a lecturer updates their notes from one year to the next, the new slide becomes the basis for the top-up news item. Or you just tag each course, and then pass on a news story or two discovered using that tag (Martin, you wanted a use for the Guardian API?!;-)
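Just to make the idea concrete, here’s a minimal sketch of what a tag-driven top-up bundle might look like. The feed URL and course tags are purely illustrative – swap in the Guardian API, a news search feed, or whatever tag-based feed you like:

```python
# Sketch: build a "degree top-up" news bundle from course tags.
# The tag-to-feed mapping is illustrative only - any tag-based
# RSS/Atom feed (Guardian API, news search, etc.) would do.
import feedparser

course_tags = ["artificial intelligence", "robotics"]  # tags for courses the alumnus took

def topup_bundle(tags, max_items=2):
    """Return a couple of recent items per course tag."""
    bundle = []
    for tag in tags:
        # hypothetical tag-based feed URL - substitute a real API here
        feed_url = "http://news.example.com/rss?tag=%s" % tag.replace(" ", "+")
        feed = feedparser.parse(feed_url)
        for entry in feed.entries[:max_items]:
            bundle.append((tag, entry.title, entry.link))
    return bundle

for tag, title, link in topup_bundle(course_tags):
    print("[%s] %s\n  %s" % (tag, title, link))
```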

Having the subscription in place means you get 100 quid a year per alumnus, without having to do too much at all… and as I suspect we all know, and maybe most of us bear testament to, once the direct debit is in place, there can be quite a lot of inertia involved in stopping it…

But there’s more – because you also have an agreement with the alumni to send them stuff once a month (and as a result maybe keep the alumni contacts database up to date a little more reliably?). Like the top-up content that is keeping their degree current (err….? yeah, right…)…

…and adverts… adverts for proper top-up/CPD courses, maybe, that they can pay to take…

…or maybe they can get these CPD courses for “free” with the 1000 quid a year, all you can learn from, top-up your degree content plan (access to subscription content and library services extra…)

Or how about premium “perpetual degree” plans, that get you a monthly degree top-up and the right to attend one workshop a year “for free” (with extra workshops available at cost, plus overheads;-)

It never ceases to amaze me that we don’t see degrees as the start of a continual process of professional education. Instead, we produce clumpy, clunky courses that are almost necessarily studied out of context (in part because they require you to take out 100 hours, or 150 hours, or 300 hours of study). Rather than give everyone drip feed CPD for an hour or two a week, or ten to twenty minutes a day, daily feed learning style, we try to flog them courses at Masters level, maybe, several years after they graduate…

In my vision of the world, we’d dump the courses, and all subscribe to an appropriate daily learning feed… ;-)

Maybe…

END: coffee break…

PS see also: New flexible degrees the key to growth in higher education:

The future higher education system will need to ensure greater diversity of methods of study, as well as of qualifications. Long-term trends suggest that part-time study will continue to rise, and it’s difficult to see how we can increase the supply of graduates as we must without an increase in part-time study.

“But we will surely need to move decisively away from the assumption that a part-time degree is a full time degree done in bits. I don’t have any doubt that the degree will remain the core outcome. But the trend to more flexible ways of learning will bring irresistible pressure for the development of credits which carry value in their own right, for the acceptance of credits by other institutions, and for the ability to complete a degree through study at more than one institution.”

… or so says John Denham…

Mulling Over What to Do Next on the F1 Race Day Strategist

It’s F1 race weekend again, so I’m back pondering what to do next on my F1 Race Day Strategist spreadsheets. Coming across an article on BBC F1’s fuel-adjusted Monaco GP grid, I guess one thing I could do is look to try and model the fuel adjusted grid for each race. That post also identifies the speed penalty per kg (“each kilo of fuel slows it down by about 0.025 seconds”), so I need to factor that in too, somehow, into a laptime predictor spreadsheet, maybe?

Note that I didn’t really see many patterns in lap time changes when I tried to plot them previously (A Few More Tweaks to the Pit Stop Strategist Spreadsheet) so maybe the time gained by losing weight is offset by decreasing tyre performance?

One thing the spreadsheet has (badly) assumed to date is a fuel density of 1 kg/l. Checking the F1 2009 technical specification, the actual density can range between 0.72 and 0.775 kg/l (regulation 19.3), so relating fuel timings (l/s), lap distances/fuel efficiencies (km/l), and car starting weight (kg) means that the density measure needs taking into account.
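Pulling those two numbers together, here’s a back-of-the-envelope sketch of the sort of calculation the fuel-adjusted grid spreadsheet would need – the consumption figures and base lap time below are made-up placeholders:

```python
# Back-of-the-envelope fuel-adjusted lap time calculation.
# The fuel load and base lap time are illustrative placeholders.

PENALTY_S_PER_KG = 0.025       # ~0.025s per kg of fuel (BBC F1 figure)
FUEL_DENSITY_KG_PER_L = 0.74   # must lie between 0.72 and 0.775 kg/l (reg 19.3)

def fuel_adjusted_laptime(base_laptime_s, fuel_litres):
    """Adjust a lap time for the weight of fuel on board."""
    fuel_kg = fuel_litres * FUEL_DENSITY_KG_PER_L
    return base_laptime_s + fuel_kg * PENALTY_S_PER_KG

# e.g. a car qualifying on a 78s lap with 40 litres of fuel on board
print(fuel_adjusted_laptime(78.0, 40))   # ~78.74s
```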

Unfortunately, I factored density into some of the formulae but not others, so the spreadsheets could take some picking apart to get density handled consistently across the different calculations. Hmm, maybe I should start a new spreadsheet from scratch to work out fuel adjusted grid positions, and then use the basic elements from that spreadsheet as the base elements for the other spreadsheets?

Something else that I need to start considering, particularly given that there won’t be any race day refuelling next year, is tyre performance (note to self: track temperature is important here). A quick scout around didn’t turn up any useful charts (I was using words like “model”, “tyre”, “performance”, “timing” and “envelope”) but what I think I want is a simple, first approximation model of tyres that models time “penalties” and “bonuses” about an arbitrary point, over number of laps, and as a function of track temperature.

For the spreadsheet, I’m thinking something like an “attack decay” or attack-decay-sustain-release (ADSR) envelope (something I came across originally in the context of sound synthesis many years ago…)

On the x-axis, I’m guessing I want laps, on the y-axis, a modifier to lap time (in seconds) relative to some nominal ideal lap time. The model should describe the number of laps it takes for the tyres to come on (a decreasing modifier to the point at which the tyres are working optimally), followed by an increasing penalty modifier as they go off.
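Something along these lines, maybe – a toy warm-up/degradation envelope where all the parameter values are pure guesses, just to give the curve the right general shape (track temperature would eventually come in as another parameter):

```python
# Toy tyre performance envelope: a lap time modifier (seconds) vs lap number.
# Warm-up phase: the penalty falls as the tyres come up to temperature;
# degradation phase: the penalty grows again as the tyres go off.
# All parameter values are illustrative guesses, not real tyre data.

def tyre_laptime_modifier(lap, warmup_laps=3, warmup_penalty=1.0,
                          degradation_per_lap=0.05):
    if lap <= warmup_laps:
        # linear ramp down from the cold-tyre penalty to zero
        return warmup_penalty * (warmup_laps - lap) / warmup_laps
    # after the tyres come on, they slowly go off again
    return (lap - warmup_laps) * degradation_per_lap

for lap in range(1, 21):
    print(lap, round(tyre_laptime_modifier(lap), 2))
```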

Ho hum, quali over, so I’ve run out of time to actually do anything about any of this now… maybe tomorrow…?

Quick Viz – Australian Grand Prix

I seem to have no free time to do anything this week, or next, or the week after, so this is just another placeholder – a couple of quick views over some of Hamilton’s telemetry from the Australian Grand Prix.

First, a quick Google Earth view to check the geodata looks okay – the number labels on the pins show the gear the car is in:

Next, a quick look over the telemetry video (hopefully I’ll have managed to animate this properly for the next race…)

And finally, a Google map showing the locations where pBrakeF (brake pedal force?) is greater than 10%.
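For what it’s worth, the filtering step behind that map is trivial – something like the following, assuming the telemetry has been dumped to a CSV file; the filename and column names are my assumptions about how the data is laid out, not gospel:

```python
# Filter telemetry samples where the brake pedal force exceeds 10%,
# keeping just the coordinates for plotting on a map.
# Filename and column names are assumptions about the telemetry dump.
import csv

braking_points = []
with open("hamilton_aus_telemetry.csv") as f:
    for row in csv.DictReader(f):
        if float(row["pBrakeF"]) > 10:
            braking_points.append((float(row["NGPSLatitude"]),
                                   float(row["NGPSLongitude"])))

print(len(braking_points), "samples with pBrakeF > 10%")
```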

Oh to have some time to play with this more fully…;-)

PS for alternative views over the data, check out my other F1 telemetry data visualisations.

Fragments – Open Access Journal Data

Some time ago, I put together a recipe for watching over recent contents lists from a set of journals listed in a reading list (Mashlib Pipes Tutorial: Reading List Inspired Journal Watchlists). But what if we wanted to maintain a watchlist over content that is published in just open access journals?

Unfortunately, it seems that TicTocs (and, I think, the API version, JournalTOCs) don’t include metadata that identifies whether or not a journal is open access. A quick scout around, as well as a request to the twitter lazyweb, turned up a few resources that might contribute to a service that, if nothing else, returns a simplistic “yes/no” response to the query “is the journal with this ISSN an open access journal?”

– a list of journals in TicTocs (CSV);
– a list of open access journals listed in DOAJ (CSV);
– SHERPA/RoMEO API (look up open access related metadata for journals?)

So – as a placeholder for myself: think about some sort of hack to annotate and filter TicTocs/JournalTOCs results based on open access licensing conditions.
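The simplest version of that “yes/no” service is barely a dozen lines – a sketch assuming the DOAJ journal list CSV has been downloaded locally and has an ISSN column (the filename and column name would need checking against the real export):

```python
# Minimal "is this ISSN an open access journal?" lookup built from the
# DOAJ journal list CSV. The filename and column name are assumptions -
# check them against the actual DOAJ export.
import csv

def load_doaj_issns(path="doaj_journals.csv", issn_col="ISSN"):
    with open(path) as f:
        return set(row[issn_col].strip() for row in csv.DictReader(f)
                   if row.get(issn_col))

doaj_issns = load_doaj_issns()

def is_open_access(issn):
    return issn.strip() in doaj_issns

print(is_open_access("1234-5678"))  # True if the ISSN appears in the DOAJ list
```

That lookup could then sit behind a trivially simple web service, or be used to add an “open access” flag to a JournalTOCs results feed before filtering it.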

Following a quick bounce around of ideas with @kavubob on Twitter, what else might this be used for? Ranking journals based on the extent to which their articles cite articles from open access journals, or ones that support some sort of open access publication?

Also – is it easy enough to find citation data at a gross level – e.g. the number of citations from one journal to other journals over a period of time? Colour nodes by whether they are OA or not, and size/weight the edges between journal nodes to show the number of references from one journal to another? Maybe normalise edge weight as a percentage of citations from one journal to another, and size nodes by number of references/citations?
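As a final placeholder, here’s roughly the shape such a graph view might take – all the citation counts and the open access set here are made up, purely to show the normalisation and colouring steps:

```python
# Sketch of a journal-to-journal citation graph, with nodes coloured by
# open access status and edge weights normalised as a percentage of the
# citing journal's outgoing citations. All the data here is made up.
import networkx as nx

# dummy data: (citing ISSN, cited ISSN, number of citations)
citations = [
    ("1111-1111", "2222-2222", 40),
    ("1111-1111", "3333-3333", 10),
    ("2222-2222", "1111-1111", 25),
]
open_access_issns = {"2222-2222"}  # in practice, the DOAJ set from the sketch above

G = nx.DiGraph()
for citing, cited, count in citations:
    G.add_edge(citing, cited, count=count)

# normalise each edge as a percentage of the citing journal's outgoing citations
for node in G.nodes():
    total = sum(d["count"] for _, _, d in G.out_edges(node, data=True))
    for _, _, d in G.out_edges(node, data=True):
        d["weight_pc"] = 100.0 * d["count"] / total if total else 0.0

# colour nodes by open access status, ready for plotting or export
for node in G.nodes():
    G.nodes[node]["colour"] = "green" if node in open_access_issns else "grey"

print(list(G.edges(data=True)))
```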