Archive for the ‘Stirring’ Category
A few minutes or so ago, @sou_airport tweeted:
Welcome to new followers of Southampton Airport. Since the snow, followers have doubled and we will keep you up to date with news and offers
With recent news stories deploring the state of information provision, the occasional tweets from @sou_airport regarding the status of the airport have been handy…
- “Southampton Airport is to open from 06:30am today- some knock-on delays are expected due to the weather. Pls check with airlines for info.”
- “Southampton Airport currently has capacity on Flybe flights to Amsterdam, Paris, Dusseldorf and Brussels up to Christmas.”
If they’re just going to start tweeting ads and offers, though, I’m not interested, and will likely unfollow… Just because a company suddenly opens up a comms channel that folk sign up to doesn’t mean it needs to be a marketing channel – the payoff is in having fewer disgruntled passengers, and fewer folk turning up to fly only to find cancelled flights, adding to the airport’s problems…
If they want to consign me to following a backwater channel, such as @sou_airport_status, that’s fine… Just don’t add noise to the signal if all I want is signal…
Something that would be quite handy would be an autoresponder. The Southampton website already has live flight arrival/departure info, and a form that lets you enter either a flight number or a departure/destination airport for arrivals and departures respectively.
The “accessible” page provides a simpler view of the information, though the URL is not as friendly as it might be…:
The search URL is even more hostile:
which looks like it requires the presence of tokens or dynamically created data.
So what might an autoresponder look like? Even if all we do is redirect the flight status information, we might imagine an exchange like the following:
would return something like:
@example BE173 dep 15.40 Tues 21/10 LEEDS B/FORD SCHEDULED
or for BE3646:
@example BE3646 BERGERAC sch_dep 14:05 arr ESTIMATED 19:03
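Just to sketch the idea (and this is purely hypothetical – made-up status data, no real Twitter API calls), an autoresponder needs little more than a pattern match on the flight number in an incoming mention, and a lookup against status data scraped from the live flight info pages:

```python
import re

# Hypothetical status records - in practice these would be scraped from
# the airport's live arrivals/departures pages.
FLIGHT_STATUS = {
    "BE173": "dep 15.40 Tues 21/10 LEEDS B/FORD SCHEDULED",
    "BE3646": "BERGERAC sch_dep 14:05 arr ESTIMATED 19:03",
}

# Match a mention followed by something that looks like a flight number
MENTION = re.compile(r"@\w+\s+([A-Za-z]{2}\d{1,4})")

def autorespond(tweet, sender):
    """Return a reply tweet for a flight status request, or None."""
    m = MENTION.search(tweet)
    if not m:
        return None
    flight = m.group(1).upper()
    status = FLIGHT_STATUS.get(flight)
    if status is None:
        return "@{0} Sorry, no status found for {1}".format(sender, flight)
    return "@{0} {1} {2}".format(sender, flight, status)

print(autorespond("@sou_airport BE173", "example"))
# @example BE173 dep 15.40 Tues 21/10 LEEDS B/FORD SCHEDULED
```

The fiddly bit in practice wouldn’t be the responder, of course – it would be getting reliable structured status data out of the airport’s website in the first place.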
Alternatively, the airport could just re-factor its (paid for) SMS service (which I think is operated by BAA, as it seems to be available for several other UK airports):
Flying Messenger – check flights by SMS
Need flight updates on the move? Flying Messenger lets you check flights at Southampton Airport by mobile phone text message (SMS).
How to use it
Text sou (for Southampton) plus your flight number to 82222.
So for example, if your flight number is BE1846, your text should read:
You’ll receive a reply giving the current status of your flight.
When to use it
This service is available up to 12 hours ahead of the flight’s scheduled time, and up to four hours afterwards. Requests must be sent on the scheduled date.
If you need to set up an alert in advance, please see Flying Messenger PLUS.
What it costs
Each Flying Messenger request costs 25p plus your network’s SMS message rate. If you’re using a pre-pay phone, you need to have enough credit to cover the cost of the service.
As to the amount of distress caused to folk traveling over the last few days, and the stress that several UK airports have been under because of travelers turning up for already cancelled flights, it amazes me that BAA aren’t willing to buy a bundle of free texts and offer a free SMS autoresponder information service…
I haven’t managed to find an official announcement of this yet, but it seems as if there’s a new addition to the Apple App Store in the form of a “Study at the OU” app: StudyAtOU.
So what does it do? Essentially, it seems to provide an app-based way in to the Open University course catalogue. Here’s the opening screen:
(Err – wot? No OU logo? Is it not an *official* app then?! Though the iTunes listing does carry branding… If it is an official app, how well does it sit with OU ice/brand police guidelines, I wonder…?;-)
The app provides a convenient way of navigating through the OU’s course catalogue, providing a brief overview of subject areas:
and research degrees:
Clicking through on individual course links also leads through to a description of the corresponding course.
Looking through some of the descriptions, it seems as if there isn’t any information about forthcoming presentation dates, course fees, and other ‘administrative’ information (such as level information), nor does there appear to be a ‘click here to register’ option. (Hmm… I don’t think the OU course registration system accepts Paypal yet? If it did, I guess something like the PayPal Mobile Payments library would allow this to be integrated into the StudyAtOU app?)
If you want to share a link to a course with other people, there are several ways of doing this:
Facebook and Twitter based sharing requires you share access to those services, of course. Here’s the prompt you get when you try to share a link using Facebook:
and the corresponding prompt from Twitter:
(By the by, seeing this app reminds me of my old, old WAP demo for navigating Relevant Knowledge short course descriptions… TSCP Experimental WAP Service;-)
Also in passing, I asked where the data that feeds the app comes from. It seems to be an XML source that also feeds the data.open.ac.uk Linked Data store, rather than being fed from the Linked Data store itself. Whilst the Linked Data store was presumably not available whilst the app was being developed, it would be nice to think that services such as the app might call on the datastore and actually start using the data contained within it.
One thing worth noting about the app is that it is not self-contained. The qualification, subject area and course descriptions are all pulled in live from the web, which means if you lose your network connection, you get this:
In that sense, the StudyAtOU app is a hybrid app, with a small amount of offline functionality complemented by network-sourced content. I guess there’s a trade-off going on here between connectivity/bandwidth requirements and memory requirements/app size.
If the app was displaying course start dates/pricing, then I could see why there might be a good reason for the app to want to display ‘live’ data, but for course descriptions? (I guess you could argue that you always want the catalogue to only list current courses, but that could easily be managed with a time to live field associated with each course on the app, given the review dates for the expiry of courses are well known?)
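To be clear about what I mean by a time-to-live approach – this is a sketch of the idea, not anything the app actually does – each cached course description would carry its own expiry time, after which the app falls back to a live fetch:

```python
import time

class TTLCache:
    """Cache course descriptions with a per-entry expiry time, so the
    app only refetches a course once its (known) review date has passed."""

    def __init__(self):
        self._store = {}  # course_code -> (expires_at, description)

    def put(self, course, description, ttl_seconds):
        self._store[course] = (time.time() + ttl_seconds, description)

    def get(self, course):
        entry = self._store.get(course)
        if entry is None:
            return None
        expires_at, description = entry
        if time.time() >= expires_at:
            del self._store[course]  # stale: force a live refetch
            return None
        return description

cache = TTLCache()
cache.put("T189", "Digital photography: creating and sharing better images", 3600)
print(cache.get("T189"))
```

Set the TTL from the course’s published expiry/review date and the catalogue browsing would work offline, with live fetches only where they’re actually needed.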
Anyway, the app’s there, and usable, and available for download, and offers a great starting point…but will it be allowed to evolve, I wonder, in an agile fashion, now that it’s there…?
PS just because, I wondered where http://www.open.ac.uk/studyatou might point to. At the moment, it seems to resolve to http://www3.open.ac.uk/study/explained/how-do-i-apply.shtml:
The top-level “Study at OU” web presence, and the one you get to from the OU website’s Study at the OU top level navigation actually seems to have its home at http://www3.open.ac.uk/study/. Just sayin’…;-)
As I type, there is a spinoff meeting (that I’m not at) from the #RSWebSci event operating from a location near Milton Keynes (@martinjemoore: “At tremendous Kavli Centre, Royal Society’s base in Buckinghamshire, for satellite meeting about future of the web & web science #rswebsci”) and being held under the Chatham House rule:
“When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed”.
At the #RSWebSci event itself, we could identify the participants and their affiliations from the backchannel, via any tweets they made from the event:
So for example, when @timdavies mentions: “Nigel Shadbolt asking “What are Chatham house rules for Twitter” at #rswebsci… Opps, er, I mean Someone asking.” we know that both Tim Davies (“Consultant and action researcher focussing on civic engagement and social technology. Specific focus on youth engagement & open data”) and Nigel Shadbolt are at the event. And from tweets like “#RSwebsci hilarious moment when one unattributable person forgets other unattributable person’s name”, we can assume that the originator of that tweet is also at the event (and maybe that they are not either of the unattributable persons mentioned?)
If we know an event is happening, and we know the sorts of people it is likely to attract (e.g. by looking at the Twitterers from the last couple of days of the #rswebsci event), if a Twitter blackout is in operation we can look to Twitter histories to see who was not tweeting during the event who might normally be expected to be tweeting over that period, and tentatively locate them at the event. We can also rule out people who have declared they aren’t there (@cameronneylon: “I decided not to go to #RSWebsci and satellite meeting because I had too much “proper” work to do. Think I probably picked wrong…”), unless they’re bluffing…?!;-)
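As a sketch of what that silence analysis might look like (entirely made-up data, and a crude tweets-per-hour heuristic that I’m just inventing for illustration):

```python
def likely_attendees(tweet_times, event_start, event_end, min_rate=5):
    """Flag users who normally tweet often but were silent during the event.

    tweet_times: dict mapping username -> sorted list of tweet timestamps
    (here, hours since some arbitrary epoch). A user who would normally be
    expected to post at least `min_rate` tweets in a window the length of
    the event, but posted nothing during it, is tentatively placed there.
    """
    window = event_end - event_start
    candidates = []
    for user, times in tweet_times.items():
        if len(times) < 2:
            continue  # not enough history to estimate a rate
        span = max(times) - min(times)
        if span <= 0:
            continue
        rate = len(times) / span  # rough tweets-per-hour over their history
        during = [t for t in times if event_start <= t <= event_end]
        if rate * window >= min_rate and not during:
            candidates.append(user)
    return sorted(candidates)

# Made-up example: one habitual tweeter goes quiet between hours 50 and 60
tweets = {
    "quiet_attendee": list(range(0, 50)) + list(range(61, 100)),
    "still_tweeting": list(range(0, 100)),
    "rare_tweeter": [0, 100],
}
print(likely_attendees(tweets, 50, 60))  # ['quiet_attendee']
```

Lots of false positives in practice, obviously (people do sleep, and travel), but as a way of generating candidates to cross-check against other signals it might be a start.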
From tweets so far, we know via @lescarr that there are several sessions taking place (“”Breakthroughs in Web Science”, “Dark Web”, “Networks in web science”, “Govt open data” and “Collaborative Science” sessions at #RSwebsci”). From clustering the folk who we know to be at, and suspect to be at the event, we might tentatively allocate them to different sessions, with a particular probability. If different hashtags are used for each session, the sort of thing @briankelly (who I don’t think is at the event) often lobbies for, it makes conversation analysis maybe a little easier?
On the topic of conversation analysis, or at least time series analysis (using a tool such as TimeFlow, for example?), we might be able to use some form of it to identify who said what from inspecting the timeline. For example, if @ianmulvany is a truth teller, and says at 9.47 “#RSWebsci time to pitch my idea”, we can monitor tweets over the next few minutes to see if any ideas that are reported are the sort of thing he might have come up with, given we can find out easily enough that he works for Mendeley. So when @timdavies mentions at 9.53 that “#rswebsci @? “Crowdsourcing & crowdcurating more an art than a science right now” <– Shd it develop as science? Or best in domain of art…”, that crowdsourcing remark is something I could imagine Ian saying (P=0.7?). The question as to whether it’s science or art is presumably Tim Davies’ own?
Just by the by, Tim’s use of @? comes from a suggestion I made about a possible “chatham bot” that would accept DMs, anonymise the sender and replace any @name attributions with @?. Thinking about it a little more, it would be easy enough for folk to see who was friended by the chatham bot, and narrow down at least the sender of the tweet to someone on that list. [If by implication we assume @? is a twitterer, rather than a participant not on twitter, we might further narrow down who said what in this case to someone on Twitter whose Twitter username the person who ‘mentioned’ them knows.] Chris Gutteridge, who is also not at the event (“@lescarr eh? I didn’t know there was a Wednesday bit! #rswebsci any of it streamed?”), suggested “…creat[ing] a rswebscichatham twitter account and tell all people in the room the username/password. #rswebsci”, which gets round this problem of preserving the anonymity of the sender, as might the creation of a Birdherd account (via @jamestoon)?
Okay, enough of that… except to wonder: what other sorts of traffic analysis might we apply to a hashtag twitter stream, and a “likely candidates” twitter use analysis, over the duration of the event? Would it be easier to preserve the sense of the Chatham House rule if a hashtag was not used?
PS doh! I forgot to raise the point that first came to mind: how would it be possible to remotely attend a Chatham House event via a public backchannel? (Which is where the chathambot anonymiser came in…)
PPS just to note, as the clock ticks on, and the day warms up, other folk who were at #rswebsci on Monday and Tuesday, but who are not at today’s event, are now tweeting again using the hashtag, which means that the channel now has added noise on top of the discussions from today’s satellite event… The easiest way I can think of following today’s events is to create a list of folk known or suspected to be there, and follow that list through an additional #rswebsci filter?
PPPS [via @timdavies] Chatham House rule FAQ covers Twitter as follows:
Can I ‘tweet’ whilst at an event under the Chatham House Rule?
A. The Rule can be used effectively on social media sites such as Twitter as long as the person tweeting or messaging reports only what was said at an event and does not identify – directly or indirectly – the speaker or another participant. This consideration should always guide the way in which event information is disseminated – online as well as offline.
It also says:
Q. Can a list of attendees at the meeting be published?
A. No – the list of attendees should not be circulated beyond those participating in the meeting.
which can in part be inferred from various uses of Twitter, and maybe also any public geolocation services used by participants. Which is to say, if you know where an event is, you can maybe look for people near there..?
Consternation on the twittertubes this morning about Wolverhampton’s i-CD: Intelligent Career Development, which seeks to offer “a completely new approach to higher education”:
Historically, people have either gone to university or, more recently, universities have tried to come to them. That is to say, they have opened themselves to part-time students in the evenings or projected learning materials via distance learning or tailored their programmes to employers’ needs. However, they have never previously attempted to do all these things in a single programme. Via i-CD, the University of Wolverhampton is for the first time providing low-cost, flexibly-delivered, workplace-based, market-driven, fully-accredited, higher education.
(Err… I think the OU does that actually, through work based learning, an increasing number of vendor qualifications from Cisco and Microsoft that also provide academic credit, sector based courses and qualifications, and so on… All part-time, at a distance, with support (and online community), and some of them in the workplace too.)
So for example, I think @dkernohan sees parts of his nightmare vision coming true?
What struck me is that the Wolverhampton offering is being built around 10 week courses, the same length as the OU short courses (which in the OU case result in 10 CAT points of academic credit, corresponding to a nominal 10 hours study a week).
Also coming in at 10 weeks is the currently running PLENK2010 Massive Online Open Course (hmm.. does that URL scale for other courses?), and close behind, at 12 weeks, the forthcoming openEd 2.0 course on “Business and management competencies in a Web 2.0 world”:
a FREE/OPEN course targeting business students and practitioners alike. The course consists of two strands: an academic and a professional practice based strand, though both strands can be taken together. Furthermore, the openEd 2.0 course is MODULAR, thus learners can also “pick” the individual modules they are interested at.
Whilst I’m encouraged to see the rise of open courses (and there’s an increasing number of them: for example, P2PU are currently running a course on Open Journalism on the Open Web), I do think the OU is maybe missing a trick, and not leading the way in terms of innovating around open online courses…
…because the OU has been doing online education for years. Our first fully online course (T171, as authored by Martin Weller and John Naughton, amongst others) first presented in 1999 (I think), with thousands of students per presentation. The current Royal Photographic Society (RPS) recognised short course on Digital Photography regularly pulls in large numbers of students (in the OU, courses with fewer than 250 students are small…) and the new CompTIA approved Linux course is already a middle-sized course… (Notice anything about those courses…? Recognition from outside academia too…)
So why isn’t the OU experimenting with running massive open online courses, with an option to “upsell” accreditation to students who want the formal academic credit? Maybe providing the support typically offered to students taking OU courses wouldn’t be cost-effective in an open course, although the wholly online short courses at least have already foregone personal tutor support. Expecting forum moderators to act as sales reps for accreditation is maybe not the sort of support we’d like to see being offered…?!
I’ve mentioned before that open educational resources might benefit from being created in public, possibly in an open course setting… So maybe the time is now right to start trialing open courses (uncourses?;-), maybe informed by requests from (potential) students about the courses they’d like to see, creating the materials in near real-time (and drawing on other open resources, “educational” and otherwise) for the open presentation, then providing students who want to gain formal credit with some sort of assessment and accreditation?
How might this formal recognition be achieved?
- possibly via a semi-formal OU certificate that can be formally recognised through a credit transfer route?
- maybe using variant of the Career development and employability course container that lets students “use [their] workplace as a context for learning, and develop [their] ability to apply [their] learning to improve [their] practice at work”)?
- or how about the Make your experience count course container, which “gives you the opportunity to gain 30 credit points towards higher education qualifications by drawing on your past learning experiences”?
With a little bit of wit and imagination, I’m sure we could either finesse one of our current “prior experience” courses to support the award of credit for open online courses, or come up with a new 10 point container: Open Education Course Credit.
PS Hmmm, as an experiment, I wonder what would happen if someone who had taken an open online course tried to get it accepted “in partial fulfilment” of one of the accreditation of prior experience containers mentioned above? If you try it, let me know how you get on…;-)
I was at a meeting yesterday looking at rebooting the OU’s Facebook strategy. With a bit of luck, this means that we’ll be doing another push on the OU Facebook apps that were developed several years ago now and which I still believe provide a sound basis for a range of community building and social learning support services (Course Profiles – A Facebook Application for Open University Students and Alumni).
The apps were largely developed out of time and in stolen time, and it seems that things are likely to continue in this way (which is both a plus – freeing us from constraints of interminable committees wanting to plan strategies rather than jfdi, and a minus – @liamgh is the only person we trust with the code which means any maintenance falls to him ;-)
For those who don’t remember the apps we developed, there were two: Course Profiles, which allowed students to declare the courses they had taken, were taking and intended to take, and then provided a range of services around that information (find friends on a course, find a study buddy, link to course information or course related OpenLearn resources, get course recommendations); and My OU Story, where students could maintain a “status diary” about their progress on a course, along with a mood indicator so they could track their mood over a course, and other app users could add supportive comments. (I’d be surprised if anyone in the Student Services retention project has even heard about this project, but looking at some of the peer support that has gone on within the context of that app, I’d argue it might be contributing to retention…)
Course Profiles quickly attracted several thousand users following the initial push just after it was first launched, so it evidently served a need then that presumably still exists today, i.e. a badging mechanism for celebrating course achievements and declaring future study intentions. One thing that might be worth looking at is the rate at which early adopters of Course Profiles have continued to update it, and report on the extent to which their original “future study” intentions converted to actual course registrations.
There’s also going to be a push on growing the number of fans on the official OU profile page. I’m not sure what plan @stuartbrown has for growing the numbers (for the task appears to have fallen to him…;-) but with a bit of luck the apps as well as the fan page will get highlighted through some of the official communication channels.
We also had a bit of discussion around other potential apps. Something I’d quite like to see would be a gallery app pulling images from the various flickr groups that have popped up around the T189 Digital Photography short course. Alumni of that group are already pretty active, and have just launched their first online exhibition, so if we could provide a channel that increases the audience for their show, and if they’re happy for us to amplify it via an OU Facebook app, that might be quite a fun thing to try as a community building app… (For more about the background to the exhibition, see Inspiring Learners; also see the T189 Graduates’ Exhibition).
(I also wonder if a similar gallery style app might work to showcase some of the games that students on T151 Digital worlds manage to create, all with their permission of course…)
Someone (I forget who) also suggested a “Share on Facebook” button within the gallery environment students use to build their portfolio whilst they take T189 (limited so that sharing was limited to photographs that a student had uploaded themselves, of course). This would amplify a student’s work and progress on a course to their Facebook friends, and provide their friends with a glimpse of what sorts of activities are involved in this particular OU course.
One thing I never even half managed to convince anybody of was the importance of the data collected by the Course Profiles app in particular, though I did have a go at a few quick’n’dirty takes on this, such as OU Course Profiles Facebook App – Treemaps, Hierarchical Course Clusters from Course Profiles App and Tinkering with Google Charts (which started to consider what a course team dashboard view might look like). I was mulling this over again last night, and the following uses came to mind if we started to reconcile Course Profiles with institutional data (something we were always wary of, but anyway – here’s the thinking…;-)
- predictors and conversion rates: I’m not sure if Liam is logging when/how users change their status updates, but it’d be useful to know what percentage of users are updating their Course Profiles (e.g. from ‘currently taking’ to ‘taken’ courses, or more interestingly ‘intend to take’ to taking) and whether an “intend to take” course declaration is a good predictor of whether students do actually take a course. There’s an obvious quick win here for a possibly intrusive marketing campaign chasing folk who’ve declared an ‘intend to take’ course but don’t appear to have followed up on it;
- predicting course sizes: with several thousand users, does the sample of users on Course Profiles predict future course enrollment numbers? As far as I know, no-one in planning ever came to us asking to peek at our data to explore this. Nor did any more than a couple of Course Chairs ever seem to think it was interesting that we had stated intentions about course pathways, and that for new courses in particular we might be able to spot whether students were signing up for a course based on a pathway the course team was hoping for?
- retention: is the retention rate of students on a course who are on Facebook with Course Profiles and/or My OU Story different to the retention rate across the course as a whole? Does the fact that students who have declared ‘intend to take’ courses on the Course Profile correlate with their likelihood of completing an award?
- course planning and recommendation: on the one hand, courses appear to have natural student numbers; on the other, working out what courses to take in what order for a particular degree given various factors (such as courses already taken, course exclusions etc) can be a confusing affair. At the moment, I believe a rule based support tool is being explored to help with course recommendations, but how well do those suggestions compare with a simple clustering based on Course Profiles data?
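For what it’s worth, the first of those – conversion rates from declared intentions – is easy enough to compute if the status changes are being logged; here’s a sketch with made-up data (I don’t know what Liam’s actual logging looks like):

```python
def conversion_rate(status_log, frm="intend to take", to="taking"):
    """Fraction of (user, course) pairs that moved from status `frm`
    to status `to`, given a time-ordered log of status declarations.

    status_log: list of (user, course, status) tuples in time order.
    """
    declared, converted = set(), set()
    for user, course, status in status_log:
        key = (user, course)
        if status == frm:
            declared.add(key)
        elif status == to and key in declared:
            converted.add(key)
    if not declared:
        return 0.0
    return len(converted) / len(declared)

# Hypothetical log: two intentions declared, one followed through
log = [
    ("alice", "T189", "intend to take"),
    ("bob", "T189", "intend to take"),
    ("alice", "T189", "taking"),
]
print(conversion_rate(log))  # 0.5
```

Run per course, or per cohort of early adopters, that single number would already answer the “is intend-to-take a useful predictor?” question above.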
PS Just in passing, it’s worth noting that as with other groups who’ve used Facebook to mount campaigns against unpopular corporate decisions, OU students are no different… Open University curbs Tesco ‘clubcard degree’ scheme .
Go to any of the data.gov.uk SPARQL endpoints (that’s geeky techie scary speak for places where you can run geeky techie datastore query language queries and get back what looks to the eye like a whole jumble of confusing Radical Dance Faction lyrics [in joke;-)]) and you see a search box, of sorts… Like this one on the front of the finance datastore:
So, pop pickers:
One thing that I think would make the SPARQL page easier to use would be to have a list of links that would launch one of the last 10 or so queries that had run in a reasonable time and returned at least some results, displayed down the left-hand side – so n00bs like me could at least have a chance of seeing what a successful query looked like. Appreciating that some folk might want to keep their query secret (more on this another day…;-), there should probably be a ‘tick this box to keep your query out of the demo queries listing’ option when folk submit a query.
(A more adventurous solution, but one that I’m not suggesting at the moment, might allow folk who have run a query from the SPARQL page on the data.gov.uk site to “share this query” to a database of (shared) queries. Or if you’ve logged in to the site, there might be an option of saving it as a private query.)
That is all…
PS if you have some interesting SPARQL queries, please feel free to share them below or e.g. via the link on here: Bookmarking and Sharing Open Data Queries.
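To get things started, here’s about the simplest query there is – “show me any ten triples” – along with a Python helper for turning it into a GET request URL. (The endpoint URL below is a guess on my part; check the datastore’s own documentation for the real one, and note that no request is actually sent here.)

```python
from urllib.parse import urlencode

# Hypothetical endpoint URL - the exact data.gov.uk endpoint paths
# are assumptions here.
ENDPOINT = "http://services.data.gov.uk/finance/sparql"

# The "hello world" of SPARQL: any ten (subject, predicate, object) triples
QUERY = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10
"""

def query_url(endpoint, query):
    """Build the GET URL for a SPARQL query (no request is sent)."""
    return endpoint + "?" + urlencode({"query": query})

print(query_url(ENDPOINT, QUERY))
```

Pasting the SELECT into the endpoint’s search box directly does the same job, of course – but having the URL form means a “successful query” can be shared as a simple link, which is rather the point of the post.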
PPS from @iand: “shouldnt that post link to the similar http://tw.rpi.edu/weblog/2009/10/23/probing-the-sparql-endpoint-of-datagovuk/“; and here’s one from @gothwin: /location /location /location – exploring Ordnance Survey Linked Data.
PPPS for anyone who took the middle way in the vote, then if there are any example queries in the comments to this post, do they help you get started at all? If you voted “what are you talking about?” please add a comment below about what you think data.gov.uk, Linked Data and SPARQL might be, and what you’d like to be able to do with them…
A blog post on the Google GeoDevelopers blog last week announced:
Currently we are in the process of piloting certifications for several new APIs. We are building out certifications for KML, Google Earth Enterprise, and 3D in preparation for our first master certification, the Google Qualified Geo Web Developer. We’re also working on certifications for the AJAX Search API, Enterprise Apps, and Android.
(It seems like I was a little ahead of the curve when I blogged this almost 4 years ago: Google/Yahoo/Amazon Certified Professionals…;-)
There are already certified programmes for Cisco and Microsoft, of course, so it was only a matter of time before we started seeing badges like this one:
I wonder when we’ll be seeing a Google curriculum for computer science degrees too, building on the resources collected as part of the Google Code University? It seems they’re already trying to compete with the OU’s new short course Linux: an introduction with their Tools 101 tutorials, which include intros to the Linux command line and grep;-) (It would be no loss to HE, if Google did take on compsci education, of course, because Computer Science degrees are ever harder to find, and much harder to do (too much reliance on logic and algorithm design) than Computing degrees…) (Hmmm, a case of HE dumping the academic in favour of the, err, more practical?!;-)
Of course, it may be that the Goog will get into delivering teaching qualifications?
One school subject area where I think they could drive curriculum development is geography – you do know they have a Geo Education website, don’t you…?;-)
Why does this matter? The internet based communications revolution hasn’t yet had a huge impact on the way we examine, assess and validate learning in formal academic education or on the curricula that are delivered. Maybe it shouldn’t. But whilst corporates have always produced educational promo packs, their reach has been limited to those students studying under teachers who have made use of those materials. And now we have search engines, and students will be coming across learning materials with corporate branding in the course of their own research. Maybe the kids will discount these materials as ‘tainted’ in some corporate way? Maybe they’ll see them as training materials and discount them as irrelevant to their academic educational studies? Or maybe they’ll see them as part of that userguide to the world that they’ll be referring to for the rest of their lives?
When I first joined the OU as a lecturer, I was self-motivated, research active, publishing to peer reviewed academic conferences outside of the context of a formal research group. That didn’t last more than a couple of years, though… In that context, and at that time, one of the things that struck me about the OU was that research active academics were expected to produce written work for publication in two ways: for research, through academic conferences and journals; and for teaching, via OU course materials.
The internal course material production route was, and still is, managed through a process of course team review in the authoring stage and then supported by editors, artists and picture researchers for publication, although I don’t remember so much involvement from media project managers ten years or so ago, if they even existed then? Pagination and layout was managed elsewhere, and for authors who struggled to use the provided document templates, the editor was at hand for technical review as well as typos and grammar, as well as reference checking, and a course secretary could be brought in to style the document appropriately. Third party rights were handled by the course manager, and so on.
In contrast, researchers had to research and write their papers, produce images, charts, tables as required, and style the document as a camera ready document using a provided style sheet. In addition, published researchers would also review (and essentially help edit) works submitted to other journals and conferences. The publisher contributed nothing except perhaps project management and the production and distribution of the actual print material (though I seem to remember getting offprints, receiving requests for them, and mailing them out with an OU stamp on an OU envelope).
Although I haven’t published research formally for some time, I suspect the same is still largely true nowadays…
Given that the OU is a publication house, publishing research and teaching materials as a way of generating income, I wonder if there is an opportunity for the Library to support the research publication process by providing specialist support for research authors, including optimising their papers for discovery!
At the current time, many academic libraries host their institution’s repository, providing a central location within which are lodged copies of academic research publications produced by members of that institution. Some academic publishers even offer an ‘added value’ service in their publication route whereby a published article, as written, corrected, laid out, paginated, rights cleared, and rights waived by the author (and reviewed for free by one or more of their peers) will be submitted back to the institution’s repository.
[Cue bad Catherine Tate impression]: what a f*****g liberty… [!]
So as the year ends, here’s a thought I’ve ranted to several people over the year: academic libraries should seize the initiative from the academic publishers, adopt the view that the content being produced by the academy is valuable to publishers as well as academics, that the reputation of journals is in part built on the reputation of the institutions and academics responsible for producing the research papers, and set up a system in which:
- academics submit articles to the repository using an institutional XML template (no more faffing around with different style sheets from different publishers), at which point they are released using a preview stylesheet as a preprint;
- journals to which articles are to be submitted are required to collect the articles from the repository. Layout and pagination is for them to do, before getting it signed off by the author;
- optionally, journal editors might be invited to bid for the right to publish an article formally. The benefit of formal publication for the publisher is that when a work is cited, the journal gets the credit for having published the work.
That is all… ;-)
PS RAE/REF style accounting could also be used in part to set journal pricing and payments. Crap journals that no-one cites content in would get nothing. Well cited journals would be recompensed more generously… There would of course be opportunities for gaming the system, but addressing this would be similar in kind to implementing measures that search engines based on PageRank style algorithms take against link farms, etc.
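By way of illustration only – this is a toy, not a proposal for the actual accounting – a PageRank-style weighting over a journal citation graph might look something like this:

```python
def journal_rank(citations, damping=0.85, iterations=50):
    """Toy PageRank over a journal citation graph.

    citations: dict mapping journal -> list of journals it cites.
    Returns a score per journal; well-cited journals score higher,
    which could then feed a pricing/payment weighting.
    """
    journals = list(citations)
    n = len(journals)
    rank = {j: 1.0 / n for j in journals}
    for _ in range(iterations):
        new = {j: (1 - damping) / n for j in journals}
        for j, cited in citations.items():
            if cited:
                share = damping * rank[j] / len(cited)
                for c in cited:
                    if c in new:
                        new[c] += share
            else:
                # dangling journal (cites nothing): spread its rank evenly
                for c in journals:
                    new[c] += damping * rank[j] / n
        rank = new
    return rank

# Tiny made-up graph: A is cited by both B and C, so it should rank highest
ranks = journal_rank({"A": ["B"], "B": ["A"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # A
```

The link-farm analogy carries over directly: a cluster of journals citing each other to inflate their scores is exactly the structure PageRank’s countermeasures are designed to detect.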
So it seems that Lord Mandelson “says he expects students to adopt a more consumer-led approach to their university education” (Mandelson backs consumer students).
So as well as students championing their (consumer) rights, I guess that means the marketing folk will also get the opportunity to hatch all sorts of new marketing plans…
…like Tesco ClubCard Deals:
and Gift Vouchers:
As well as the sweatshirt and scarf merchandise, there’ll also be the course materials shop?
And I guess a second hand market in course materials will also emerge?
So what’s new? ;-)