In part for a possible OU Library workshop, in part trying to mull over possible ideas for an upcoming ILI2015 workshop with Brian Kelly, I’ve been pondering what sorts of “data literacy” skills are in-scope for a typical academic library.
As a starting point, I wonder if this slicing is useful, based on the ideas of data management, discovery, reporting and sensemaking.
It identifies four different, though interconnected, sorts of activity, or concern:
- Data curation questions – research focus – covering the management, archiving and dissemination of research data. This is mainly about policy, but it begs the question of who to go to for the technical “data engineering” issues, and assumes that the researcher can do the data analysis/data science bits.
- Data resourcing – teaching focus – finding, and perhaps helping identify processes to preserve, data for use in a teaching context.
- Data reporting – internal process focus – capturing, making sense of/analysing, and communicating data relating to library related resources or activities; to what extent should each librarian be able to use and invoke data as evidence relating to day job activities? This could include giving data to course teams about resource utilisation, or to research teams to demonstrate impact in terms of tracking downloads and use of OU published resources.
- Data sensemaking – info skills focus – PROMPT in a data context, but also begging the question of who to go to for “data computing” applications or skills support (cf. academic/scientific computing support, application training); this also relates to ‘visual literacy’ in the sense of interpreting data visualisations, and to methods for engaging in data storytelling and academic communication.
Poking in to each of those areas a little further, here’s what comes to mind at first thought…
The library is often the nexus of activity around archiving and publishing research papers as part of an open access archive (in the OU, this is via ORO: Open Research Online). Increasingly, funders (and publishers) require that researchers make data available too, often under an open data license. Into this box I’m thinking of those activities related to supporting the organisation, management, archiving, and publication of data related to research. It probably makes sense to frame this in the context of a formal lifecycle of a research project and either the various touchpoints that the lifecycle might have with the library, or those areas of the lifecycle where particular data issues arise. I’m sure such things exist, but what follows is an off-the-top-of-my-head informal take on it…!
Initial questions might relate to putting together (and costing) a research data management plan (planning/bidding, data quality policies, metadata plans etc). There might also be requests for advice about sharing data across research partners (which might extend privacy or data protection issues over and above any immediate local ones). In many cases, there may be concerns about linking to other datasets (for example, in terms of licensing or permissions, or relating to linked or derived data use; mapping is often a big concern here), or other, more mundane, operational issues (how do I share large datafiles that are too big to email?). Increasingly, there are likely to be publication/dissemination issues (how/where/in what format do I publish my data so it can be reused, how should I license it?) and legacy data management issues (how/where can I archive my data? what file formats should I use?). A researcher might also need support in thinking through consequences – or requirements – of managing data in a particular way. For example, particular dissemination or archiving requirements might inform the choice of data management solution from the start: if you use an Access database, or directory full of spreadsheets, during the project with one set of indexing, search or analysis requirements, you might find a certain amount of re-engineering work needs to be done in the dissemination phase if there is a requirement that the data is published at record level on a public webpage with different search or organisational requirements.
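As a concrete (and entirely illustrative) sketch of what a machine-readable version of some of those data management decisions might look like, here’s a minimal metadata record loosely in the style of a Frictionless Data datapackage.json file – all the names, paths and licences below are invented:

```python
import json

# A minimal, machine-readable record for a research dataset, loosely
# modelled on the Frictionless Data "datapackage.json" convention.
# All names, paths and licences here are invented for illustration.
metadata = {
    "name": "pilot-survey-2015",
    "title": "Pilot survey responses",
    "licence": "CC-BY-4.0",  # dissemination/licensing decided up front
    "sources": [
        {"title": "Third party postcode lookup", "licence": "OGL-3.0"},
    ],
    "resources": [
        {
            "path": "data/responses.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "respondent_id", "type": "integer"},
                    {"name": "response", "type": "string"},
                ]
            },
        }
    ],
}

# Keeping a record like this alongside the data makes licensing, schema
# and provenance decisions explicit from the start of the project.
record = json.dumps(metadata, indent=2)
print(record)
```

The point isn’t the particular format, but that writing the licensing and schema decisions down early is exactly what saves the re-engineering work at the dissemination phase.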
What is probably out of scope for the library in general terms, although it may be in scope for more specialised support units working out of the library, is providing support in actual technology decisions (as opposed to raising technology specification concerns…) or operations: the choice of DBMS, for example, or database schema design. That said, who does provide this support, or who should the library suggest might be able to provide such support services?
(Note that these practical, technical issues are totally in scope for the forthcoming OU course TM351 – Data management and analysis…;-)
For the reference librarian, requests are likely to come in from teaching staff, students, or researchers about where to locate or access different sources of data for a particular task. For teaching staff, this might include identifying datasets that can be used in the context of a particular course, possibly over several years. This might require continuity of access via a persistent URL to different sorts of dataset: a fixed (historical) dataset, for example, or a current, “live” dataset, reporting the most recent figures month on month or year on year. Note that there may be some overlap with data management issues, for example, ensuring that data is both persistent and provided in a format that will remain appropriate for student use over several years.
Researchers too might have third party data discovery or access requests, particularly with respect to accessing commercial or privately licensed data. Again, there may be overlaps with data management concerns, such as how to manage secondary/third party data appropriately so that it doesn’t taint the future licensing or distribution of first party or derived data.
Students, like researchers, might have very specific data access requests – either for particular datasets, or for specific facts – or require more general support, such as advice in citing or referencing sources of secondary data they have accessed or used.
In the data reporting bin, I’m thinking of various data reporting tasks the library might be asked to perform by teaching staff or researchers, as well as data work that has to be done internally within the library, by librarians, for themselves. That is, tasks within the library that require librarians to employ their own data handling skills.
So for example, a course team might want to know which library managed resources referenced from course material are being accessed, when, and by how many students. Or learning analytics projects may request access to data to help build learner retention models.
A research team might be interested in the number of research paper or data downloads from the local repository, or in citation analyses, or other sources of bibliometric data, such as journal metrics or altmetrics, for assessing the impact of a particular project.
And within the library, there may be a need for working with and analysing data to support the daily operations of the library – staffing requirements on the helpdesk based on an analysis of how and when students call on it, perhaps – or to feed into future planning. Looking at journal productivity (how often journals are accessed, or cited, within the institution), for example, when it comes to renewal (or subscription checking) time; or at a more technical level, building recommendation systems on top of library usage data. Monitoring the performance of particular areas of the library website through website analytics, or even linking out to other datasets and looking at the impact of library resource utilisation by individual students on their performance.
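That helpdesk staffing question can be sketched in a few lines, using invented enquiry timestamps and nothing beyond the Python standard library:

```python
from collections import Counter
from datetime import datetime

# Invented helpdesk log: one ISO timestamp per student enquiry.
calls = [
    "2015-05-11T09:05", "2015-05-11T09:40", "2015-05-11T11:15",
    "2015-05-12T09:20", "2015-05-12T14:05", "2015-05-13T09:55",
]

# Count enquiries by hour of day to see where the staffing demand falls.
by_hour = Counter(datetime.fromisoformat(t).hour for t in calls)
busiest_hour, n_calls = by_hour.most_common(1)[0]

print(f"Busiest hour: {busiest_hour}:00 ({n_calls} enquiries)")
```

Trivial as it is, it’s the sort of “day job” data handling skill the bullet list above suggests each librarian might be expected to have.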
In this category, I’m lumping together a range of practical tools and skills to complement the tools and skills that a library might nurture through information skills training activities (something that’s also in scope for TM351…). So for example, one area might be providing advice about how to visualise data as part of a communication or reporting activity, both in terms of general data literacy (use a bar chart, not a pie chart, for this sort of data; switch the misleading colours off; sort the data to better communicate this rather than that, etc.) and in terms of tool recommendations (try using this app to generate these sorts of charts, or this webservice to plot that sort of map). Another might be how to read, interpret, or critique a data visualisation (looking at crappy visualisations can help here!;-), or how to rate the quality of a dataset in much the same way you might rate the quality of an article.
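The “sort the data to better communicate” advice is easy to demonstrate, even with a crude text bar chart over invented usage figures:

```python
# Invented resource-usage counts; a pie chart of these would be hard to
# read, and an unsorted bar chart would hide the ranking.
usage = {"ebooks": 45, "print": 15, "ejournals": 120, "databases": 80}

# Sorting by value first makes the comparison the chart is meant to
# support immediately obvious.
ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
for name, count in ranked:
    print(f"{name:10s} {'#' * (count // 10):13s}{count}")
```

The same principle carries over directly to whatever charting app or webservice you end up recommending.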
At a more specialist level, there may be a need to service requests about what tools to use to work with a particular dataset – a digital humanities researcher looking for advice on a text mining project, for example.
I’m also not sure how far along the scale of search skills library support needs to go, or whether different levels of (specialist?) support need to be provided for undergrads, postgrads and researchers. Certainly, if your data is in a tabular format, even just as a Google spreadsheet, you become a much more powerful user if you can frame complex data queries (pivot tables, anyone?) or start customising SQL queries. Being able to merge datasets, filter them (by row, or by column), facet them, cluster them or fuzzy join them are really powerful data skills to have – and ones that can conveniently be developed within a single application such as OpenRefine!;-)
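By way of a small illustration of those merge-and-filter skills – here using Python’s built-in sqlite3 module rather than OpenRefine or a Google spreadsheet, over entirely invented loans data:

```python
import sqlite3

# An in-memory database standing in for two library datasets (rows invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE loans (student_id INTEGER, title TEXT)")
con.execute("CREATE TABLE students (student_id INTEGER, course TEXT)")
con.executemany("INSERT INTO loans VALUES (?, ?)",
                [(1, "Data Wrangling"), (2, "Stats 101"), (1, "SQL Basics")])
con.executemany("INSERT INTO students VALUES (?, ?)",
                [(1, "TM351"), (2, "M140")])

# A merge (join) plus a filter: which titles were borrowed by TM351 students?
rows = con.execute("""
    SELECT l.title
    FROM loans AS l
    JOIN students AS s ON l.student_id = s.student_id
    WHERE s.course = 'TM351'
    ORDER BY l.title
""").fetchall()

print([title for (title,) in rows])
```

The query framing – rather than the particular tool – is the transferable skill: the same join-then-filter pattern is what a pivot table or an OpenRefine cell cross does under the hood.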
Note that there is likely to be some cross-over here also between the resource discovery role described above and helping folk develop their own data discovery and criticism skills. And there may also be requirements for folk in the library to work on their own data sensemaking skills in order to do the data reporting stuff…
So, is that a useful way of carving up the world of data, as the library might see it?
The four different perspectives on data related activities within the library described above cover not only data related support services offered by the library to other units, but also suggest a need for data related skills within the library to service its own operations.
What I guess I need to do is flesh out each of the topics with particular questions that exemplify the sort of question that might be asked in each context by different sorts of patron (researcher, educator, learner). If you have any suggestions/examples, please feel free to chip them in to the comments below…;-)
In the context of something else, I mooted whether a particular project required an “open access academic library” as a throwaway comment, but it’s a phrase that’s been niggling at me, along with the associated “open access academic librarian”, so I’ll let my fingers do the talking and see what words come out…
Traditional academic libraries provide a range of services: they’re a home to physical content, and an access point to online subscription content; they provide managed collections that support discovery and retrieval of “quality” content; they promote skills development that allow folk to discover and retrieve content, and rate its quality, as well as providing expert levels of support for discovery and retrieval. They support teaching by forcing reading lists out of academics and making sure corresponding items are available to students. They have a role to play in managing a university’s research knowledge outputs, maintaining repositories of published papers and, in previous years, operating university presses. They are looking to support the data management needs of researchers, particularly with respect to the data publication requirements being placed on researchers by their funders. If they were IT empire builders, they’d insist that all academics could only engage with publishers through a library system that would act as an intermediary with the academic publishers and could automate the capture of pre-prints and supporting data; but they’re too gentle for that, preferring to ask politely for a copy, if we may… And they do cake – at least, they do if you go to meetings with the librarians on a regular basis.
To a certain extent, libraries are already wide-open access institutions – subject to attack, perhaps, but offering few barriers to entry, at least to their members, though unlikely to turn anyone with a good reason away – providing free-at-the-point-of-use access to materials held, or subscribed to, and often a peaceful physical location conducive to exploring ideas.
But what if the library needed to support a fully open-access student body – students engaged in an open education course of study, say, or an open research project – for a strict, rather than openwashed, definition of open? Or perhaps the library serves a wider community of people with problems that access to appropriate “academic” knowledge might help them solve? What would – could – the role of the library be, and what of the role of the librarian?
First, the library would have to be open to everyone. An open course has soft boundaries. A truly open course has no boundaries.
Secondly, the library would need to ensure that all the resources it provided a gateway to were openly licensed. So collections would be built from items listed on the Directory of Open Access Journals (DOAJ), perhaps? Indeed, open access academic librarians could go further and curate “meta-journal” readers of interest to their patrons (for example, I seem to remember Martin Weller experimenting with just such a thing a few years ago: Launching Meta EdTech Journal).
Thirdly, the open access academic library should also offer a gateway to good quality open textbook shelves and other open educational resources. As I found to my cost only recently, searching for useful OERs is not a trivial matter. Many OERs come in the form of lecture or tutorial notes, and as such are decontextualised, piecemeal trinkets. If you’re already at that part of the learning journey, another take on the “Mech Eng Structures, week 7” lecture might help. If you want to know, out of nowhere, how to work out the deflection of a shaped beam, finding some basic lecture notes – and trying to make sense of them – only gets you so far; other pieces (such as the method of superposition) seem to be required. Which is to say, you also need the backstory and a sensible trail that can walk you up to that resource so that you can start to make sense of it. And you might also need other bits of knowledge to answer the question you have to hand. (Which is where textbooks come in again – they embed separate resources in a coherent knowledge structure.)
Fourthly, to guard against commercial constraints on its activities, the open access library should explore open sustainability, such as being built on, and contributing to, the development of open infrastructure (see also Principles for Open Scholarly Infrastructures; I don’t know whether things like the Public Knowledge Project (PKP) would count as legitimate technology parts of such an infrastructure – presumably things like CKAN and EPrints would?).
Sixthly, the open access digital library could provide access to online applications or online digital workbenches (of which, more in another post). For example, I noticed the other day that Bryn Mawr College provide student access to Jupyter (IPython) notebooks. Several years ago, the OU’s KMi made RStudio available online to researchers as part of KMi Crunch, and so on. You might argue that this is not really the role of the library – but physical academic libraries often provide computer access points to digital services and applications subscribed to by the university on behalf of the students, student desktops replete with the software tools and applications the student needs for their courses. If I’m an open access learner with just a netbook or a tablet, I can’t install desktop software on my computer even if I want to.
Seventhly, there probably is a seventh, and an eighth, and maybe even a ninth and tenth, but my time’s up for this post… (If only there were room in the margin of my time to write this post properly…;-)
As I scanned my feeds this morning, a table in a blog post (Thoughts on KOS (Part 3): Trends in knowledge organization) summarising the results from a survey reported in a paywalled academic journal article – Saumure, Kristie, and Ali Shiri. “Knowledge organization trends in library and information studies: a preliminary comparison of the pre-and post-web eras.” Journal of information science 34.5 (2008): 651-666 [pay content] – really wound me up:
My immediate reaction to this was: so why isn’t cataloguing about metadata? (Or indexing, for that matter?)
In passing, I note that the actual paper presented the results in a couple of totally rubbish (technical term;-) pie charts:
More recently (that was a report from 2008 on a lit review going back before then), JISC have just announced a job ad for a role as Head of scholarly and library futures, to “provide leadership on medium and long-term trends in the digital scholarly communication process, and the digital library”. (They didn’t call… You going for it, Owen?!;-)
The brief includes “[k]eep[ing] a close watch on developments in the library and research support communities, and practices in digital scholarship, and also in digital technology, data, on-line resources and behavioural analytics” and providing:
Oversight and responsibility for practical projects and experimentation in that context in areas such as, but not limited to:
- Digital scholarly communication and publishing
- Digital preservation
- Management of research data
- Resource discovery infrastructure
- Citation indices and other measures of impact
- Digital library systems and services
- Standards, protocols and techniques that allow on-line services to interface securely
So: the provision of library services at a technical level, then (which presumably also covers things like intellectual property rights and tendering – making sure the libraries don’t give their data and their organisation’s copyrights to the commercial publishers – but perhaps not providing a home for policy and information ethics considerations such as algorithmic accountability?), rather than identifying and meeting the information skills needs of upcoming generations (sensemaking, data management and all the other day to day chores that benefit from being a skilled worker with information).
It would be interesting to know what a new appointee to the role would make of the recently announced Hague Declaration on Knowledge Discovery in the Digital Age (possibly in terms of a wider “publishing data” complement to “management of research data”), which provides a call for opening up digitally represented content to the content miners.
I’d need to read it more carefully, but at the very briefest of first glances, it appears to call for some sort of de facto open licensing when it comes to making content available to machines for processing by machines:
Generally, licences and contract terms that regulate and restrict how individuals may analyse and use facts, data and ideas are unacceptable and inhibit innovation and the creation of new knowledge and, therefore, should not be adopted. Similarly, it is unacceptable that technical measures in digital rights management systems should inhibit the lawful right to perform content mining.
The declaration also seems to be quite dismissive of database rights. A well-put-together database makes it easier – or harder – to ask particular sorts of question, and to a certain extent reflects the amount of creative effort involved in determining a database schema, leaving aside the physical effort involved in compiling, cleaning and normalising the data, which is what secures the database right.
Also, if I was Google, I think I’d be loving this… As ever, the promise of open is one thing, the reality may be different, as those who are geared up to work at scale, and concentrate power further, inevitably do so…
By the by, the declaration also got me thinking: who do I go to in the library to help me get content out of APIs so that I can start analysing it? That is, who do I go to for help with “resource discovery infrastructure” and, perhaps more importantly in this context, “resource retrieval infrastructure”? The library developer (i.e. someone with programming skills who works with librarians;-)?
And that aside from the question I keep asking myself: who do I go to to ask for help in storing data, managing data, cleaning data, visualising data, making sense of data, putting data into a state where I can even start to make sense of it, etc etc… (Given those pie charts, I probably wouldn’t trust the library!;-) Though I keep thinking: that should be the place I’d go.)
The JISC Library Futures role appears silent on this (but then, JISC exists to make money from selling services and consultancy to institutions, right, not necessarily helping or representing the end academic or student user?)
But that’s a shame; because as things like the Stanford Center for Interdisciplinary Digital Research (CIDR) show, libraries can act as a hub and go to place for sharing – and developing – digital skills, which increasingly includes digital skills that extend out of the scientific and engineering disciplines, out of the social sciences, and into the (digital) humanities.
When I started going into academic libraries, the librarian was the guardian of “the databases” and the CD-ROMs. Slowly access to these information resources opened up to the end user – though librarian support was still available. Now I’m as likely to need help with textmining and making calendar maps: so which bit of the library do I go to?
A week late in posting this, catching up with Brian’s notes on the ILI 2013: Future Technologies and Their Applications Workshop we ran last week, and his follow up – What Have You Noticed Recently? – inspired by not properly paying attention to what I had to say, here are a few of my own reflections on what I heard myself saying at the event, along with additional (minor) comments around the set of ‘resource’ slides I’d prepped for the event, though I didn’t refer to many of them…
- slides 2-6 – some thoughts on getting your eye into some tech trends: OU Innovating Pedagogy reports (2012, 2013), possible data-sources and reports;
- slides 6-11 – what can we learn from Google Trends and related tools? One big thing: the importance of segmenting your stats; means are often meaningless. The Mothers’ Day example demonstrates two separate signal causes (in different territories – i.e. different segments) for the compound flowers trend. The Google Correlate example shows how one signal may lead – or lag – another. So the questions: do you segment your library data? Do you look for leading or lagging indicators?
- slides 12-18 – what role should/does/could the library play in developing the reputation of the organisation’s knowledge producers/knowledge outputs, not least as a way of making them more discoverable? This builds on the question of whose role it is to facilitate access to knowledge (along with the question: facilitate access for whom?) – my take is that this fits in with the role librarians often take of organising an institution’s knowledge.
- slides 19-27 – what is a library for? Supporting discovery (of what, by whom)? (Helping others) organise knowledge, and gain access to information? Do research?
- slides 28-30 – the main focus of my own presentation during the main ILI2013 conference (I’ll post the slides/brief commentary in another post): if the information we want to discover is buried in data, who’s there to help us extract or discover the information from within the data?
- slides 31-32 – sometimes reframing your perception of an organisation’s offerings can help you rethink the proposition, and sometimes using an analogy helps you switch into that frame of mind. So if energy utilities provide “warm house” and “clean, dry clothes” service, rather than gas or electricity, what shift might libraries adopt?
- slides 33-39 – a few idle idea prompts around the question of just what is it that libraries do, what services do they provide?
- slide 40 – one of the items from this slide caused a nightmare tangent! The riff started with a trivial observation – a telling off I received for trying to use the camera on my phone to take a photo of a sign saying “no cameras in the library”, with a photocopier as a backdrop (original story). The purpose of this story was two-fold: 1) to get folk into the idea of spotting anachronisms or situations where one technology is acceptable where an equivalent or alternative is not (and then wonder why/what fun can be had around that thought;-); 2) to get folk into wondering how users might appropriate technology they have to hand to make their lives easier, even if it “goes against the rules”.
- slide 41 – a thought experiment that I still have high hopes for in the right workshop setting…! if you overheard someone answer a question you didn’t hear with the phrase “did you try the library?”, what might the question be? You can then also pivot the question to identify possible competitors; for example, if a sensible answer to the same question is “did you try Amazon?”, Amazon might be a competitor for the delivery of that service.
- slide 42 – this can lead on from the previous slide, either directly (replace “library” with “Amazon” or “Google”), or as way of generating ideas about how else a service might be delivered.
Slide not there – a riff on the question of: what did you notice for the first time today? This can be important for trend spotting – it may signify that something is becoming mainstream that you hadn’t appreciated before. To illustrate, I’ve started trying to capture the first time I spot tech in the wild with a photo, such as this one of an Amazon locker in a Co-Op in Cambridge, or a noticing from the first time I saw video screens on the Underground.
As with many idea generating techniques, things can be combined. For example, having introduced the notion of Amazon lockers, we might then ask: so what use might libraries make of such a system, or thing? Or if such things become commonplace, how might this affect or influence the expectations of our users??
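The leading/lagging indicator question from the Google Trends/Correlate slides can also be sketched with invented data: shift one series against the other and see at which lag the correlation peaks:

```python
# Two invented monthly series: the hunch is that enquiries lead loans
# by a couple of months.
enquiries = [5, 9, 14, 20, 16, 11, 7, 6]
loans = [3, 4, 6, 10, 15, 21, 17, 12]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlate enquiries against loans shifted back by each candidate lag;
# the lag with the highest correlation suggests how far one series leads.
best_lag = max(range(4), key=lambda lag: pearson(enquiries[:len(enquiries) - lag],
                                                 loans[lag:]))
print(best_lag)  # with this made-up data, enquiries lead loans by 2 months
```

On real library data you’d want more than eight points and a sanity check against seasonality, of course – but even this toy version makes the “do you look for leading or lagging indicators?” question concrete.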
I thought this was handy on the OER-DISCUSS mailing list:
Our copyright officer writes:
… US Copyright ‘Fair Use’ or S29 copying for non-commercial research and private study, which allows copying – but the key word here is ‘private’, i.e. the provisos are that you don’t make the work or copies available to anyone else.
Although there are UK Exceptions for education, they are very limited or obsolete.
S.32 (1) and (2A) do have the proviso “is not done by reprographic process”, which basically means that any copying by mechanical means is excluded, i.e. you may only copy by hand.
S36 educational provision in law for reprographic copying is
a) only applicable to passages in published works i.e. books journals etc and
b) negated because licences are now available (S.36 (3))
S.32 (2) permits only students studying courses in making Films or Film soundtracks to copy Film, broadcasts or sound recordings.
The only educational exception students can rely on is s.32(3) for Examination, although this also is potentially restrictive. For the exception to apply, the work must count towards their final grade/award, and any further dealing with the work after the examination process becomes infringement.
I’m not sure how they are using Voicethread, but if the presentations are part of their assessed coursework and only available to students, staff and examiners on the course, they may use any Copyright protected content, provided it’s all removed from availability after the assessment (not sure how this works with cloud applications though)
There is also exception s.30 for Criticism or Review, which is a general exception for all, and the copying is necessary for a genuine critique or review of it.
If the students can’t rely on the last 3 exceptions, using Copyright-free or licenced material (e.g. Creative Commons) would be highly recommended.
Kate Vasili – Copyright Officer, Middlesex University, Sheppard Library
One of the possible barriers to widespread adoption of open notebook science is knowing where to start. Video reports of lab experiments hosted on Youtube can be easily embedded in a hosted WordPress blog; a MediaWiki wiki can be used to provide one page per experiment, with change tracking/history on each page and a shadow page for commentary and discussion; Github can be used to provide a version control environment for software code, results data, project pages and documentation. For tabulated data, Google Spreadsheets provides a hosting environment and an API that lets you treat the data as a database and also explore it dashboard style via a range of interactive visual filtering and charting components. Alternatively, a CKAN instance (such as is used to run thedatahub.org) offers data management and preview tools.
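To illustrate the “spreadsheet as database” idea: a published Google spreadsheet can be exported as CSV (via a URL of the form https://docs.google.com/spreadsheets/d/<KEY>/export?format=csv), after which it can be queried like any other tabular dataset. In the sketch below a local string stands in for the downloaded content, so it runs offline; the experiment/run/yield columns are invented:

```python
import csv
import io

# A local string stands in for the CSV you would fetch from the sheet's
# export URL; the experiment/run/yield columns are invented.
downloaded = (
    "experiment,run,yield\n"
    "pilot,1,0.42\n"
    "pilot,2,0.47\n"
    "main,1,0.55\n"
)

rows = list(csv.DictReader(io.StringIO(downloaded)))

# Database-style filtering over the sheet: mean yield across the pilot runs.
pilot_yields = [float(r["yield"]) for r in rows if r["experiment"] == "pilot"]
mean_yield = sum(pilot_yields) / len(pilot_yields)

print(mean_yield)
```

Swap the string for a real HTTP fetch of the export URL and the open notebook’s tabulated data becomes queryable by anyone.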
Keeping track of data analysis in an open way is also getting easier. In An R-chitecture for Reproducible Research/Reporting/Data Journalism, I briefly mentioned RPubs.com, a site that can be used to 1-click publish HTML reports of statistical analyses executed within the RStudio environment (I really need to do a proper post about this). But now there’s an example of another hosted solution from Fridolin Wild of the OU’s KMi: Crunch.
Crunch offers a hosted RStudio environment (so you can access RStudio via a browser) with public and private areas. The public areas allow you to post datasets, run scripts as a service, or publish results (Sweave generated PDFs, or knitr generated HTML reports, for example).
Crunch also incorporates a MySQL database for each user. (Scheduling and pipelining are also on the cards…)
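The Sweave/knitr publishing pattern mentioned above – compute the results, then interpolate them into the report so the whole document can be regenerated whenever the data changes – can be caricatured in a few lines of Python (an analogue only, since Crunch itself is R-based; the figures are invented):

```python
from statistics import mean
from string import Template

# Invented monthly download counts for a repository item.
downloads = [12, 18, 9, 30, 21]

# The knitr/Sweave idea in miniature: results are computed, then
# interpolated into the report, so the whole document can simply be
# regenerated whenever the underlying data changes.
report = Template("""<html><body>
<h1>Download report</h1>
<p>$n months of data; mean downloads per month: $avg.</p>
</body></html>""").substitute(n=len(downloads), avg=mean(downloads))

print(report)
```

The reproducibility win is that the prose and the numbers can never drift apart: rerun the script and the report is fresh.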
Whilst developed as an application to support learning analytics (I think?), Crunch also provides a great demonstration of a more general open research data workbench. You can store – and publish – data sets, along with analysis scripts and reports generated by executing those scripts over your data set. Version control isn’t available at the moment (I think?) but RStudio does have git/github support, so that may be coming. The provision of a MySQL database means that data collections can be managed within a database environment. (From a data journalism, rather than an open/reproducible research, perspective, I did wonder whether it would be possible to situate something like Scraperwiki on the same platform and replace its SQLite support with MySQL support, so a Scraperwiki scraper could be used to scrape data into a MySQL database that was then accessed from RStudio? Being able to wire MySQL read/write access into Google Refine on the same platform could also be interesting..;-)
I’m not sure about the extent to which the OU Library is taking an interest in the development of Crunch, but providing best practice support and advice in the orchestration of information and data handling tools seems to me to be in-scope for the academic research librarian, in much the same way as advising on the use of bibliography data management tools used to be…? (For a recent take on this, see Dorothea Salo’s recent Ariadne article Retooling Libraries for the Data Challenge.)
I had the honour of being invited to talk at the JIBS User Group 20th Anniversary AGM yesterday, and as well as having a bit of a rant in the closing plenary about opening up and making internal reuse of data and making FOI requests about SCONUL data*, I also gave this sideways take on Ranganathan’s Five Laws of Library Science for the current age (The Frictionless Library).
Amongst other things, the presentation sketches a possible project (one that I think could make for a good workshop day) revisiting each of the laws in a network context using the various techniques of constitutional interpretation, and (briefly) revisiting at least one of the notions of the Invisible Library (see also The Invisible Library (ILI, 2009), another meaningless set of slides…;-)
* Note to self: read up about the current HESA HE Information Landscape Project (Redesigning the higher education data and information landscape). Also check out the “KB+” JISC project (programme?) that will “develo[p] a shared community service that will improve the quality, accuracy, coverage and availability of data for the management, selection, licensing, negotiation, review and access of electronic resources for UK HE” (via @benshowers) and the Talis Aspire Community Edition (aggregated reading lists across several HEIs).
PS I’m working out how to make the slides a little bit more useful as a post hoc/legacy resource by posting them with a bit of context and commentary… But it may take a bit of time…
PPS on the way home, I listened to this Long Now Foundation seminar by Brewster Kahle on Universal Access to All Knowledge, which got me wondering about the extent to which University libraries are depositing resources into the Internet Archive..? There’s a nice piece at the end that makes the point that IPR is such that in terms of the digital record, there’s likely to be a gap in the timeline of archived content right around the 20th century…
PPPS as far as library futures go, here’s a loosely related Roadmapping TEL activity on “Ideas that influence the future of technology enhanced learning” that is currently running on Ideascale.
There were also several discussions during the day relating to information skills needs for 21st century librarians. Some of the ANCIL reports from the Arcadia project on a new information literacy curriculum may be of interest to JIBS members in this regard, I think? Arcadia Project Report
I think there’s a real need for librarians to help folk make sense of the wealth of data out there, and this in part requires a good understanding of network structures and organisations, not just a concentration on hierarchical models.
Hear (sic) also, for example, OU Vice Chancellor Martin Bean on ‘sensemaking’ and the role of the library from his 2010 ALT-C Keynote:
I think it’s also time to start seeing people as information and knowledge resources, as well as just texts…