Saturday, November 30, 2013

And another fail scenario for EZproxy: GeoScienceWorld and the Google Maps API

This issue was resolved by GSW on December 8

To borrow a line from Jonathan Rochkind's Bibliographic Wilderness: 'EZProxy is terrible, it's just the best we've got.'

I don't think it's terrible, but it's always been a pain that TOC services don't work off campus and that RSS feeds occasionally don't work at all. This week, though, I got something new.

GeoScienceWorld's search interface includes a world map that uses the Google Maps API. If we access the search page via EZproxy (and we default all users through EZproxy, even on campus) Google Maps refuses to serve data, because the API key is not registered for use in our domain (i.e. it works if you strip out the EZProxy suffix '')
This is the message:
Google has disabled use of the Maps API for this application. This site is not authorized to use the Google Maps client ID provided. If you are the owner of this application, you can learn more about registering URLs here:
This happens as soon as you try to access GSW's search, because the map is built into the search form, even if you don't want to use it. The message appears in a dialog box and scares off most users, who don't realise that clicking 'OK' will get them to vanilla searching (sans map). Even then search looks like it might be broken, because the map disappears, leaving a worryingly blank blue square.

If we remove the proxying from the link on our A-Z of databases pages it will work for on-campus users, but off-campus users won't get access to the website at all because of the IP restriction.

I haven't worked with the Google Maps API, but my reading of their authorisation doco suggests there isn't a simple workaround. Preventing other sites from using your app is intentional; in a library context it's a barrier. I'm going to guess that GSW isn't interested in registering every subscriber's EZproxy domain, even if that's possible.

Will be getting our people to talk to their people to see what can be done, and if anyone else has mentioned it.

People have suggested that in the future the EZproxy fail situations might be resolved by Shibboleth, or some other middle-tier authentication broker. Right now I'm suggesting that power users use the VPN and strip the EZproxy suffix. Being a specialist resource, most users should be easy to identify (Earth Sciences), and therefore easy to inform of the problem and workarounds.
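For power users (or anyone scripting the workaround), stripping a hostname-rewriting EZproxy suffix is just string surgery on the URL. A minimal sketch in JavaScript, using a hypothetical suffix of `.ezproxy.example.edu` standing in for our real one:

```javascript
// Hypothetical EZproxy hostname suffix; substitute your institution's own.
const PROXY_SUFFIX = ".ezproxy.example.edu";

// Remove the EZproxy suffix from a proxied URL's hostname.
function stripEzproxy(url) {
  const u = new URL(url);
  if (u.hostname.endsWith(PROXY_SUFFIX)) {
    u.hostname = u.hostname.slice(0, -PROXY_SUFFIX.length);
  }
  return u.toString();
}

console.log(stripEzproxy("https://www.geoscienceworld.org.ezproxy.example.edu/search"));
// -> https://www.geoscienceworld.org/search
```

Of course the unproxied URL then only works if your IP is in the publisher's allowed range (on campus or on the VPN).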

Will comment if GSW get back to us.

Monday, November 25, 2013

Google Scholar, WoS and Informit

Not sure why Google Scholar has suddenly intersected with my job again after a seeming hiatus. The renewed activity seems to indicate Google is committed to developing and maintaining Scholar (I have already seen some conspiracy-theory postings about this being the next step in googlizing the information universe now that the Google Books lawsuit has been dismissed).

Anyway at least I have a theme uniting a bunch of thoughts.

Google Scholar as Research Platform

I don't know when it happened but Google Scholar has upped the ante on its functionality and is horizontally integrating services provided to researchers.

They've introduced:
  • A web-based citation manager (My Library)
  • A quasi research portfolio (a list of your research output that appears in Scholar)
  • An alerting service (you get emails when new items match your criteria)
  • H5 metrics
While fiddling around I found that in settings you can configure Scholar to show 'Export to EndNote' links (also BibTeX, RefMan and RefWorks).

All you need for this functionality is a Google Account.

Google Scholar home page showing services

WoS (Wusses Out Speedily)

First there was Thomson Reuters' (TR) announcement last week that they were going to integrate Web of Science (WoS) with Google Scholar and simultaneously stop integrating it with the other 'discovery layers' (Summon, Primo and EBSCOhost).

TR's email stating that this was all about improving user experience didn't gel with my version of reality, and I suspected (wrongly, I assume) that this was an example of Google being evil: that TR had to be exclusive to Google to jump on their bandwagon. But after a couple of days of whingeing tweets, blogs and listserv emails (and who knows what unpublic communication) from libraryland, TR announced that it would continue working with the other discovery layers. That the new announcement came so quickly seemingly disproves any exclusivity agreement with Google; it must have been a business decision to lessen their workload and thus increase profit. I'm pretty sure no discount would have come had they proceeded with discontinuing discovery layer support.

Anyway, apart from some raised blood pressure, no harm done. Now we wait and see how Google Scholar uses the WoS data. I'm pretty sure it will still require an institutional subscription for a user to see the data. I assume IP restriction will be used, so citation counts will appear much like our link resolver does if you access Scholar on campus or through EZproxy.

Given that Scholar already has citation counts it may be that WoS will simply replace whatever Scholar is doing to create those counts.

Not so sure whether users will be able to attach themselves to an institutional subscription without EZproxy (like you currently can in your Scholar Settings).


The Summon developers announced that the Informit suite is now included in their unified index. I found a problem with full-text links to Find It@JCU for previous titles/ISSNs, and I wanted to see if Scholar was building OpenURLs that worked, only to discover that Informit is a 'special case' in Scholar. If you click on the title of a citation harvested from Informit, Scholar takes you to the full text in Informit. That works great if you are on campus; not sure what happens if you aren't.
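For context, the OpenURLs I was checking are just key-value query strings pointed at a link resolver. A rough sketch of building one in JavaScript, with a hypothetical resolver base URL standing in for Find It@JCU's real one:

```javascript
// Hypothetical link resolver base URL; substitute your own resolver's.
const RESOLVER_BASE = "https://resolver.example.edu/openurl";

// Build a minimal OpenURL 1.0 (KEV) query for a journal article citation.
function buildOpenUrl({ issn, volume, spage }) {
  const params = new URLSearchParams({
    url_ver: "Z39.88-2004",
    rft_val_fmt: "info:ofi/fmt:kev:mtx:journal",
    "rft.issn": issn,
    "rft.volume": volume,
    "rft.spage": spage,
  });
  return `${RESOLVER_BASE}?${params.toString()}`;
}

console.log(buildOpenUrl({ issn: "1038-1562", volume: "12", spage: "45" }));
```

If the source sends a superseded ISSN for an earlier title, the resolver can fail to match it against current holdings, which is exactly the previous-titles/ISSNs problem described above. The ISSN used here is illustrative only.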

I've submitted a request to RMIT Publishing asking that they work with Scholar to provide some sort of visual indication in the SERP that full text is available (it looks like a typical citation-only hit: there is no link in the right column to either the link resolver or a URL).

Google Scholar Informit citation without full text indicator

Wednesday, August 7, 2013

My UX Frustrations Visualised

I am turning into a cranky old man - I can only apologise to my colleagues for having to put up with my tirades. They seemed to like Steve Krug's 'Don't Make Me Think', but the home page remains a regular battleground. I know compromises have to be made, but the compromises always seem to cost the users. So, to vent some steam and publicly apologise for my crankiness, I resort to faux visualisation:

This is what you want, this is what you get CC-BY-SA Alan Cockerill

Monday, July 29, 2013

Shifting shores of digital music

Just some random thoughts sparked by seeing the 'Spotify is killing iTunes' story in the Australian Financial Review that got a mention on ABC24 this morning.

First, it underlined what you read in the tech press about the future all the time, i.e. that things are changing ever more quickly.

That the iTunes store has moved from being the snarky young punk of the music distribution business to an overlord in decline, in barely a decade, is semi-startling.

The ultra-personalised digital world is here, well it's been here for a while but now it's slapping us in the face.

There are some pretty obvious parallels between the ebook and music publishing businesses. What significance does the apparent coming triumph of streaming/rental over download/own have for libraries?

Does our ingrained love of the book (the owned object containing fixed information) have as much cachet with Gen Z as it does with boomers? Will information be completely fluid, will all knowledge be a constantly moving mashup?  Maybe it is already. Maybe it always was.

Extrapolating on the moves:
  • from ownership, to rental, to instant access;
  • from album to song;
  • from labels dictating taste on a large scale to an explosion of gatekeepers directing increasingly specialised taste groups;
  • from music news via formal publishing and broadcast channels to social media-enhanced word-of-mouth;
  • from the recording being the income generator to being merely a promotional tool for gig attendance;
I had a vision.

A vision where a cadre of multitalented individuals visit tiny online communities, opening their ears to great sounds from far off (out) places and moving to the next community.  Ladies and gentlemen, I give you the digital troubadour.

Thursday, July 18, 2013

Random thought: The limits of Google Analytics - it's just data, not information

I'm still wading through Google Analytics to get an idea of how the web site is used so I can have some 'evidence-based' proposals for the site home page and persistent navigation.

One thing that I'm pondering after my last blog post is referrals from Search Engines. If a page is linked to much more often from a search engine results page (SERP) than from another page in your site does that indicate your site is failing the user, or that the user prefers to use a search engine?

If a page is a common exit point does that mean it satisfied the user need, or did it just frustrate them enough to give up?  Does an elongated 'time on page' mean the content engrossed the reader or that they glazed over into catatonia?

If your total page hits go down after a redesign, does that mean you lost popularity, or that your site is providing the required content with far fewer clicks?

I think I could argue two opposite sides to almost anything Analytics appears to hint at. Each supposition on a facet of Analytics would probably make an ideal topic for a formal debate in the vein of 'can money buy happiness' or 'is honesty the best policy'.

The answer is that site analytics on their own are ambiguous guides to user behaviour; you really need to observe, consult and 'know' your users.

Analytics' value is in aggregating data so you can visualise behaviours, prompting you to formulate questions like WHY IS IT DOING THAT? IS THAT A GOOD THING?

If only users could be consulted in large numbers at any time that was convenient to me.

PS GA has some tables of Google queries mapped against how often a page from your site appeared in a SERP and how often one of your pages was clicked on, referred to as CTR (click-through rate). It makes for interesting perusing. Maybe one approach would be to interpret the user's goal from the search, and then see how closely the target page matched or referred to that goal: an iterative approach that would probably improve user experience over time, though it would be difficult to evaluate the impact.
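For clarity, CTR is simply clicks divided by impressions. A toy calculation in JavaScript (the numbers are made up, not our actual GA data):

```javascript
// Click-through rate as a percentage of SERP impressions.
function ctr(clicks, impressions) {
  return impressions === 0 ? 0 : (100 * clicks) / impressions;
}

// e.g. a page shown 2,000 times in SERPs and clicked 150 times
console.log(ctr(150, 2000)); // -> 7.5
```

A low CTR on a high-impression query is the interesting case: Google thinks your page answers the query, but searchers don't.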

Analysis of that information is making me think about using Summon's Best Bets

Thursday, July 11, 2013

Google Analytics 101: tracking down causes of page hit anomalies

I'm no Google Analytics guru, by any stretch, but over time my understanding and GA's powers seem to be incrementally increasing.  This post is about how GA helped me understand why a particular page ranks high in page hits on our site. If you are not very familiar with GA it may help you a little.

I'm preparing to revamp our web site to work with new responsive web design templates - these will significantly change the access points to our information architecture, but not the architecture itself. Anyway, part of my prep is confirming what clients are accessing most often and greasing the path to it.

This page comes to my attention:
Types of Information Sources - Primary, Secondary, Tertiary & Refereed Journals
Pretty dry supplementary information for information literacy programs I thought. Might get a few clicks at the start of each semester, maybe.

According to GA it was the twelfth most popular page on our website in the first 6 months of 2013 (of the 1106 pages currently on the site), with over 12,000 unique page hits.

'Oh noes' I think. Do I have to put a link to it in a prominent place? Who is using this? Why?

So first I use GA to get a sense of how people are getting to this page, using the Content and Navigation Summary features (see screencast below).

So what can I learn about why people are being referred to a particular page? What does GA tell us about Google search referrals? Using Traffic Sources, Landing Pages, Search/Organic and Keywords ('nother screencast).

Mystery solved: that particular page appears at the top of the Google SERP for types of information. I can confidently stop worrying about whether it should be more prominent in our site structure and nav; the vast majority of its use comes from Google searches by the wider public.

Lesson learned - don't accept page hits as the concrete truth about how your primary clients are using your site.

Thank you Mister Google.

Monday, April 29, 2013

Our Imaginary Scorecard (Part 2 of Library Intranets in Academia: Chicken nipples and fish boots?)

Welcome to part 2 of this journey into an Academic Library Intranet Review.
In Part 1 I sketched out the technologies, content and management practices of our current Intranet.

What is an Intranet?

To me an intranet is a business tool that provides access (in the broadest sense) to information and applications using network protocols (in the broadest sense) to help staff achieve organisational goals (not going to say 'in the broadest sense' this time, but I will add the caveat that the assistance can be indirect: for example, if a corporate goal was to sell x widgets, it's conceivable that an intranet bulletin board for staff interaction could lead to a conversation that identified a previously untapped market segment in a different geographic location).

For maximum impact with minimum effort I think it is useful to consider what an Intranet is meant to do and compare that with our experience. I'm basing this on the Wikipedia description of an Intranet.

Imaginary Scorecard

So how does our Intranet stack up against the key functions laid out in Wikipedia in my hypercritical opinion?

1. Workforce productivity

Wikipedia says: Intranets can help users to locate and view information faster and use applications relevant to their roles and responsibilities (...) increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to the users.

My Subjective Score 2/10

Alan Says: With no search engine, and an information architecture partially based on organisational structure, finding relevant information is only slightly easier than on our team-silo file shares. Due to poor (or no) information management compliance there is lots of duplication, no way of checking whether the version of a document you have found is the latest, and massive document bloat from the archiving of items without thought to their lifecycle. When functions cut across team boundaries, more confusion and duplication ensues. All staff I've talked to bemoan the lack of a search engine, and new staff are very bemused when working out where to look for a document, often not realising there are team file shares as well as the Intranet. There are no codified rules for what type of information goes where.

2. Time

Wikipedia says: Intranets allow organizations to distribute information to employees on an as-needed basis; Employees may link to relevant information at their convenience, rather than being distracted indiscriminately by email.

My Subjective Score 3/10

Alan Says: While we are generally quite good at storing information that will be retrieved on an as-needed basis (various statistics, team reports, workshop videos, strategic plans and project monitoring documents), we fail at making it findable. Often we opt to link to these documents in emails to help people find them, so apart from the email storage space we save, we haven't really cut down indiscriminate email. Often staff aren't even aware that the Intranet holds the information they seek.

3. Communication

Wikipedia says: Intranets can serve as powerful tools for communication within an organization, vertically strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up-to-date with the strategic focus of the organization. Some examples of communication would be chat, email, and or blogs. A great real world example of where an intranet helped a company communicate is when Nestle had a number of food processing plants in Scandinavia. Their central support system had to deal with a number of queries every day. When Nestle decided to invest in an intranet, they quickly realized the savings. McGovern says the savings from the reduction in query calls was substantially greater than the investment in the intranet.

My Subjective Score 3/10

Alan Says: We are pretty good about top-down communication via our Intranet at the more strategic end of the spectrum, but on operational matters we fall down: there is no one clear channel for sharing current issues/workarounds/fixes. We've tried so many things; the JustBetweenUs newsletter has died another death after we tried rehosting it within Blackboard. We don't maintain any sort of searchable knowledge base of common issues/solutions. The documentation for a known issue is just as likely to be in a web page, LibGuide, intranet page, poster, printed handout, blog entry, email, streaming media outlet or someone's head.

Particularly disappointing was the University's rollout of ServiceNow (complete with CRM/KB functionality), which we were not invited to participate in.

Being multi campus and geographically isolated from each other we are trying various techniques to share information - we have sessions where local experts discuss something widely relevant (and these are recorded and stored on the Intranet) and we are increasingly using video conferencing and telepresence tools. We are also toying with twitter and tumblr as possible ways of more generally sharing internal news and professional development materials.

In this year's operational plan there are a skills audit and reviews of our internal communications and marketing which overlap fortuitously with an Intranet review.

4. Web publishing 

Wikipedia says: Allows cumbersome corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include: employee manuals, benefits documents, company policies, business standards, news feeds, and even training, can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet.

My Subjective Score 4/10

Alan Says: We do make procedures, training materials, even blog and twitter feeds available through the Intranet, but we fall down on the idea that the only version is the Intranet version. A lot of operational procedure documentation is stored on file shares, so we have little confidence that the version of a procedure we find is the latest, or the only, one.

There is patchy understanding across the organisation of information management practices as they relate to document structure and life cycle. Few staff use embedded metadata to record additional information about a document and its creation and maintenance. Word is our most popular document creation tool, HTML skills are not widely held, and the primacy of print is still evident. An understanding of the value of marking up content with heading hierarchies in Word documents is not widespread, although the public website through UCM does require a knowledge of style application for successful web publishing (Word source documents are harvested into XML, and UCM then publishes them as HTML for viewing, or Word for editing); the process cuts you off from higher Word and HTML functionality.

5. Business operations and management 

Wikipedia says: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.

My Subjective Score 5/10

Alan Says: We don't do a lot of this, although we do have some gadgets using PHP and JS to perform repetitive tasks that are vulnerable to human error if done manually (constructing proxied URLs, for example). Enterprise systems for the most part fall outside the scope of the Library Intranet, and are more likely provided by other business units' extranets (Finance reports and HR systems, for example). The Intranet has been used to develop packages for later deployment into Blackboard, but for the most part the current operating environment is a very cut-back Apache: no scope for introducing PHP libraries or MySQL, for example. We do that on other servers, as either public web or extranet.
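The proxied-URL gadget mentioned above boils down to prefixing the target with the EZproxy login URL. A sketch of that kind of helper in JavaScript (the proxy hostname here is hypothetical; ours differs):

```javascript
// Hypothetical EZproxy login prefix; substitute your institution's own.
const EZPROXY_LOGIN = "https://ezproxy.example.edu/login?url=";

// Wrap a database URL in the EZproxy login prefix.
function proxied(url) {
  return EZPROXY_LOGIN + encodeURI(url);
}

console.log(proxied("https://www.geoscienceworld.org/search"));
// -> https://ezproxy.example.edu/login?url=https://www.geoscienceworld.org/search
```

Having one tested helper beats hand-pasting the prefix, which is exactly the human-error case the gadgets exist for.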

6. Cost-effective 

Wikipedia says: Users can view information and data via web-browser rather than maintaining physical documents such as procedure manuals, internal phone list and requisition forms. This can potentially save the business money on printing, duplicating documents, and the environment as well as document maintenance overhead. [HR example removed]

My Subjective Score 7/10

Alan Says: Mostly we do not rely on large printed procedural manuals, although quite often staff prefer to print out a copy of an oft-repeated procedure, which can make promoting awareness of changes to master documents problematic, particularly if a staff member is absent when a change notification is circulated. We are long past needing a printed phone directory, but printed lists of library staff telephone numbers are still regularly distributed.

7. Enhance collaboration 

Wikipedia says: Information is easily accessible by all authorised users, which enables teamwork.

My Subjective Score 4/10

Alan Says: While all staff have access to the Intranet there is little document collaboration. Publishing is restricted to a small group of staff based on skills alone (everyone has write permissions). Dreamweaver use (or HTML editing) is not a widespread skill. The team-silo file shares have been partially opened up, but I doubt many staff have access to every share. The faculty and liaison librarians in Townsville and Cairns did initiate using the Intranet to share information literacy training materials (saving documents to it as a SAMBA file share, but navigating via the web using some simple scripts I hacked together to list files and traverse folders). Usage stats indicate it isn't getting much use. My gut tells me what document collaboration there is, is managed using email attachments and 'Track Changes'.

8. Built for one audience

Wikipedia says: Many companies dictate computer specifications which, in turn, may allow Intranet developers to write applications that only have to work on one browser (no cross-browser compatibility issues). Being able to specifically address your "viewer" is a great advantage. Since Intranets are user-specific (requiring database/network authentication prior to access), you know exactly who you are interfacing with and can personalize your Intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!").

My Subjective Score 0/10

Alan Says: We do have homogeneous IT (Windows 7), and although any browser is possible most staff use one of the three most popular (IE, Firefox and Chrome). The number of tablet computers is expanding as well; I think at least a quarter of staff would have one, or a smartphone. We have the technical capability of customising the Intranet experience based on the identity of the user, but this has never been done.

9. Promote common corporate culture 

Wikipedia says: Every user has the ability to view the same information within the Intranet.

My Subjective Score 8/10

Alan Says: Generally the higher levels of management are committed to placing documents on the Intranet where all staff can see them. I mark our Intranet down on how easily findable these documents are, and also subtract a mark because it appears our corporate culture is such that we would rarely look for those documents in any case.

10. Immediate updates

Wikipedia says: When dealing with the public in any capacity, laws, specifications, and parameters can change. Intranets make it possible to provide your audience with "live" changes so they are kept up-to-date, which can limit a company's liability.

My Subjective Score 2/10

Alan Says: Editing and adding items is cumbersome, there is no organisational commitment to using the Intranet for this purpose, and generally it is not in our corporate DNA to look for information when we can ask someone else.

11. Supports a distributed computing architecture

Wikipedia says: The intranet can also be linked to a company’s management information system, for example a time keeping system.

My Subjective Score 2/10

Alan Says: The current operating environment is restrictive in terms of coding possibilities, and generally management information systems are either rudimentary (if in house) or are black boxes controlled by other business units with other priorities.

Library Intranets in Academia: chicken nipples and fish boots? Part 1 - Introduction and Where We Are


Welcome to a rambling series of posts that will be equal parts catharsis, brain dump, diary and progress report of the Intranet Review/Redevelopment Project JCU Library is embarking on.

I will try and maintain backwards and forwards links in this series.

I've been thinking a lot about what an Intranet should/could be in an academic library setting.  I posted a request on Twitter for any intranet practitioners out there working in that space - I even got a retweet from the lovely James Robertson, of StepTwo Designs, doyen of Intranets in Australia who I learnt a lot from back when he helped form Intranet Peers In Government in Canberra in the early 2000s.  But not one response.

LinkedIn has an Intranet group, but no discussion about tertiary education or libraries. Literature searches show bubbles of interest around 2006 and 2012, coinciding with the 'Gartner hype cycle' rises in interest in Web 2.0 and then social media/gamification. Sadly I don't have immediate access to the Journal of Web Librarianship, which appeared to have some interesting recent work on academic library intranets.

A query to QULOC-ICT members about their intranets elicited no comments.

So just maybe I've found a niche no-one else cares to occupy, in Australia anyway.

More generally, recent hot topics around Intranets involve social media integration (tools like Yammer), gamification, collaboration, and whether the intranet and knowledge management are dead or just transformed.

So welcome to the first in a series of posts about my evolving thoughts on Intranets in Academic Libraries - focusing particularly on JCU Library and where we're headed.

Intranets/Extranets - the wikipedia explanation

First up the boring status description:

What We Have

Our intranet is a Dreamweaver-managed site that uses Apache server-side includes to give consistent header/footer/navigation/CSS presentation. The includes have a little built-in JS trickiness: the Heading 1 at the top of each page is pulled from the title tag. It's a clone of how the entire University's web site used to look and work (even the colour scheme).
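That JS trickiness amounts to copying document.title into the page's empty Heading 1 on load. A guess at its shape (the site-name suffix being trimmed here is hypothetical, since the real include isn't shown):

```javascript
// Derive the H1 text from a page title, trimming a hypothetical site suffix.
function headingFromTitle(title) {
  return title.replace(/\s*-\s*Library Intranet$/, "");
}

// In the shared include, something along the lines of:
//   document.querySelector("h1").textContent = headingFromTitle(document.title);

console.log(headingFromTitle("Desk Rosters - Library Intranet")); // -> Desk Rosters
```

The upside is that editors only maintain one string per page; the downside is that pages render with an empty H1 if JS fails.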

It isn't managed centrally (if at all); bits are added as people see fit.

When I first arrived at JCU Library an irregular internal newsletter formed the home page, and the left nav got you into the meatier stuff. The structure is fundamentally based on the team silos that existed at the time it was built, with manually maintained A-Z, policy and form indexes. There's a photo gallery (actually several disconnected galleries) and a corporate area (with plans, strategies, committee minutes, monthly team reports etc). Over time other functions have been added to the main nav, e.g.:
  • Work Health and Safety
  • Web management (about half our staff are directly involved in the management of our web presence using the corporate CMS and some related tools like Accenture reports) 
  •  Marketing
  • Innovation and Creativity (basically a suggestion board and video and related information from our internal knowledge sharing sessions)
  • Integrated Desk project files
The files are also viewable as a network share (using SAMBA), and while some use it this way to store working files, the majority prefer using their own team share drives (yet more siloing).

There is no user customisation. Everything is served up as static HTML (apart from PHP directory-listing scripts I've used in folders containing periodic statistics, and to facilitate the sharing of IL documents by treating a subsection as a file share browsable via the web).

Only a small group are even moderately comfortable with Dreamweaver as an HTML editor, and as far as I know I'm the only one using any of its site management functions.

With around 50 EFTSU we are not big, but we generate as much data, policy and procedure as any Library I've worked with.

There is no formal regime for reviewing or removing aging content. During my site audit I found some internal newsletters from 1994, which was somewhat surprising for two reasons:
  1. that's the year Netscape was born - and about half our students.
  2. Some of the commentary on organisational change was still relevant
In this form it's about 10 years old, a clone of the technology that used to present the university's public web site before it was replaced by the current CMS (Oracle's Universal Content Management, UCM).

It has never had a search engine (aaaaaagh!). Usage averaged just over 2000 hits a month in 2012, and most of those were for a few key documents like rosters, with the occasional spike for documents linked from emails to all library staff.

Last year the 'newsletter' home page was replaced by RSS feeds from the Library's 'Community' on the JCU installation of Blackboard. This was a trial to see if it would work as a replacement for the old model of an irregular compilation of items converted into HTML by a single editor. Over time the gaps between issues were getting longer, and it was thought that enabling self-publishing would encourage more involvement, lessening the burden on the editor.

It was also hoped that its 'blog-like' nature would mean a constant stream of relevant, up-to-date content. There has been no formal evaluation, but it seems to have had no impact on currency or engagement. Items are added rarely and only by a small group of people; apparently Blackboard's wiki-ish interface is just as off-putting as Dreamweaver, or staff don't have the interest or time to contribute.

The newsletter had no clarity of purpose: it mixed the Director's messages about big-picture issues for the Library with conference reports, procedural changes, newly introduced services, library-hosted events, and much more social news (news from ex-staff, recipes, trivia, holiday photos etc). Some of the newsletter's implied purposes were:
  • Top down communication
  • Intercampus bonding (getting to know you and what you're doing)
  • Knowledge sharing (conference report backs)
A 2011 survey of staff opinions of the newsletter format and content didn't really endorse or refute this approach.
The whole Intranet has never been reviewed and, as far as I can tell, has no articulated purpose or raison d'etre, but I deduce the tacit point was/is:
  • Well, you have to have an intranet don't you?
  • Storage bin for:
    • Corporate documents like operational plans, organisational charts, reviews etc
    • Low-level administrative functions (desk rosters)
    • Operational documents (minutes of roughly 10 active committees, managers' monthly reports, documents from our biannual planning conferences)
    • General policy and procedure documents (collection management, donations, discards, publications, course accreditation, staff orientation)
    • Dumping ground for old images from events
    • Somewhere to store recipes
    • Local copies of system manuals
    • Some procedures ... maybe lots of procedures, but it is pretty clear the Intranet copy does not take primacy over the team file share copy.
    • Library systems documentation
    • Decade old documents (forms, procedures, images, signs, posters, promotional materials) that have never been given information or record management lifecycle consideration
    • Information literacy training materials
    • Web-based projects that had to be tested somewhere and were later published on other hosts (like Blackboard) or simply died on the vine
    • Passwords for systems used at service points 
    • Statistical time series for various services

Every new staff member I've talked to about it is 'unenthusiastic' about its utility.  My own observation is that it is largely a web browsable file share, and that it is just one of many uncoordinated file shares that generally sit in team silos.

In Part 2, Our Imaginary Scorecard, I'll look at what an intranet is in the wider world and rate ours against an ideal intranet.

Thursday, April 11, 2013

Comment on "Teacher Knows if You’ve Done the E-Reading" NYT article

The twittersphere told me about this article in the New York Times online by David Streitfeld, published April 8, 2013.

Basically it's about an online textbook provider, CourseSmart, providing analytics to lecturers that show individual students' engagement with the textbook.
I don't actually have a problem with this - but I do have some problems with how the data might be interpreted - particularly what's correlation and what's causation.

It brought to mind Dave Pattern's University of Huddersfield data analysis, which showed a correlation between academic success and the number of library books borrowed. It doesn't prove causation - maybe the type of people who borrow books are the type to shine under our methods of measuring academic achievement - it doesn't necessarily mean borrowing books will improve your grades.
Which in turn reminds me of the urban legend that social researchers had analysed data showing that the children of parents who owned Ferraris ranked significantly higher on national indicators for education, nutrition, health, employment prospects, income-earning potential and life expectancy - so they recommended buying a Ferrari for every LSES family in lieu of other forms of government assistance.
But my real point is about this apparently throwaway couple of paragraphs:
Adrian Guardia, a Texas A&M instructor in management, took notice the other day of a student who was apparently doing well. His quiz grades were solid, and so was what CourseSmart calls his “engagement index.” But Mr. Guardia also saw something else: that the student had opened his textbook only once.

“It was one of those aha moments,” said Mr. Guardia, who is tracking 70 students in three classes. “Are you really learning if you only open the book the night before the test? I knew I had to reach out to him to discuss his studying habits.”

There is no follow-up to this anecdote, and it wasn't until the end of the article that I realised there wasn't going to be. I had made the erroneous assumption that Mr. Guardia was going to talk to the student to find out how he succeeded without consulting the textbook - as the article moves on you come to realise that Guardia's apparent intention was to encourage the student to read the textbook for his own good.
What an odd but all too common approach - 'you're not succeeding the way we want you to - you should change', rather than 'this approach is succeeding, maybe we can learn about it, build on it and help other people'.

Tuesday, February 12, 2013

No Nerdvana: Mobile Devices and Remote Access to eJournal Subscriptions 101

As the academic year kicks in I'm getting queries referred to me about accessing particular eJournals via remote handheld devices (maybe iPads were the xmas present of choice for lecturers and researchers?).

I just wanted to outline how these work as part of my strategy to help our staff at service points who get these queries.

First up it's not rocket science - but it's not consistent across publishing platforms. I see three basic models of how publishers have approached this (but it's still early days in my learning curve).

Note: in case you didn't know, I've been manually maintaining a list of publishers who provide some sort of mobile interface, where I either point to the mobile version of the service or to the instructions for accessing it.

Model 1: Mobile Web Interface

Publisher provides the ejournal through a web site designed for small screens. In these cases, as long as the link to the publisher goes through EZproxy, all should be fine (remember: any link they can find in our web presence to a resource will authenticate them through EZproxy, including the catalogue, the link resolver, the ejournal portal, LibGuides and the A-Z of databases). So as long as they haven't Googled their way to the site there shouldn't be any problems.

Best thing about these is the platform isn't a barrier (although the browser might be).
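For staff who haven't seen one up close, an EZproxy 'starting point' link is just the proxy's login URL with the destination appended as a `url` parameter. A minimal sketch of building one - the proxy hostname `ezproxy.example.edu` is hypothetical, not our actual server:

```python
# Sketch: wrapping a publisher URL in an EZproxy starting-point link.
# The proxy hostname below is a placeholder; substitute your institution's.
from urllib.parse import quote

EZPROXY_PREFIX = "https://ezproxy.example.edu/login?url="

def proxied(url: str) -> str:
    """Return a link that authenticates the user via EZproxy
    and then forwards them to the target resource."""
    # Leave the characters EZproxy expects to see intact.
    return EZPROXY_PREFIX + quote(url, safe=":/?&=")

print(proxied("https://www.publisher.com/journal/current-issue"))
```

Any link built this way forces authentication before the user reaches the publisher - which is exactly why a link found via Google, lacking the prefix, fails off campus.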

Model 2: Publisher-specific Account

User has to create an account with the publisher - and that account has to be created from the JCU network so that the publisher can ensure the user is entitled to access our subscribed material. Once the user clicks on the link in the resulting confirmation email their account is 'approved' for off campus access.

Some publishers require the email account to have the same domain as the subscribing organisation. Using their JCU email is probably the logical choice for clients in any case.

The user downloads an app and uses the approved account (they are prompted for the details of the account they registered).

Account creation is usually very simple (just an email address and password). I feel silly saying it, but for obvious security reasons always advise clients not to use their 'real' JCU password. This can be a stumbling point, though, as our clients often use an email/password dialog to connect to other services (email and eduroam, for example).

Because you have to register your username from within the JCU network, off-campus clients can stumble at the registration point. If you know the registration page's address you can use the elibrary tool to create an EZproxied address for it, which should overcome that hurdle for off-campus users. Where we might have problems is if the registration page has a different domain from the publisher's and has not been set up for EZproxy access.
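For the curious: once a user is logged in, EZproxy typically rewrites resource hostnames by appending the proxy domain as a suffix (often with the dots swapped for hyphens under wildcard SSL certificates). A rough sketch of that rewrite, using a hypothetical proxy domain:

```python
# Sketch of hostname-style EZproxy rewriting (proxy domain is hypothetical).
# Ports and other edge cases are ignored for clarity.
from urllib.parse import urlsplit, urlunsplit

PROXY_DOMAIN = "ezproxy.example.edu"

def rewrite_host(url: str) -> str:
    """Rewrite a resource URL so traffic routes via the proxy host."""
    parts = urlsplit(url)
    # Dots in the original hostname become hyphens, then the
    # proxy domain is appended as a suffix.
    host = parts.hostname.replace(".", "-") + "." + PROXY_DOMAIN
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

print(rewrite_host("https://registration.publisher.com/signup"))
```

This is also why a registration page on an unconfigured domain breaks: EZproxy has no rewrite rule for it, so the request never routes through the proxy.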

Model 3: Device pairing

I've seen this a couple of times. Basically you have to install the app from the app store while within the JCU network. The app itself stores a magic cookie that lets the publisher's site know you have the right to access JCU Library subscribed material. The cookie expires over time and is renewed by using the app from within the JCU network.

I can't see a way of forcing app store access through EZproxy if all you have is a mobile device. So the device would physically have to be on campus (although I wonder if accessing eduroam at another site might be sufficient). 

If the user is tech enough you could suggest that they set up the JCU VPN on their desktop computer and share that internet connection with the handheld device - but it's not something I've tried, just a thought.


It's a selling point of BrowZine I hadn't really considered - because it uses EZproxy it overcomes the off-campus issues of the publisher platforms it accesses - and couples that with a single interface across multiple publishers (provided Third Iron have had the permission and time to develop a connection).

Summary of Common Problems

Q: App not available for user's platform
A: Try web site access

Q: Won't accept my password
A: Which password, and where? If the web site: are they using an EZproxied link? If an app: have they created a publisher account? If yes, are they using the password for the account they created? Does the app require in-network pairing?

Q: I never come on campus, I'm in another country, I need it now and I'm at the airport
A: There are workarounds for on campus registration for off campus users, described above

My two main tools for diagnosis (so far) are reading the publisher's instructions, and following them on a similar device.

As always, if you get stuck, ask me. I think we need to deal with any issues in this space quickly and professionally - small screen access is the next wave to crash on us so let's wax our boards.

Monday, January 21, 2013

Removing Library Jargon from our Home page - what Google Analytics tells us

In January I tend to catch up on all my statistics collection for the last year - and if I can, do some cursory analysis.

I've been using Google Analytics Campaigns to track the use of links on the Library's home page - which was  revamped around this time last year after usability testing with students.
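Campaign tracking of this kind works by appending `utm_*` query parameters to each home page link so Analytics can attribute the click to a source, medium and campaign. A small sketch of building such a link - the parameter values here are illustrative, not our actual campaign names:

```python
# Sketch: tagging a link with Google Analytics campaign parameters.
# Base URL and utm_* values are illustrative examples only.
from urllib.parse import urlencode

def campaign_link(base: str, source: str, medium: str, campaign: str) -> str:
    """Append utm_* parameters so Analytics reports attribute the click."""
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return f"{base}?{params}"

print(campaign_link("https://www.library.example.edu/catalogue",
                    "homepage", "link", "books-dvds-more"))
```

Each differently tagged link then shows up as its own row in the Campaigns reports, which is what makes the year-on-year click comparisons below possible.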

We finally bit the 'less library jargon' bullet. Here are some of the changes we made and the differences in clicks between 2011 and 2012.
  • The Catalogue and the more meaningless Tropicat were replaced by Books, DVDs & more: hits up 10%, bounce rate steady.
  • Reserve Online was replaced by Readings & Past Exams: hits up 100%, bounce rate down 25%.
  • Databases was replaced by Journal Articles: hits up 90%, bounce rate down 60% (but meaningless, as most links on the old target page were to external sites).

Interestingly, in the case of Databases we restored some button navigation labelled Databases at the request of some academic staff. Its hits are about a third of the number for the Journal Articles link. Overall use of our static listing of databases has dropped around 5%. Rather than going straight to the A-Z listing of databases, the new link leads to a page listing a number of resources and tips (including the A-Z).

The focus of this phase of review/redesign was very much on undergraduates, particularly first year.

Our user testing showed confusion about article searching with significant numbers going to the eJournal portal (an A-Z listing from Serials Solutions) browsing for a likely journal title, then browsing the issues for a relevant article.

Reserve Online had users thinking it was about reserving books, and students didn't draw the link between video materials and our catalogue.

Our corporate CMS template forces us into four columns - previously the column header for these links was 'Finding Resources', which we trimmed to 'Find'. That gave some added context to the link names and hopefully amplified our goal-based approach.

Overall the bounce rate for all home page links dropped 21%, a fair indication that users are having more success in finding what they are looking for when selecting a link.

We also have a Summon search box as the default centrepiece of our home page; its use continues to grow strongly.