The Snipping Tool is on your PC, waiting to make life a tiny bit better

If you already use the Snipping Tool, you know it's changed your life in a tiny way. You remember the days before you found it as extraordinarily wasteful. You shudder a little bit.

If you've NOT found the Snipping Tool before now: welcome. Everything up to now has been pre-Snipping Tool. You will remember this day.

The Snipping Tool allows you to draw a box around any section of your PC screen (or all of it) and then instantly saves whatever is in the box as an image. You can copy and paste that image into slides, posters, Twitter, etc etc - or save it as a JPG if you wish.

I know it doesn't sound like a big deal but trust me, when you prepare a lot of slides it saves AN AGE compared to taking the full print-screen then cropping. It's easier to set the margins just right than with cropping, too. So for screen-grabs in presentations, it makes things so much easier.

Here's a gif (I've never made a gif before) of the Snipping Tool doing its thing:

Look how quick it is to take the screengrab and then make it the background of the slide! Then just insert a text box, or an arrow, or a circle, and highlight the key things. Use it to get images of logos, websites, databases, stills from YouTube, and stills from your own videos to act as thumbnails and to use in social media. It's useful in so many ways and the few seconds it saves you each time really do add up. Pin it to your taskbar forthwith.

The Snipping Tool is on all PCs already, you don't have to install it. Go to the Start Menu, type 'Snip' and there it is. It's been there all along!

[Wildly off topic]: Drums and Drumming

This is very much a one-off post, I think, in that it's nothing to do with communication or libraries, but is instead about drumming, my other passion. I only ever get to play drums in rehearsals or gigs because I can't set a kit up at home, but I think about drums and music about 90% of the time... 

I started drumming when I was about 15 or 16, having previously played trumpet, and found it the most liberating and exciting thing I'd ever done. It all felt very natural and I got quite good quite quickly - sadly because I've not had lessons and don't have much self-discipline, I've not really improved that much since then!

I've played in all sorts of bands, spanning all sorts of musical styles - including a live drum & bass / jungle group, which was amazing - and for the last few years it's been nice to go back to where I started: a good old fashioned rock'n'roll covers band, called Lightbulb Moment. We're actually playing the prime 9pm slot at a small festival this weekend, and amazingly my favourite ever originals band that I've been in, Western Scifi, are reforming for this gig only to celebrate 15 years since we recorded our album. I cannot wait.

Western Scifi & Lightbulb Moment are playing at 8pm and 9pm on the main stage


In Lightbulb Moment we play the songs we really like rather than the usual party band stuff, and in Feb we went into the studio to record four videos - live takes of some great covers. They're finally online and I'm so excited about them I'm writing this drumming blogpost, and embedding them below.

I recently created a Drums section on this website. It's hidden in that it's not listed in the main navigation along the top, but if you're interested there are more videos, audio, and a drum-related bio, all accessible via the Drums homepage: ned-potter.com/drums/home.

Here are the vids. The first is Grace by Jeff Buckley - it's one of my favourite songs of all time (and that was the case before I had a daughter called Grace!) and because it's Jeff Buckley it's a really hard song to do well. But our vocalist, Chris Harte, is pretty amazing on this track and I'm so happy with how it came out. Unlike the other vids below this one has a very Ned-cam heavy visual mix! Thanks to Dave (our keyboard player, co-lead singer, 2nd guitarist and video creator...) for making this for me.

The next tune is something a little more conventional - Don't Matter by Kings of Leon. A short, sharp burst of rock.

I've never been a particular fan of Ocean Colour Scene but Dave brought this tune in for us to do and I really like it. It has twin vocals all the way through, it rocks along, and we made a nice stabby ending for the drums to mess about over the top of... This is called Hundred Mile High City.

And finally the last tune we recorded was Message in a Bottle by the Police. We were all a bit ragged by this point, and we'd only learned the song the previous night, and it's actually a different arrangement to the original version which complicates things further! But in the end it turned out okay. I even sing some of the 'Sending out an S.O.S's at the end...

So there are the new videos. If you've got this far, thank you for sticking with me on this drumming tangent... My entirely-drum-focused Instagram is @ned_potter, the same name as my Twitter.

Any other librarian musicians out there? Leave me a comment with some links to your stuff!


And finally, if you've made it this far, here's the header pic in full, which is my favourite picture of my drums which, forgive me, I really, really, love...


Ask yourselves, libraries: are surveys a bit bobbins?

We all agree we need data on the needs and wants of our users.

We all agree that asking our users what they want and need has traditionally been a good way of finding that out.

But do we all agree surveys really work? Are they really getting the job done - providing us with the info we need to make changes to our services?

Personally I wouldn't do away with surveys entirely, but I would like to see their level of importance downgraded and the way they're often administered changed. Because I know what it's like to fill in a survey, especially the larger ones. Sometimes you just tick boxes without really thinking too much about it. Sometimes you tell people what they want to hear.  Sometimes you can't get all the way through it. Sometimes by the end you're just clicking answers so you can leave the survey.

I made this. CC-BY.


How can we de-bobbins* our surveys? Let me know below. Here are some ideas for starters:

  1. Have a very clear goal of what the survey is helping to achieve before it is launched. What's the objective here? ('It's the time of year we do the survey' does not count as an objective)
     
  2. Spend as much time interpreting, analysing and ACTING ON the results as we do formatting, preparing and promoting the survey (ideally, more time)
     
  3. Acknowledge that surveys don't tell the whole story, and then do something about it. Use surveys for the big picture, and use UX techniques to zoom in on the details. It doesn't have to be pointless data. We can collect meaningful, insightful data.
     
  4. Run them less frequently. LibQual every 2 years max, anyone?
     
  5. Only ever ask questions that give answers you can act on
     
  6. Run smaller surveys more frequently rather than large surveys annually: 3 questions a month, with a FOCUS on one theme per month - that allows you to tweak the user experience based on what you learn
     
  7. Speak the language of the user. Avoid confusion by referring to our stuff in the terms our users refer to our stuff
     
  8. [**MANAGEMENT-SPEAK KLAXON**] Complete the feedback loop. When you make changes based on what you learn, tell people you've done it. People need to know their investment of time in the survey is worth it.

Any more?


*International readers! Bobbins is a UK term for 'not very good'.

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window


- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What's more, our discovery services even have a little tickbox that limits searches to peer reviewed articles, because they're the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

- It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

- It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

- It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

- It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. […] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it’s a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can’t necessarily locate and expose those problems on reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though ‘Truth’ is a simple, singular entity that’s lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to ‘find out the truth’, nor even to find out ‘a’ truth. It’s to test the validity of a hypothesis under certain conditions. Research will never let us say “This is what we know”, but only “This is what we believe - for now”.

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says ‘peer reviewed’ may be the greatest disservice we do to the thoughtful scepticism we seek to help develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that leads us to be able to speak truth to power.

The only joke is to think that peer review can stand in for that.

UXLibs II: This Time It's Political

At 9am on Day 2 of the UXLibs II conference, 154 information professionals sat in a large room feeling collectively desolate. I don’t want to be glib or melodramatic but the feeling of communal sadness at what had happened in the EU Referendum overnight felt to me akin to grief, like someone close to the conference had actually died the night before.

Was there anyone present who voted Leave? Possibly. But it seemed everyone was devastated. There were tears. UXLibs is, as Library Conferences go, relatively diverse (although it's still something we need to work on), not least because well over a third of the delegates - 60 this time around - are from outside England. Our North American and Singaporean friends felt our pain, our European friends were sad our country had chosen to leave them, and for the Brits it was already clear what an omnishambles the vote had caused.

The committee had met for an early breakfast to process how we should proceed. We agreed on two things: first, that however we all felt, organisers and delegates had to deliver the best possible conference experience in the circumstances; and second, that this was no time for neutrality. (In fact I was talking to Lawrie Phipps from JISC a little later that morning and we agreed that perhaps if so many libraries and educational institutions generally weren’t so neutral by habit, people might have a better idea of when they were being systematically lied to by politicians.) Conference Chair Andy Priestner was due to open the conference: say what you want to say, don’t hold back, we agreed. There had been a lot of jokes the day before - humour is an important part of the UXLibs conference as it leads to informality, which in turn most often leads to better and deeper communication, proper relationships – but there would be no attempt at making light of this. Don’t gloss over it. Don’t be glib. Don’t be neutral. But do be political.

So he was. You can read Andy’s reflections on his opening address here, and this is what he said:

Today is not a good day.

I’ve worried for several months about this moment in case unthinkably it might go the way it has gone. I am devastated. Everyone I speak to is devastated. This is a victory for fear, hate and stupidity.

But as Donna said yesterday when describing her experiences in Northern Ireland – ethnographers have to get on with it. WE have to get on with it. Perhaps it’s a good thing that we will all have less time to dwell on what has just happened. Perhaps it’s good that we’ll be busy.

What I do know for a fact is that we have to be kind to each other today however we might feel. Let there be hugs. Let there be understanding.

For me one of the most precious things about UXLibs is the networking and sharing we enjoy from beyond the UK. The collaboration across countries, the realisation that despite the different languages, cultures and traditions that we are all the same and can learn so much from each other.

But it’s too soon to be cheerful. It’s too soon for silver linings.

Today is not a good day.

I was proud of him.

And then Day 2 happened, and I was proud of EVERYONE. What an amazing group of people. Shelley Gullikson put it like this:

“Last year I said that UXLibs was the best conference I’d ever been to. UXLibs II feels like it might be the best community I’ve ever belonged to.”

Everyone found a way to help each other, support each other, make each other laugh, and work together – after Lawrie’s keynote the first thing on the agenda was the Team Challenge so no one could spend any time sitting in dejected silence, there was too much to do… Collectively, everyone not only got through the day but made it brilliant. It wasn’t a good day overall – a good conference doesn’t transcend political and socio-economic catastrophe. But it was the best day it could possibly be.


I attended the first UXLibs conference in 2015 and I was blown away by it. It felt like the organising committee had started from scratch, as if there were no legacy of how a conference should be, and designed it from the ground up. They kept some elements, the ones that work best, and replaced others with new and more engaging things, especially the Team Challenge. It was the best conference I’ve ever attended.

The follow up, UXLibs II, had something of an advocacy theme – as I put it at the conference, if UXLibs I was ‘how do we do UX?’ then UXLibs II was ‘how do we actually make it happen?’. As communication and marketing is something I do a lot of work around and, as Andy so kindly put it, he wanted to see if we’d actually get on and not hate each other if we worked together, I was invited to join him and Matt Borg as the main organising committee (although we had a huge amount of input from several other people in planning the event). This was in September last year; Andy and Matt had already been planning for a while and by October we had our first provisional programme in place.

Andy and Matt...


Matt and me...


I find organising events approximately three trillion times more stressful than speaking at them, and hadn’t got fully involved in putting on a conference since 2011 when I swore ‘never again’. But I couldn’t resist the chance to work with Andy and Matt because we are pretty much on the same page about a lot of things, but disagree on a lot of the details, which makes for an interesting and productive working arrangement. So, around 400 emails later, a couple of face-to-face meetings later, many online meetings using Google Hangouts later, we were in thestudio, Manchester for the event itself. At the end of the two days, despite the dark cloud of Brexit hanging over us, everyone seemed exhausted but fulfilled. We’d built the event around the community and what that community said it needed, and I think it worked. It’s a great community and I felt excited to be part of it – challenged, stimulated, and I’d echo the delegate who came up to me at the end and said she’d never laughed so much at a conference: it was FUN.

Several things made this conference different, for me, apart from just the content. There's the fact that all the delegates have to be active participants (they were 'doing doing' as I put it, somewhat to my own surprise and certainly to my own mortification, when introducing the team challenge), there's the mixture of keynotes, workshops, delegate talks and team challenge, there's the informality and fun but with the Code of Conduct to ensure people can work together appropriately, there's the fact we individually emailed 100 delegates from UXLibs I to find out what their challenges were so we could help shape the conference, there's the fact that 150 people got to choose which workshops and papers they attended, there's the blind reviewing process for accepting papers, there's the scoring system for the best paper prize that was far more complicated than 'highest number of votes' because different papers were seen by different sizes of audience... There's the fact there's less fracture and division than in most conferences: I truly feel we're moving forward together as a UX in Libraries community. There's the fact that the venue was not only excellent but had a trainset running around near its ceiling that you could stop and start by tweeting at it!

Turns out it's quite easy to avoid All Male Panel. What you do, conference organisers, is you don't put all males on the panel. (Pic by @GeekEmilia)


There's the fact that Matt made completely bespoke badges with individual timetables for all 154 attendees! I can't tell you what mine said (let's say Matt was experiencing some remorse at saying he'd do the badges by the time he got to 'P') but so many people commented on the good-luck messages he put in to all presenters for their slots...

So it was pretty great, overall, despite everything. Thank you to everyone involved.

UXLibs III planning has already begun.