
Librarians and Code


New librarians, or librarians still in school, often ask me how they can get a job like mine. I think this is probably a question all librarians get, but mine comes with an extra question: “should I learn to code?”

My answer to this has always been something along the lines of: “Well, no.” I know many people would say the opposite.

I don’t think code is important to my job, because I do not write code. I shouldn’t write code, actually…I have colleagues who are responsible for any code that might come near me. Code is not my territory. So no: you don’t need to code to be a librarian who works in tech. Content management systems handle the HTML. You probably won’t be the one messing around with CSS either, at least not if you work in a larger library. Librarians are usually not the best qualified people to tweak stylesheets or write software. Those things are hallmarks of a whole other profession, actually. If you learned a tiny bit of code, the truth is, you’d be a terrible coder anyway.

But still: there’s something there. Lots of folks in my shoes would say the opposite: yes! Dear god, yes, please learn to code! I genuinely have no idea who’s right and who’s wrong here.

My feeling on this is that librarians who picked up code learned a lot by doing so, and think that others will learn the same things if they too pick up code. And that might be true. But in the end, the code isn’t the thing. The code is a catalyst for the thing that’s really valuable for a librarian. Somehow the process of learning even a tiny bit of code might be the easiest way to understand the basics of how the internet works, and that understanding helps you to ask better questions, form better plans, make more realistic requests, and integrate your services and your collection more thoughtfully into the wider digital world. But the code isn’t what does it: the code is just the catalyst. Right?

My fear, I suppose, is that in this drive to learn code, someone will actually just focus on the code and will miss the catalytic moment. Because we’re not being very clear about what we actually need you to learn. We haven’t specified. I’m not even sure I know how to articulate it all even now. We need you to understand what’s possible, and what’s impossible. How data travels and is taken up into new places. You need to know that it’s not magic, it’s just content drawn out and drawn upon. You need to really understand what “database-driven” means, and be able to apply that knowledge. You will probably get that from learning some code. But I think it might be more efficient to be clear about the kinds of lessons we need you to learn from it.

And I suspect it’s possible to learn those things without code specifically. I think learning by doing, by figuring things out, is probably going to work for most people, but what are you figuring out? It’s a set of problem-solving skills you’re building, not a skill at coding, necessarily. And some people get that understanding in other ways altogether. I know several outstanding library leaders who never learned code at all, but can make rational, thoughtful decisions around tech. I think they just listen to the people they hire, to be honest. They trust the people who understand it better than they do.

But I suspect code will get the majority of folks where they need to be, though it might be the hard way; I’m not sure. Either way, they need to get there, one way or another.

Maybe I’ve been giving bad advice all along. Or good advice. I have no idea.

Reading, Paper, and e-readers


I’m frustrated by the current state of research claiming that we read better and retain more from paper than from an ereader, and that this is because of the form, that somehow we need the permanency of paper in order to form memories of the plot of a novel. This makes zero sense to me, but I’ve heard the argument enough times at this point to take it seriously. Fortunately, Spark did an episode that investigated this and came to a better conclusion.

If you gave someone a short story and told them to read it in an empty library, you’d probably get a better result than taking someone to an empty carnival and telling them to read a short story there. Not because the empty library is quieter than the empty carnival, or because libraries are just naturally better places for reading. It would be because the person walking into a carnival isn’t prepared and primed for reading while the person walking into the library is. We already know this is true; this is why they tell you not to bring your computer to bed with you to finish up some work, because if you do work in bed on a regular basis, when you go to bed your head will be primed for work rather than sleep.

So I have doubts that these experiments with ereaders and books are telling anyone which form is better for the reading experience per se. They’re only telling us that people are currently primed to think of computers/tablets/screens as things to watch movies on, or play games on, or browse the internet on. Most people are not primed to consider a screen a reading surface.

But some people are. Some people read on screens all the time, for academic work or for fun. For books that don’t and won’t exist in paper, there are audiences who have already made the switch. They must have other cues that prime them for reading from the same screen they use for other tasks. Of course, readers of online books are always sitting in the bookstore as they read. If they don’t like the turn a story takes, I suspect they will back-button out quicker than a paper-book reader will give up on a book they’ve borrowed or purchased. With online novels, there is always a universe of other stories waiting if the current one doesn’t suit.

I would be interested to see studies like this done with more context. How do those who read fiction on a screen all the time fare against people who don’t? As ereaders get into the hands of more and more people and reading ebooks becomes just as common as reading any other kind of book, do the results change? If a person starts reading an ebook and has poorer comprehension results, do those results improve after a month of reading ebooks? A year?

I remember in the late nineties there was some discussion about how to talk about interaction with the internet. Browse won, but I remember someone on the news talking about “looking at the internet,” or “watching the internet.” As someone who was already far beyond “watching” or merely “looking” at digital material, I cringed. You can watch things online, that presenter wasn’t wrong. You do look at stuff on the internet. That guy saw a screen that looked a lot like a tv, and transferred the language and the modes of thinking to it. He was a passive viewer of internet content, and that’s how he framed his experience.

iPads are not about being looked at, they’re about being interacted with. The iPad in particular is the first device to fit into that strange niche between smartphone and computer, a device driven entirely without a proxy roller ball or mouse or stylus or keyboard. You touch the content and it reacts. It’s an engagement device, not a device to be looked at or watched (though you can look at and watch things on iPads, too). It doesn’t really surprise me that giving a bunch of people iPads or ereaders doesn’t yet prime them to sink into deep contemplative thought. People are still primed to look at how their physical touch is interacting with digital activity.

Likewise, I wonder if anyone’s done any experiments on audiobooks. Read a page, hear a page: is one better than the other? I suspect it’s what you’re used to.

For many years I’ve been painfully aware of the anti-ebook league, who are extremely keen to point out how inferior ebooks are. I know there was a similar group who objected to the written word in the first place (“if you don’t need to memorize it, everyone will become a gibbering idiot!”), and then to the printing press (“Bad! Cheap! Sloppy!”). While I still have a too-steady stream of paper books coming into my house, I’m glad books are going digital. To me, the story, the information, the content is the most important thing. Digital text isn’t limited by its font size. It can be read aloud by a screenreader. It can be translated by a braille display. I can twist it, add more notes to it than it contains in the first place. It can be delivered serially, like Dickens did it. Digital text might mean more text, and to me that’s a plus.

Space and Audio


I feel like this is a bit of a tangent, but I keep noticing these things, and I keep thinking they’re interesting. It’s my research leave, right? So I should investigate the things that strike me as worthy of observation, shouldn’t I?

The thing I keep noticing, and keep being intrigued by, is how various services and spaces make use of audio to provide critical information.

As I’ve expressed before, I’m very impressed by the signage on the London Underground. I will have to delve into this again more thoroughly, because I still find it very inspiring. The thing I immediately liked best about it was that it delivers small pieces of information to patrons exactly when they need it, and not a second before. It also works to provide “confirmation” signage, the sole purpose of which is to reassure the patron that they’re in the right place. I’m generally excited to see any acknowledgment of the emotions associated with an experience built right into the placement and content of signage; fear is always a key factor, in public transit as in library services. With the pressure to keep people moving along platforms, through long tunnels and up stairs in crowded and busy tube stations, it makes sense that the London Underground would place so much emphasis on answering patron questions exactly in the places where those questions get asked so that no one has to stop, block traffic, and figure out whether to turn right or left.

That’s not the end of the information-giving. Once on the train itself, there are maps of the entire line on the walls, so you can watch your progress. There are digital signs telling you where you are and where you’re going. This is surely enough, but on top of all this, there’s a voice that tells you where you are, which stop is next, where the train terminates, and, famously, to mind the gap.

It’s overkill, surely. I can see the map, I can see the station names on the walls of the stations as they appear, I can see it on the digital sign. Is it there purely for those with visual impairments? Possibly. But it also infuses the space with very reassuring information that’s frankly easy to ignore if you don’t need it, and easy to cling to otherwise. Even if I know where I’m going, it marks my progress and punctuates a relatively dark journey with no real views (most of the time). It supports the written information and pins it not in space, but in time.


I’m a fan of Westminster chimes. I grew up with them; my parents had (and still have) a mantel clock that chimes every fifteen minutes, day and night. Lots of people find that horrifying, but I don’t. It’s reassuring to me. I don’t especially trust my internal clock; sometimes three minutes feels like ten, and an hour feels like five minutes. When you wake up in the night you feel like you’ve been lying there awake for hours. But the Westminster chime grounds you in reality: I’ve only heard it go off once since I’ve been awake, so I’ve only been awake for fifteen minutes, tops. I like how the chime infuses space with the knowledge of how time is passing. The sound changes from chime to chime; you can tell from the chime which part of the hour you’re in. It’s an audio version of a physical object. It’s an ancient means of providing ambient information.

I think the voice on the tube is similar. It’s providing me with ambient knowledge that I can half ignore.

There was a documentary, or news piece, some time ago about the unlock sound on an iPhone. The sound is gone as of iOS 7, but until then, a very specific, very metallic sound always accompanied the action. You can’t feel the phone unlocking, since an iPhone uses only software keys. But there had to be a sound so we could understand what was happening, and be assured the action was successful. In place of a sensation, a sound for reassurance. A sound to make a digital action feel real.

Libraries are generally aiming to be silent spaces. Having things announced is a nuisance to most people. I think it’s possible that audio cues have a place in the good functioning of a library; it’s just a matter of being supremely thoughtful about it and determining what kinds of ambient information are valuable to patrons, and what kinds of audio cues would be comforting rather than annoying. There’s also the question of placement; there are no voices telling me things when I’m walking through tunnels and following signs, but there are when I’m heading up an escalator, or sitting inside a train waiting for the right moment to head for the door.

My parents have been visiting me for the last few days, so we went on a few city tours on open-topped double-decker buses. I seem to recall seeing these kinds of buses drive past me, with a tour guide shouting into a microphone. Those days are gone. Now they give you cheap earphones, and you plug into the bus to hear a pre-recorded tour in the language of your choice.


This struck me as kind of genius. The pre-recorded tour told me stories and history about the places I was looking at; it wasn’t ambient information, it was a packaged lecture based on my choice of language and volume, and the location of the vehicle I was sitting in. Initially I assumed it was hooked up with GPS so that it would play based very specifically on location, but I discovered when we got cold and came inside the bus that the driver was pressing a button to advance the script. I found that oddly disappointing. I liked the idea of a system tracking traffic and our location and giving me stories based on it. It’s a shared experience, but it’s personal. It’s utterly silent to the people around you, but immensely informative for the people listening. It’s carefully planned and thought out, and no piece of the story is forgotten. The recorded tour goes on all day, and you can jump on and off the bus where you like. That means you can listen to the tour over and over again and pick up the things you missed without asking a human tour guide to be at your disposal. That got me thinking too. How can we help pace information to keep it in line with the place a patron finds herself?

I’ve seen similar things on a different and more self-driven scale. The Murmur project placed ear-shaped signs all over Toronto with phone numbers on them, which played stories about that spot to the caller. We can do better than that now with QR codes or just URLs, since smartphones have become so ubiquitous.
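(As a sketch of what I mean, and strictly hypothetical: a QR code or short URL on a sign could point at a tiny page like this one, which plays a recorded story about that exact spot. The place name and audio file here are made up.)

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Stories from this corner</title>
    </head>
    <body>
      <h1>You’re standing at a storied corner</h1>
      <!-- Hypothetical audio file: the recorded story for this location. -->
      <audio controls src="this-corner.mp3">
        Sorry, your browser can’t play this audio clip.
      </audio>
    </body>
    </html>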

One of the very best examples of audio in libraries I’ve seen is the pre-recorded audio push-cubes in the Rama public library. You know those teddy bears with audio buttons in their hands? Or cards with audio that plays when you open them? You can buy those devices empty now, with 20 to 200 seconds of space for audio. They’re cheap, and they’re even reusable. In an effort to expose children to the Ojibwe language in order to preserve it, the brilliant Sherry Lawson of the Rama Nation uses the cubes to record words and phrases, and places them in relevant areas of her library. A patron can approach an object, see its written Ojibwe name, then press the cube to hear it spoken. She is providing ambient exposure to the children of the reservation by inserting her voice and her language in places where they can easily interact with it.

Perhaps it’s Mike Ridley’s fault for introducing me to the concept of a post-literate world, but there’s something about getting audio information in situ that really appeals to me. Where it provides new information, I think it’s fascinating, letting people lift up the dog ear on a place and see a whole new world underneath. Where it focuses on reassurance, I think it provides a critical service in allowing people to feel found and safe. This is technology that isn’t infrastructure; it’s a bolt-on, an ambient addition, and a simple and cheap one at that. Letting audio form a feeling: that’s the kicker, for me.

Navigation is Dead: Long Live Navigation


For the last year or so I’ve been toying with the idea that website navigation is basically dead. Not to say that it’s not still important, but I’ve come to think about a website’s internal navigation structure (by that I mean tabs and dropdown menus, side navigation, that sort of thing) as the absolute final, last ditch, if-all-else-fails means by which the average internet user will find content on your website.

It’s possible I’m jumping the gun, but here’s why I’m increasingly thinking this way.

When was the last time you went to the front page of a newspaper’s site? Most of us read newspaper articles online, but I suspect most of us don’t do so by navigating to the front page. The latest navigation for newspapers and news organizations generally is probably Facebook, Twitter, and/or Tumblr. You don’t visit the front page, you follow a link someone’s posted in your path that strikes you as interesting, and read it there. I’m not sure I’ve ever actually seen the front page of the Guardian, for example (I certainly can’t conjure up an image of it), but I read Guardian articles all the time. I go in through side doors that direct me exactly where I want to be.

When I come home from work, I catch up on The Daily Show and The Colbert Report. They’re on too late for me to watch live, so I watch the online versions via the Comedy Network or CTV. I have never once gone to the Comedy Network or CTV’s main site to find them. It’s way too many clicks that way. I just type “Daily Show Canada” into my Google search box and it takes me right there. I don’t even care if I’m watching it via the Comedy Network or CTV’s interface; I just click one of the links and get the content I was looking for.

There are some sites that are staples of people’s every day, and in those cases there are things you want on the page that make it easy for you to navigate around. For instance, your email: I don’t want to have to use a Google search to find my inbox and my sent mail. I want links to those. Functional links to things I use every day. Likewise, Twitter needs to put up clear links to my @replies so I don’t struggle to find those. But that’s inching into application design decisions rather than strictly navigation, I’d argue. And applications I use all the time, every day, are different from websites I visit from time to time for information when I need it.

When I’m looking for the Historical Studies department on the UTM website, I don’t go to the main UTM front page. I don’t, even though I work there. I look at a lot of sites every day, and there is no one true classification method we can use that will always be clear to everyone. Every site is different, every site uses different metaphors to organize its content; how can I be expected to remember how any site has decided to arrange content to make it easy for me to find? I don’t remember what navigation decisions UTM made in its site design. I could take a moment to look at the site, scan its existing top-level navigation terms, and use my critical thinking skills to work out where the department might be, or I could just type “UTM historical studies” in my Google search box and be done with it. Type and enter, and click. That’s way easier on the brain than trying to understand someone’s thoughtfully-designed navigation structure.

When I say things like this, people remind me that I’m in the rarefied world of academia, for one (true), and that my job title includes the word “technologies” (also true), so my perspective on browsing the internet based on my own experience and habits is highly unlikely to be universal (absolutely true). However, let me show you some statistics:

[Graph: Three Years of Web Stats]

This is a graph of web traffic for the months of August and September for one page on our library’s website (http://library.utm.utoronto.ca/faculty/blackboard). It’s the front page for frontline courseware support for instructors at UTM, and the portal to all our how-tos and instructions for the courseware tools available to faculty. We are a busy service, and get lots of questions and phone calls, so we know our instructors want and need this information. There has always been clear navigation to arrive at this page. We printed the URL on brochures, put it inside documentation we handed out, had it on business cards, etc. That clear navigation’s utility can be seen in the blue line on the graph, which is our data for 2010. Very low traffic, in spite of the fact that it’s a busy service. Those are the stats when we just put good content up and wait for people to navigate to it if they need it.

The red line in that graph is our data from 2011. That’s the year we stopped expecting people to navigate to the site, and instead started emailing out short messages (we call them “protips”) timed for when the questions are likely to come in. For instance, instructors usually ask us how to add TAs to their course websites somewhere around 5 days before the first day of class, so 6 days before the first day of class we send out an email linking to this page, with instructions on how to add TAs to a course website. For the last week in August and the first couple of weeks in September, we send out nearly one message a day, with a tiny amount of information in it, and a link to this page. See what happened? That’s something like an 8000% increase in web traffic. This page became the second or third most hit page on our site. The internal navigation was exactly the same.

The green line in the graph is our data for 2012, and we were extremely surprised to see another 50% increase from the year before. We learned from our faculty that some of them had started to forward our messages on to colleagues on other campuses, which might account for some of it.

It’s not a revelation to say that publicizing a web page gets you more traffic; it’s probably the most basic of basics from web communications 101. We pushed content, therefore we got traffic. But it made me realize that, like me, other people are much more likely to dive into the interior of a website from the outside (in this case, from email) than to navigate through from the front page. Being directed to the one thing you need is way more attractive than wading through lots of useful but not immediately needful things in order to find the one thing you want. And obviously the need for the content in question is there; if our instructors weren’t interested, they wouldn’t be clicking on the link in the first place. They would just delete the message and move on. The interest is clearly there, and our traffic is growing.

At this point I think I could probably remove this entire section of our website from the main navigation and see absolutely no dip in traffic. I’m tempted to do that as an experiment, to be honest. I have a feeling no one would even notice.

So I’ve started to really question the basic utility of top-level navigation. In a pinch, if you’re really lost and don’t even know what’s available or where to start, I can see it being useful. But for our client base, people we know and know how to contact, I don’t expect my thoughtful navigation decisions to ever even register. I am building navigation for them through a variety of media, not just through our website as we traditionally think of it. Their interface to our website happens to come through email messages; it’s current, topical, and ephemeral. Their interface, essentially, is us. We dole it out over time and place it in the places where their eyes already are, much like my librarian colleagues and friends do when they post messages on Twitter and I click on them.

[Image: glass whiteboard calendar]

It’s a weird way to think, but it’s where I’m sitting just now. I don’t want web traffic for the sake of web traffic; I want our patrons to have this information when they need it, and I realize I can’t change their behaviour to make that happen. I can’t rely on their need to bring them to me and have them muddle through my navigation to find it. I can’t sit behind a desk/website with all the good news and wait for them to come see me. I want to answer questions before they have to be asked; I want to be on the path of their success, and that’s something they define. So I find and build up the navigation that demonstrably works for them, even if it’s unorthodox. In this case, the navigation that appears to work best for this kind of information and this kind of audience is us, Outlook, our calendar of needful topics, and a series of targeted email messages sent out like clockwork every year.

There are of course many such solutions; for me, the key part of this whole experience was rethinking what navigation is and what it means, and learning to stop thinking in such two-dimensional terms. As creatures of the internet, as the majority of us now are, we find information in a wide variety of ways; top-level navigation has got to be somewhere down at the bottom of the list.

National Post Alters a Web Article…based on a Tweet


Well, this is the last thing I expected to happen today.

I read an article online from the National Post about Tom Gabel from the band Against Me! coming out as trans in Rolling Stone today. Unlike the Rolling Stone article, the National Post article kept the male pronouns. So I tweeted the writer. Here’s our exchange:

Not only was this not what I expected from a journalist, this isn’t what I expected from the National Post. I thought we’d end up having a snarky back and forth (like I did recently with @jessehirsh, who treated me like an idiot for raising a question with him about something he said on the radio), and everyone would end up feeling annoyed and wronged. But that’s not what happened.

Colour me impressed. Some random nobody on the interwebs tweets you and you actually alter content because they have a point? Thanks, man. Thanks for listening to me. Thanks for being willing to listen to me. Fantastic. That’s really not what I thought would happen.

I’m not sure what the lesson is here, but the bar has been raised. I will expect other content creators to follow suit now! My pesky tweets will never stop!

Books vs. Screens: The Disingenuous Argument


The UT Librarians Blog posted another authorless post I have attempted to comment on; while they announced some time ago that the blog would no longer put comments in a moderation queue, I seem to be stuck in one. Again. And thus:

The post in question is a link to the Globe and Mail article entitled, “Books Vs. Screens: Which should Your Kids be Reading?” The article contains such wisdom as:

In Britain, University of Oxford neuroscientist and former Royal Institution director Susan Greenfield revealed a far different vision – one that could have come straight out of an Atwoodian dystopia – when she warned that Internet-driven “mind change” was comparable with climate change as a threat to the species, “skewing the brain” to operate in an infantilized mode and creating “a world in which we are all required to become autistic.”

Less dire but no less pointed warnings have come from Maryanne Wolf, director of the Center for Reading and Language Research at Tufts University in Massachusetts. “I do think something is going to be lost with the Twitter brain,” she said in an interview.

The UT Librarians (apparently collectively) said:

Is this something we should be thinking about? Deep Reading vs. Screen Reading? In today’s Globe & Mail, Dec. 12, 2011, John Barber examines recent studies on screen reading vs. what is being called deep reading – something to consider as educators and leaders in our fields.

[Image: On the platform, reading]

And now, finally, my reply from the moderation queue:

This is blatant scare-mongering, and disingenuous to boot. Comparing reading novels to reading tweets is like saying the card catalogue, with its tiny bits of information, was a threat to “deep thinking”.

There are many kinds of reading, and literate people engage in many of them, sometimes within the same afternoon. People who follow Margaret Atwood also, as a general rule, read novels. “Screen reading” pontificators need to spend some time looking at the actual reading (and writing) going on on the internet. Like BookCountry, from Penguin, which is practically brand new, and FictionPress. Look at all that reading and writing going on! Reading and writing of lengthy works, no less, and on screens! If you’re brave, look at Fanfiction.net (there are 56k stories on there about the television show Glee alone) or AO3 (which, for the record, has works over 100k words long with as many views and thousands of comments from readers). Lots of people read online, and form communities around texts. It might not be the kind of reading you want to see, but it’s sustained, lengthy, uninterrupted, and on screens.

We need to stop fixating on the form content takes. What the screen is providing is a platform for people who would never get their work passed through publishing houses and editors, and while you may scoff at that (because we all know money is the ultimate test of whether or not something has value, right?), there is more text to read and engage with now than ever before, and people are engaging. Young people are engaging. Some of that text is in short format (like Twitter). Some of it is so long publishers would balk at the idea of trying to publish it in physical form. It doesn’t matter if it’s on a screen. Content is content. This new form has the potential to save the monograph, not just to kill it. The form of the novel, the short story, the extended series, the monograph are all alive and well and being published online.

I think, as librarians, we should be concerned with providing access to content, and, perhaps, providing platforms for content to be published, found, and engaged with on every level (deep or browse). Marrying ourselves to paper is the death knell of this profession.

[Image: Spooky and I enjoy the Nook]

The Technology Trifecta


I work with the soft side of technology. I don’t write code (I only have the tiniest bit of coding ability, and I haven’t used it in years), I don’t do hardware. I don’t monitor servers. The soft side of technology is all about working with the people trying to use it, and helping them to understand it. I’ve come to believe that there are three key things required to help other people use technology effectively. I’ve come to this realization as part of the rethink and reworking of our faculty training program this year, and it’s forced me to think about the whole experience from another angle.

Granted, my background in theological studies and my penchant for writing fiction in my spare time probably play a role in my perspective on this, but I’m going to run with it.

The (soft side) Technology Trifecta

1. A Good Metaphor


All technology requires a good metaphor, something people can seize onto. The wrong metaphor can leave a technology languishing for ages. Metaphor is how the brain learns what to do with a thing. When they called it “email” (a stroke of genius), everyone knew what they could do with this network messaging system: send and receive, store, forward, add attachments. That metaphor is what, I believe, makes email the most obvious and easiest-to-learn application we’ve got. Blogs had a good one with old-school journaling and diaries (which explains why the first run of blogs were all intensely personal). Without decent metaphors, our patrons will struggle with the web. A good metaphor might take years to think up, and we might only come up with one really good metaphor in our lifetimes, but I think coming up with them is a worthy pursuit.

2. Faith


I had the experience recently of having to investigate something pretty dire, and then relay my findings back to a distressed and disconcerted instructor. He had to take my word for it that the thing he was afraid had happened had not in fact happened. I had to reassure him that he could still trust the system. If you don’t have faith in the system you’re using, if you think it’s possible that, without your knowledge or understanding, it’s revealing secrets or displaying your content to the world without your permission, your willingness to be creative with it will rapidly vanish.

There’s a difference, however, between selling someone a system and helping them to have faith in it. You don’t have to adore a bit of software in order to have faith in it. You need to know that when you trust it with information it will do what you expect it to with that information. Setting those expectations appropriately helps people develop faith in a system. I see my role not as making you love the institutional system, but as helping you have faith in it.

The best gift I could receive in this situation is to have the instructor believe me when I explain what’s happened. I want him to have faith in me, too. (He did.)

3. A Mac Friend


This one takes a bit of explaining. Back in the 90s when I first started using Macs, I wasn’t comfortable making that decision on my own. Everyone I knew was a PC user: what if I ran into a problem? There were no Mac stores then. I would have been on my own. I might not have stayed a Mac user if it had not been for the one guy I knew who used Macs. I had my Mac friend, and I knew he could help me with the things I didn’t understand. Knowing I had a Mac friend meant I could try things and feel comfortable knowing there was someone I could turn to.

In a meeting several months ago, a retiring librarian told me she wanted to switch to a Mac but wasn’t sure she knew what she was doing. I said to her, “It’s okay. I’ll be your Mac friend.” That was when I realized that I didn’t need a Mac friend anymore. But I had become one for other people.

Of course, this is the genius of the Apple Genius Bar: they sell Mac friends.

I think every technology needs a Mac friend, and that’s how I’m currently framing faculty technology support. They may not need you to walk them through every “click here” and array of options. They may just need your help to get them started, and your reassurance that you are there for them when they hit a wall. They have a Mac friend; they can try things and not be afraid of having to dig themselves back out on their own. It’s like a safety net: personal, one-on-one, on-call reassurance.

We’ve spent years focusing on the content of training when it comes to technology, not realizing that the most important thing we were doing while giving that training was just demonstrating that we know what we’re doing and we’re here to help.

So that’s what I’m focusing on now. I know what I’m doing, you can trust me. I’m here to help you, not just now, but all year long. See this thing? It thinks it’s an archive. Go play with it. If you run into trouble, I’m always here to help.

Co-Working in the Library


I read this great post about coworking in the library, which recommends that public libraries dole out space to freelancers who’d rather work somewhere other than home. People put out real dollars for coworking space; why not use the library? In place of cash, they could donate their time and skills.

I like this idea. I don’t work in a public library, so I got to thinking about the academic library equivalent: doling out office space in the library to faculty/postdocs/graduate students in exchange for their time and skills.

(Probably doctoral students. They’re the loneliest.)

In the article the focus was on educating the public via these coworkers. Our students generally already have access to the faculty and graduate students through courses and as TAs, but they might appreciate these folks for doing different kinds of work. Maybe very (very very) specific help, or a workshop or two, or something like that.

But what if these doctoral students could cowork in the library, working on their dissertations in the company of other doctoral students? A crowd noting when they’re missing, someone keeping tabs on them, a tribe looking out for them and bringing them a coffee every once in a while? And then, for a bit of time every term, they help us with library projects?

Say: help us learn R and see it applied? I know a doctoral student doing some really fascinating work on power dynamics in the classroom; that could be extremely useful for our instruction librarians. Someone like that could help us rethink our teaching and training strategies. I’m sure there are some sociologists who wouldn’t mind hanging out with us for a term and examining the social capital under our roof. I’m sure there’s lots of interesting research going on that could improve the workings of a library, or help us see our work from a different angle. And the doctoral students could form a little tribe and help each other get their dissertations done.

We could accept applications, and try to put a group together that had something to offer each other. Some odd connections, maybe. Or none at all, who knows.

Not that we have any office space to dole out at my place of work. We’re massively too full for that. But still. Neat idea.

Retro Web Design


It happens in fashion, but I’ve never seen it happen in web design before. I guess the web is now officially old enough that old trends can make a return in a new way, because that’s exactly what’s happening.

Frames
Frames are generally a faux pas in web design. As we move into the semantic web, where content lives in one place and is mashed together in another, it makes sense to use whatever options are available to push content around the web in as many ways as possible. We used to rely on the embed tag for this, but lately frames, or iframes, are making a return. They’re not doing layout work anymore, though; now it’s just about seamlessly bringing content from one place to another. YouTube is the big example of the return of frames: the new embed code you snag from YouTube is actually just a frame.
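(For illustration, this is roughly the shape of that snagged embed code; VIDEO_ID stands in for the actual video’s identifier, and the dimensions vary.)

    <!-- YouTube’s embed snippet: not an object or embed tag, just an
         iframe pointing at an embed URL on YouTube’s servers. -->
    <iframe width="560" height="315"
            src="https://www.youtube.com/embed/VIDEO_ID"
            frameborder="0" allowfullscreen></iframe>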

This is one trend I’m particularly grateful for; I have a little project that involves pushing local campus content into the front page of our course management system using frames. We create tiny little web pages hosted locally, and frame them into the existing courseware system. The primary advantage of the frames is that they keep 100% control of the content in the hands of local content creators; no one needs admin access to the courseware system in order to update the content. No need to overhaul everything just to do one little thing. It’s just a matter of updating the tiny web page, and voila! Students get to see fresh content via a website they already log into every day. Win win win!
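(A minimal sketch of the trick, with an entirely made-up URL: the courseware front page includes an iframe like this once, and after that we only ever edit the tiny local page it points to.)

    <!-- Hypothetical example: this sits on the courseware front page.
         Updating the tiny locally hosted page changes what students see,
         with no admin access to the courseware system required. -->
    <iframe src="https://library.example.edu/courseware-news/current.html"
            width="100%" height="200" frameborder="0"
            title="Library courseware news">
      Your browser doesn’t support iframes.
    </iframe>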



Animated GIFs
This one took me by surprise. I’ve been paying more attention to Tumblr lately, since it’s just such a radically different approach to online community and communication than I’m used to, and I keep seeing all these wicked animated gifs there. Animated gifs used to be the poor man’s video, and the poor web designer’s idea of cute, interesting content, but these things are works of art. We can have video on demand whenever we want now, so this is more targeted, more subtle. Bringing a tiny bit of movement to an otherwise still picture, capturing a single moment in a still. We haven’t integrated this trend into the library yet, but I’m working on it. Simple, low key, and really cool. No more rotating mail or under-construction signs, oh no. These are reminiscent of Harry Potter-esque moving images. A touch of motion in a still space. Amazing.

What’s the next thing to get an overhaul, cursor trails? MIDIs? Banner ads? What?

iPhone: week one


For all my tech-geekery, I’ve never had a smartphone. There hasn’t been a really good reason for this, aside from a vague attempt at fiscal responsibility and the reality that I spend my life essentially in one of two wifi zones (home, work). I figured I didn’t really need a truly mobile device that connected to the internet. Couldn’t I have my (short) commute time away from it? It just never seemed that important. I’ve been following the developments, and while never anti-smartphone, I’ve just never been a good phone person. (At least: not since I was 16 and on the phone constantly.) There are so many other interesting ways to communicate: talking on the phone just seemed like the least imaginative. I don’t have a home phone, and my work voicemail is something I have to remind myself to check.

The internet is, largely, my passion in life: communication, productivity, creative thinking with internet tech, that’s what I do for a living. It’s also something I enjoy in my off-time; I’m genuinely interested in web innovation, and my explorations and thinking don’t stop when I leave the office. I understand the app revolution, and while I’m on the side that believes the apps are probably only temporarily in power and the mobile web will probably take over, I’m intrigued by the apps and the interesting things developers and users are doing with them. So you’d think I’d have been on this smartphone thing ages ago, but no.

In spite of my obvious interest in all things online, it wouldn’t be fair to classify my web experiences as addictive or compulsive. I’m absolutely okay with pulling the plug at pretty much any time. I can take a long road trip without the internet, and I don’t miss it. I love to read, I love to talk to people, I love to sit and think and muse. Contrary to the “information overload” debate (which I think is code for “I procrastinate and the internet makes it too easy”), I don’t find my connection to the internet either overwhelming or demanding. It’s a give and take. If I don’t want to pay attention, I don’t. When I want it to entertain me, or confuse me, or engage me and make me think in new ways, it does. So while I thought the smartphone thing was pretty cool and clearly an intriguing and useful development, I didn’t actually have one of my own.

Until last week, that is. I finally got on the bandwagon. And I’ve been diving in head first. No holds barred, no panic about the 3G usage. Not in the first week, at least. I gave myself permission to be gluttonous with it, to roll around in it and see how it felt.

The only time prior to now that I thought I’d like to have a smartphone was when I was out to dinner. Not because my dining companions have been sub-par, but because I have an ongoing fascination with food history. I like to know how the composition on my plate came to be, and what historical events I can credit for it. This is easy with things like potatoes and tomatoes (“New World”, obviously), but garlic, carrots (did you know medieval Europeans ate not the orange root, but only the green tops of carrots?), bean sprouts, onions, cows, pigs, chickens, saffron, pepper, etc. It’s really the only time I’ve felt the lack of the internet. I want to look up some historical details at very odd times. I figured a smartphone would be helpful for that. (I can’t really carry around a comprehensive food history book everywhere I go, can I.) Filling specific information needs: in spite of my own certainty that search is basically dead, in the back of my head I figured this is how I would use a smartphone. I was not right.

But it’s been different than I expected. First, and most obvious, I suddenly always know when I have email. I bet people hate that. Email is my second least favourite means of communication, so putting it at the front of the line has mixed results. As I said, I’m reasonably good at not feeling pressure to look at anything when I don’t want to, but the thing pings when I get new email, and it makes me curious. But even in the first week, I don’t look every time. I didn’t stop my conversation with my mother when I heard it ping. I did, however, answer a question from an instructor while on the Go train back home on Saturday. If you want to be distracted, access to the internet via smartphone will certainly act as a decent distraction.

My best experience with it so far has been a trip to my home town, Guelph. It’s early October, and suddenly this week autumn appeared in full colour. If you’ve never experienced a southern Ontario fall, you’re missing something great. The cool temperatures at night mixed with the remaining warm days turn out a crazy quilt of colour across the landscape. It’s only when there’s enough cold that you get the fiery reds and deep oranges. We’re in a banner year here, and on the bus on the way to Guelph I saw this awe-inspiring riot of colour out the window. Purple brush along the side of the road, a scintillating blue sky, red, orange, yellow and green leaves on the trees; this is the kind of thing that makes me happy to be living. The kind of thing I want to share, just out of the sheer unbelievability of it. These fall colours are incredibly ephemeral, too, so capturing them and sharing them has additional appeal.

So this phone I had in my hand, it has a camera. This was actually my first experience using it. And I discovered quite by accident that I could snap a picture and then post it to Twitter with a few swipes of a finger. So there I was, first on the bus, then walking down Gordon St. in Guelph, 22-degree weather, the sun warm on my skin, and while I was away from home, away from my computer, I was sharing my delight in the beauty around me, capturing it and sharing it effortlessly. It was one of those days when I felt like I could hardly believe the intensity of what I was seeing, but I was able to share it, record it, all as part of the experience. I’m not a great photographer: mostly I leave the camera alone and just experience my life without documenting it. But sometimes documenting it is part of the experience, adds to it. So, on my 30-minute walk from the University of Guelph to my sister’s house, I shared the colours around me and saw the responses from my friends and colleagues far and wide. I was no less on the street, no less engaged. But I was also interacting with the world via the internet. I loved it. I was in two places at once. I had voices in my head. I was connected in two places. It reminded me of Snow Crash.

I’m sure this is no revelation for anyone who’s already had a smartphone all this time, so mea culpa. I was aware of the sort of ambient/ubiquitous computing, I just hadn’t had the chance to experiment with it myself yet, to see what it really feels like. I think the interface is still a bit clunky, too limiting, but the touch screen is getting closer to effortless. What’s wonderful about it is its seamlessness; picture to twitter, responses, all so easy to see and engage with. And engaging online isn’t even really drawing me away from my real life experience. It’s just a part of it. I’m not thinking about cables or connections or keyboards. Technology is getting to be close to invisible, just present and available.

As I sat on the train, reading fiction online, leaving comments, checking out links on Twitter, reading educause research, answering work email, I realized that I would never be bored again.

I read someone’s response to the iPad a few months ago where he returned his iPad for this very reason: the threat of never feeling bored again. Boredom as critical experience, necessary experience. I can understand that, but of course it’s all in the decisions that you opt to make. We are invariably drawn to the shininess of instant gratification via the internet, of course. But even that can get boring, eventually. You do reach a point where you’ve read it all for the moment, and you’ll have to wait for more to appear in the little niche of reading that you do. Does that force you to branch out, find more and more interesting things? That’s not necessarily a terrible thing. Does it allow you to avoid reflecting, being with yourself in a place?

One of the very early criticisms directed at the iPad was that it was a device for consumers, on which information is merely consumed, not created. That jarred me, as it felt untrue and frankly a bit elitist. Creation doesn’t just mean writing software or hacks. Creation can be writing, or drawing, or singing, or sharing reactions and thoughts. But I see now, with both the iPhone and the iPad, that this criticism is both true and false. It’s true that these devices make it very easy to consume content created by others; it’s easier to browse and read than it is to write, for instance. The keyboard is pretty great, but it’s not as easy to use as the one attached to my laptop. But what I choose to browse/read/consume is still my choice; just because it’s on an iPad doesn’t mean that it’s all commercial content, not while the web is as relatively free and easy to access as it is. Most of my reading on these devices is not sponsored and not created by mainstream media. I’m not just reading the New York Times. I’m reading blogs and archives, primarily. And why are we so anti-“consumer”? We need to consume the creations of others as part of a healthy dialogue, after all; there is a level of pop consumption that’s a good thing. Neither of these devices is as simple as a TV or a radio, where there is a clear creator and a clear consumer. I am also a creator on these devices, a sharer of experiences, of thoughts and ideas. My experience walking down the street in Guelph on a beautiful day was a case in point; I was clearly a creator, sharing what I saw, engaging with others. That’s not a passive experience. Sitting on the train reading someone’s review of a movie, or a fictional take on an old idea: I’m consuming as well. In places where I couldn’t do so before.

It feels like there are fewer spaces in my life. The level of connection I’m currently experiencing seems to make my days blend together into one long back-and-forth with any number of people. Is this less downtime? Downtime transformed into time spent in this otherworld of communication and information? Am I reflecting less?

I started with a bang, so I guess we’ll see how much I keep at it. Will it get old? Will I return to my former habits, with less time testing the limits of my devices? It remains to be seen.

Screencasting Tools


[youtube http://www.youtube.com/watch?v=3JNmgBatsIo&hl=en_US&fs=1&]

My plan, for July, is to set up a place where we can all share the cool software, web apps, ideas and tricks that we think the rest of the world should know about via screencast. That way we have a great big searchable index of all the cool things available to us on the internet. In order to get there, first I need to share some easy ways to make a screencast. Hence the video above.

Admittedly, I’m currently addicted to screencasts. I’ve never been a big fan of them before, but these tools are so easy to use, and I can get more across in a screencast. I love text, but sometimes it’s not the best medium. And since I found all these super easy screencasting tools…there’s just no excuse not to try.

I picked four tools for this introduction: Screenjelly, Screenr, Screentoaster, and Screencast-o-matic. They all have their pros and cons, but they’re all dead easy to use. Give one of them a try, let me know how it goes.

Fanfiction as Creative Commons


She seems to be under the impression that everyone who writes fanfiction wants to be just like her (i.e. a successful published writer named Diana Gabaldon), but because they are just not as dedicated/original/awesome as she is, the best they can do is try to write exactly like her. With her characters and everything. (link)

I’ve been skimming through the great fanfiction debacle. For those not following along, I’ll summarize: Diana Gabaldon, fantasy fiction writer, discovered that a group of fanfiction writers were auctioning off custom-written fanfiction based on her books, with the proceeds going toward the hospital bill of an uninsured breast cancer patient. When Diana Gabaldon caught wind of this situation, she did not like it one little bit. She posted about her opinions of fanfiction in general (not something she’s avoided airing before: she has previously stated that fanfiction is like someone selling your children into white slavery). She struck a nerve by describing fanfiction as immoral and illegal, and then went on to wax poetic with analogies for fanfiction like “You can’t break into somebody’s house, even if you don’t mean to steal anything. You can’t camp in someone’s backyard without permission, even if you aren’t raising a marijuana crop back there.” And more inflammatory yet: “I wouldn’t like people writing sex fantasies for public consumption about me or members of my family—why would I be all right with them doing it to the intimate creations of my imagination and personality?” The posts themselves (there were three in total, each garnering a significant number of comments in reaction) have been deleted from Gabaldon’s blog, but have been reproduced for posterity here. Obviously, these words generated a lot of hurt feelings, and many others, fanfiction readers, writers, and published authors alike, have weighed in.

What I find so interesting about the whole mess is the basic misunderstanding, summed up so succinctly by one of the commenters on the fandom wank post quoted above: Diana Gabaldon appears to believe that the purpose of writing fanfiction is to mimic writers. And perhaps, if understood from this perspective, her reaction makes sense.

In the mid 90s, when I was finishing my undergraduate degree, I did a research project on an oddity I noticed in 19th-century journalistic sources: women in factories wearing outfits that would have cost them their entire yearly wage to buy. I wondered what would possess a woman of limited means to buy such a dress, and uncovered a whole paranoid segment of literature in which the upper classes were unrelentingly scornful of the working classes who sought to “pass” as above their station. There was a great deal of worrying about this possibility, and certainty that such “greasy silk” would never really convince anyone. Once I started to dig into the working-class side, another motive appeared; it wasn’t limited to fancy clothes, either. Furniture and general household objects, all sorts of things, including fake dinners, complete with the rattling of silverware even if they had no food, to keep up appearances. And then I understood: while the upper classes saw their underlings trying to “pass”, the working classes were actually communicating amongst themselves. They were signaling to each other that they were doing okay, doing great, doing better than their neighbours, no matter what their actual circumstances. The upper classes were there only as a metaphor, as the providers of a language of symbols they could use to communicate, not with the upper classes themselves, but with each other.

This is pretty much exactly the same thing that’s going on in fan communities, including the scornful, wealthy observers. While authors see amateurs stealing their work and possibly trying to masquerade as one of them (usually very poorly, laughably poorly, and the wealthy, educated, comfortable elite has no issues announcing that fact loudly and proudly), fan writers are really only communicating within their own group, to each other. What those on the outside of these communities fail to understand is that any one work of fanfiction rarely stands alone. It is part of a larger discussion about who these characters could be, what these places are like, and working through the issues of the moment within the community itself. This is why it’s often possible to track the development of a fandom version of a character regardless of who the writer is. Fandom tropes come and go; objects, jokes, ideas, and themes come into and out of style within the culture of the fan community. It’s up to each writer to tackle these things in new and creative ways, to contribute to the narrative behind these characters, these ideas: that’s the challenge, that’s the fun of it. It’s not about you, Diana Gabaldon, privileged writer with a comfortable living and no concept of fan community. It’s about us.

Of course, all fan communities are rooted in the original text (whether that text is in fact text, or video, or any other medium); that text is the language that everyone understands. It’s the commons from which everyone feeds. All creative work happens on top of that commons, and subtle differences between the canon action and the story presented carry a ton of meaning. This shared language, structure, place, and cast of characters is what brings strangers together and gives them a common location from which to start.

This is exactly how biblical stories are thought to have developed. Writers would take a standard story that everyone knew (the garden-paradise, the tower of Babel, etc.) and embroider it in a particular way. The way you choose to embroider a known story is where all the politics and challenge lie; it demonstrates your take on the story, your comment on the workings of the day. In the story of the garden that we understand as the standard one, Adam and Eve are thrown out of the garden; in another, they walk out of their own accord. These are the decisions that tell you what the author means to say with his version of the story: are humans powerful or powerless? Are we here because we outsmarted God, or because we are being punished? Should we be proud or humble? The author is communicating something above and beyond the story itself, using the story elements as tools. If you don’t know the base story, you’ll miss the whole point, the meaning behind the differences. You’ll think it’s just a story.

Published writers unfamiliar with this kind of community will say, “go write your own story! Stay out of mine!”, which displays a basic misunderstanding of the whole point of fan communities. If we were all writing our own, we wouldn’t have the shared language to work from. I couldn’t read your story and say, “hm, so you think there is a psychological basis to have character X go this way; well, that seems reasonable and I can see where you’re coming from, but it doesn’t resonate with me. I’m going to write something indicating the opposite, which is also reasonable and arguable, as you shall see.” The first writer will project one tiny element in one direction, and another will come along and build on that, pushing boundaries in another way. You can see characters in fandom as great big trees: starting with a trunk in the commons as part of the original work, then branching off as the community wrestles with a character, pushing him in different directions. Camps form; some people see a character as essentially one way, and others see the opposite. People from the camps gather and further refine ideas together, with waves of creativity taking them off in new directions altogether from time to time. If everyone were writing their own story, there would only be a single branch. There wouldn’t be a whole community getting together and sorting out all the ways a given character might go, and writing each and every direction.

The original author is largely irrelevant to this entire process. S/he can step in and add some elements, which might make one faction feel triumphant in their “right” interpretation, but many more couldn’t care less. (Most slash fandoms, for example.) Interpretation of canon material springs from the canon material only; if the book leaves arguable room for a character to become a lawyer, or be gay, or be straight, or marry his best friend, then some part of the fandom will celebrate him in that way, no matter what the author says about it or what the author would prefer. Fandom is about the various interpretations of the collective, not the desires of the individual.

While many fanfiction writers want to be published authors one day, and in fact many former fanfiction writers have indeed gone on to publish their own original work, the majority do not. This is where Gabaldon is so confused; most fanfiction writers write to participate in this larger community of interpretation and imagination, following not only her lead with her characters and her world, but the lead of all the fanfiction writers who came before and laid the groundwork, establishing rationales and potentialities. A fandom once born tends to feed itself like a brushfire. Many fanfiction writers get into the culture not by reading the original text, but by reading fanfiction, which by its very nature begs the reader to answer it, to add their own layer, to contribute. Characters leave their original stories and live a million other lives through these multiple lenses, picked up and reconsidered, refashioned. No one’s trying to pretend to be Diana Gabaldon; no one thinks their version is a replacement for the original, any more than a branch is a replacement for a trunk. Instead, fan communities face inward, sharing their stories, their ideas, their interpretations with other fans. The creative commons of culture, including books, movies, tv, and video games, provides the base layer on which fandoms begin to create their scaffolds, which spawn more and more scaffolds on which to hang a new story every day.

How to Create a Useful Social Network

The last time I took a written test, I found myself very frustrated. I was sitting by myself in a room, answering questions on a sheet of paper, cut off from the large network of people I have digitally gathered around me over the years. The questions were testing my knowledge, not how I could put knowledge to use with the help of my extended social networks, which, practically, is how I would solve the problem. We are increasingly living in a world where our general understanding of things is more important than the particular details we can remember; we are using our brains more to make sketches of how things work and letting things like Google and our social networks fill in the blanks. Rather than spending time memorizing, we are jumping up the ladder and processing meaning and use. We expand our understanding knowing that the details will come via our always-on internet connections.

And this is why your social networks are important. You store information in your social networks, in the people you trust and communicate with. One of your friends reads a lot of historical novels; when you need to know the name of Henry VIII’s second wife, you can ask him. Or you can just Google it. You don’t need to store that name in your grey matter. You know you don’t need to; you know Henry VIII had a second wife. And that’s largely enough. Your friend would be happy to chat with you about English history, and when your friend stumbles into an area you’re interested in, you’re happy to chat with him about that. Reciprocal information-sharing. Two heads are better than one!

Step one in creating and using a social network is to acknowledge that it’s there. Asking a friend is something they let you do on TV game shows, but we often don’t see that knowledge network as real or valuable in our professional lives. But it’s probably the biggest asset we have. Your social network is your living library. You are part of other people’s living libraries. One of the best things you can do is to contribute to your network when they need your obscure knowledge and educated opinion. Engage with your network; provide ideas, thoughts, where required. Let your network shine by employing your knowledge. Then you can do the same.

I would comfortably posit that people at certain stages in their lives don’t have functionally useful networks. This might be because your network isn’t comfortable in its knowledge yet, or that knowledge isn’t yet solidified, or the individuals in your network haven’t yet had a chance to set out on their own and develop knowledge and experience independent of their peers. If everyone in your network reads the same books, has similar summer jobs, and lives in the same town, that network isn’t going to be terribly useful to you. So branch out a bit: cultivate difference. Embrace it. Share your experiences. Become expert at something. It doesn’t have to be something lofty; it could be about gardening in a microclimate, or knitting, or the history of a pop band, or the works of Margaret Atwood, or doing laundry. Become the go-to person. Everyone has expertise in something; if we pool all that expertise together, we get a really interesting resource that makes us all better people.

I’ve found that the deeper I dig into my passion (which is my work: internet apps in academia), the more obscure my knowledge and expertise gets. And so does that of my friends and my peers. So my networks have become really interesting and rich. I know that if I announce an opinion on a social network (facebook, twitter, my blog, etc.), I will surely get some diverse responses. Because the people I care about are coming from so many different spaces, I am enriched by interacting with them.

We largely categorize this kind of interaction as “social” and therefore “fun” and therefore “not work/serious”. But interacting with our networks is often the key that opens up whole new worlds for us. Our friends and our peers shape us, just as much as official, serious education and information do (likely far more). Let’s just acknowledge that while our friends are great and we blow off steam and have fun with them, they are still valid sources of information and growth for us. Often when we’re working on a thorny problem, and have a few IM windows open, and Twitter, and Facebook, and are composing a blog post, we’re not just messing around on the internet. It might be fun, it might be building our friendships, it might look like we’re not paying proper attention, but in actual fact we are learning and processing and drawing on the collective knowledge of our networks. Even pure socializing, pure “not-work”, is part of building a real and useful social network. We are laying the groundwork to trust and share with our peers.

So: is it a bad thing to have facebook open at work? It can be if it’s distracting you from getting something done. I remember back at library school everyone would open up their IM clients and complain about the assignment we all had due. It can distract, it can act as the thing you do instead of doing what you need to do. Or, we can use these tools to build ourselves. We can use them as our interactive library. The thing itself isn’t the problem; it’s how we use it.

This is largely why I like to share what I’m thinking about or experiencing via social networks. I know that many of my friends and peers find it engaging and thought-provoking professionally, and I find the same when they share their work with me. I get to benefit from their learning when they share it. My professional development expands via sharing. When I attend an event about a subject I’m only passingly familiar with, I go to that event with the collective knowledge of my network, who correct my assumptions and add colour to the details I learn.

So embrace your social network. Cultivate it. Add to it the people who challenge and inspire you. Let your network build you into the sort of person you want to be, and return the favour.

Twitter and Libraries

In preparation for our new library website, I have been working on some social media policies. I’ve never really been much of a policy person before, but I recognize that because I am bringing in some standard social media tools, I’m going to have to define some best practices. I got my first blog in 2001 and had many conversations back then and ever since about what is and is not appropriate content; I’ve had many years to think about it and get comfortable with my own boundaries. As I prepare to give each content creator in our library a blog, I realize that a policy might be the best way to share some of that experience. No need for everyone to stub their toes and scrape their knees via a professional medium.

Blogging policies are actually pretty easy to generate these days. There are tons of them around, since many industries encourage corporate/professional blogging, and most have developed policies for them. Maybe it’s also easier to do because we have, I think, determined the distinction between a personal blog (like this one) and a professional one. It’s not a foreign concept.

The hard part comes when trying to come up with a Twitter policy.

I posted both my draft blogging policy and my draft twitter policy on twitter to get some feedback from people who use these services. Here they are, for your information. The blogging policy starts with the legal language and then moves into guidelines; the Twitter one doesn’t have as much legal, since I think Twitter’s general TOS covers that.

These two are actually contained in one document on my side; I split them up because at first I wasn’t going to post the Twitter policy. I thought it would be…controversial, not helpful to anyone else, not useful outside our very specific context. I expected it to be widely disliked. I think what people are expecting is something more like this: some friendly guidelines that help a librarian engage with her patrons by treating Twitter as a personal, interactive communication medium. My guidelines are very nearly the opposite of that.

Now: as a librarian who uses Twitter a lot, follows a lot of librarians, and gets into a lot of discussions on Twitter about library issues, I understand where people are going with their personal guidelines. I suppose I’m the last person in the world who should tell another librarian how to use Twitter personally. As a person. As themselves. For themselves. For their own development. Reading through those guidelines, I can almost hear the chorus coming from all the non-Twitter, non-social media librarians of the world: “When am I supposed to find the time for that?!” I love using Twitter to share and question and communicate, but I’m not sure it’s the best use of an institution’s time. Which is why my policy runs counter to what I do personally.

So I guess my policy isn’t so much for the people who want to use Twitter the way I do. It’s for people who don’t, who have no interest in social media, but who still need to communicate with their patrons in the widest possible way.

Here are the reasons why I want to use Twitter for our library website and for our digital signage (a sketch of the plumbing follows the list):

  • It’s easier/less intimidating to post to Twitter than to write a professional, thoughtful blog post
  • Because it’s so easy, I’m hoping I can convince the uncertain to make easy updates via Twitter that I can distribute throughout the website in key, relevant places
  • Twitter updates are the perfect size to feed onto our brand new digital signage, which is mounted in front of every elevator and pointing at every angle in our Information Commons
  • I can get many updates a day from library staff to the digital signage without having a login to the digital signage software
  • I can invite many people to update a single Twitter feed without opening the website up to risk by having many people update one node
  • I can get student staff input on a Twitter feed without giving them content creator status on the website
  • Unlike our website, Twitter can be updated from a phone, which means we are more likely to get rapid updates from our campus partners and IT staff
  • My current means of communicating things like “Blackboard is down! It’s not just you! We’re working on it!” is to write it on a white board and roll it out in front of the main doors.
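
For the curious, the plumbing here is very little work. Below is a minimal sketch in Python of the broadcast idea, assuming the public RSS feeds Twitter exposes for user timelines and a made-up account name; the signage hand-off is a placeholder, not our actual signage software:

    # Poll a library Twitter account's public feed and hand the newest
    # updates to the website and the digital signage.
    import feedparser

    # Hypothetical account name; the RSS URL is an assumption for illustration
    FEED_URL = "http://twitter.com/statuses/user_timeline/ourlibrary.rss"

    def latest_updates(limit=5):
        """Return the most recent status texts, newest first."""
        feed = feedparser.parse(FEED_URL)
        return [entry.title for entry in feed.entries[:limit]]

    def push_to_signage(messages):
        """Placeholder: write updates wherever the signage player reads them."""
        for message in messages:
            print(message)

    if __name__ == "__main__":
        push_to_signage(latest_updates())

The appeal of the design is that one feed serves many outlets: the website, the signage, and anyone’s Twitter client all read from the same place.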

I’m not planning to use Twitter for Twitter’s sake. I am advocating the use of Twitter as a broadcast medium, as unpopular as that might be. I’m not sure Twitter is really at its best when it’s conversational, though I may be in the minority on that. There are so many better conversational media, and we’re using those too. We’ll have multiple meebo widgets scattered throughout the site; some staff want a personal one. If you want to have a conversation, we will ensure that you can. Twitter actually is a broadcast medium, as far as I can tell.

Maybe this is a redefinition of the term “broadcast”. On Twitter, I broadcast my thoughts, my ideas. When I’m at a conference, I broadcast a lot. My use in that case isn’t dependent on anyone reading my broadcast or responding to it. If someone broadcasts their own response to what I’m saying, I can broadcast a response back. Blogs are a broadcast medium as well, in very much the same way, in spite of all the hype about the conversationality of blogging. Just because it’s a broadcast medium doesn’t mean we’re not paying attention to its context or responding to questions or comments around it. Not using Twitter to @reply to singular users in public doesn’t make it less useful, in my opinion. Or even less personal, less engaging, or less a good use of the medium.

The great thing about Twitter is that I can use it this way and it won’t affect anyone else at all; in fact, I don’t really care how many other Twitter users follow our broadcast Twitter account. I don’t anticipate that our students will; almost none of them (statistically) are on Twitter to start with, or have any interest in using it. I don’t want to exclude them by using Twitter-specific conventions or lingo. My goal is not to draw them into Twitter or increase their use of social media (not with this initiative, at least). Our use of Twitter in this way serves our needs first; we have vital information to distribute to students in our own building and campus, and currently have very limited means of doing so. We’re going to use Twitter to distribute it in a way we’ve never been able to do before. If it happens to serve a Twitter community at the same time, I’m delighted.

In short: I wrote a couple of social media policies for libraries as institutions rather than for librarians as individuals. They may or may not be useful, interesting, or appropriate to your situation. I’m still not sure how I feel about them myself. But I will certainly be tracking how it works this year.

Any feedback or comments on the policies are gratefully accepted, and will probably spawn more navel-gazing and fussing on my part.

Digital Normals

This may be my favourite bit of research lately. Teens aren’t internet superusers: if anyone is, it looks like it’s adults.

Pull this out the next time someone regales you with more anecdotal evidence that the kids these days are “digital natives” and we cannot understand their ways.

The Death of Newspapers

My old friend Michael drew my attention to an article by Michael Nielsen about changes in publishing and how paradigm shifts catch companies by surprise. In short:

Each industry has (or had) a standard organizational architecture. That organizational architecture is close to optimal, in the sense that small changes mostly make things worse, not better. Everyone in the industry uses some close variant of that architecture. Then a new technology emerges and creates the possibility for a radically different organizational architecture, using an entirely different combination of skills and relationships. The only way to get from one organizational architecture to the other is to make drastic, painful changes. The money and power that come from commitment to an existing organizational architecture actually place incumbents at a disadvantage, locking them in. It’s easier and more effective to start over, from scratch.

It’s not that they’re malevolent; they’re just stuck in an institutional structure that is too difficult to change. His first example is newspapers; the New York Times (in decline) versus TechCrunch (in the black).

That got me thinking: what would it take for me to go back to supporting a newspaper? Because, in truth, I love newspapers. I haven’t subscribed to one in about two years now, but I do love newspapers. I just don’t like getting one every day. First: they’re messy. The ink stained my carpet at the point where it met the front door, because the newspaper deliverer would drop it just so. It stained my fingers. They pile up and have to be transported somewhere and be disposed of. Their net worth isn’t sufficient for all the work I have to do to maintain their presence in my daily life. However: I love sitting outside on the patio of our favourite breakfast place with Jeremy, trading parts of the paper, skimming the stuff that is vaguely interesting, digging down on the stuff that’s very interesting, ignoring the sports section…I suppose we use the newspaper as our internet when we’re not online, or when being online would be too costly, too disruptive, or too awkward. Clearly it’s simply a matter of time before we have devices that will fill this desire handily: a roll of thin plastic, perhaps, tucked under an arm, an easy part of the breakfast scene, online for cheap no matter where we are, showing us only the articles that are at least vaguely interesting if not very interesting to us, with no sports section to ignore. Our device would have the upsides of the newspaper (no computers cluttering up the table, getting between the food and the people), but the cleanliness, customizability and immediacy of the internet. The future newspaper is a gadget.

Michael Nielsen says: “My claim is that in ten to twenty years, scientific publishers will be technology companies.” Could that be true of newspapers as well? Is the medium more valuable to us than the content? If newspapers managed to produce the device, instead of the content, or perhaps in conjunction with some content funded by the popularity of the device, could that be their future?

Beth Jefferson makes the case that librarians should carefully watch the decline of the newspaper industry, because our descent is similar and may come soon afterward. We, too, are less about our content than about the medium in which we present it. Our devices are buildings; while “the library without walls” meme has been going around for a while, the reality is that people still need space, and our spaces are popular as places to work, think, be and be seen. At the very least. When we move into things like ubiquitous interfaces, maybe our space becomes the medium, the device.

A recent report on libraries and mobile technologies suggested that we wait on developing mobile tech versions of our collections and services, a conclusion with which some disagreed. While I’m all for being cutting edge (bleeding edge, even), I agree with the report. We have no idea where this mobile thing is going. If we had gone all mobile three years ago (when we easily could have gone to town with it), the iphone would then have appeared, with its alternate internet of apps, and left our work behind. Mobile devices don’t tend to do the web well; rather than get better at it, we’re creating a new web for them, designed with their location-awareness, mobility, and lack of keyboards in mind. What if our big future isn’t in making our content web/mobile friendly, but in building ourselves into the e-newspaper or the e-book, letting you do “more like this” searches, hooking up bibliographies, keyword searches within (digital, mobile) text? Maybe the future of libraries is an app inside an app? What about blackberries and other smartphones? Are they going to get in on this app revolution? Are we going to have competing app universes to contend with? The data plan revolution (at least in Canada) is clearly coming, but when? And what will it bring with it? What restrictions will we be under?

I see that the legacy of “waiting” newspapers have demonstrated has not served them particularly well. But on the flip side, jumping in without getting the full lay of the land doesn’t have a good track record either. Maybe we’re all about to become technology companies, in some way or other.

Memed Digital

Since the start, I’ve taken issue with the “digital immigrants/digital natives” divide. From one angle, that division puts me and everyone I share my digital life with on the digital immigrants side, in spite of our very rich online lives. From another, it suggests that the undergraduate students I spend my days assisting are somehow “wired differently” than me, and are way more adept at technology than me. This just isn’t my experience in any way. I think it denigrates the amazing work of older net citizens and puts teens in a box with which they do not identify in any way, shape, or form. The generational argument just falls flat to me.

Listening to Don Tapscott’s recent Big Ideas lecture the other day gave me a new insight on the matter. Like all who advocate the idea of a digital generational shift, Tapscott was inspired by watching his kids. They’re geniuses! No wait, all their friends are geniuses too! This is the beginning of the problem; anecdotes are great, but they bias you in a particular way. In Tapscott’s world, it’s the kids who are living the digital life, not his peers. Therefore, it must be generational. There is nothing in his evidence that proves this; in fact, even the brain chemistry evidence he cites doesn’t prove it. Different behaviours, different activities can change brain chemistry; that’s not news. That’s the real story, not generations.

Different behaviours and activities can be more popular with certain age groups than others, which makes this “digital native” thing an issue of correlation, not causation. However: do we have evidence that more teenagers are interested in the digital life than any other generation? Gen X is small compared to the “millennials”, correct? In 1994 Wired predicted that by the year 2000 the average age of internet users would be 15. Why, then, was the average age of internet users in the UK 37.9 in 2008? As of right now, NiteCo lists the average age of internet users as 28.3421. I’m not suggesting that teens aren’t interested in the internet and in digital life; it’s just that it’s not primarily or only them. It’s not a factor of their age. This isn’t even like Elvis, when the kids loved the rock’n’roll and the adults hated it; it’s nowhere near that clear cut.

I think it’s more like a cultural meme. It’s a series of metaphors, of truths we accept. In the digital culture meme, there can be something called “digital culture”. An online community is a real community. You can have online friends, and they’re real friends. You can “talk” online using only text, and have it mean as much to you as a face to face conversation. You’re intrigued by new internet apps, not scared. You have a tendency to play with things digital and see how they fit into, or alter, your digital life. The idea of wanting to be connected pretty much all the time is not that strange or dangerous; “thinking with the internet” is a concept that makes sense to you. These ideas, among many others, make up the digital culture meme, and the people who subscribe to it are the digital natives. It has nothing to do with when and where you were born.

Maybe it’s like Stravinsky. When they first performed the Rite of Spring, people rioted. It was so foreign, no one knew how to respond to it. But eventually, the meme of radical music spread; eventually, the piece made it into Disney’s Fantasia. It wasn’t worthy of a riot anymore; it wasn’t different anymore. It wasn’t going to destroy society. It was just a new way of thinking. Did that start with a generation? Or just a group of classical music lovers? We didn’t consider that a generational shift, but perhaps it was. New ways of thinking, new ways to interpret culture.

Or are we trapped by old ideas about genetics? Old ideas, the ideas that filter through into society as truths. You can’t teach an old dog new tricks; real change comes from the youth. Is that so? For people like Don Tapscott, is thinking of the digital culture meme as a generational change a way to excuse himself, and his peers, and others who fear the meme, from participating? Is it reassuring to think of digital culture as something akin to built-into-your-genes and unfixable? They are just built differently, their brains are different; don’t feel challenged by these new ways of thinking and communicating. Don’t feel threatened. It’s not your fault that you don’t understand or won’t participate. That’s what’s right given your brain wiring. This is only a game for the young. This is the way THEY think, because they were born in this world. But no, it’s not like genetics in that sense; it’s more like epigenetics. Your brain is flexible, your genes are flexible depending on the choices you make, the options you have, and the circumstances you’re in. Accepting the meme and living digital can change your brain. It has nothing to do with your age.

Lauren and her Laptop

[youtube http://www.youtube.com/watch?v=EIS6G-HvnkU&hl=en&fs=1]

For the most part I’m not that interested in the ad war between mac and PC. I think the mac ads are cute, mostly because John Hodgman is adorable. There’s lots of talk online right now about this ad, saying that “Lauren” is an actor, she never went into the mac store as she said she did, and the PC she got is a piece of crap, etc. Dishonest marketing? Of course! What marketing isn’t dishonest?

When I first saw the ad I went to see what computer she got, and I saw that it was 8lbs and laughed.

I personally don’t care about the mac/pc war because in general I think mac will continue to produce good products regardless; they’re making plenty of money to keep themselves in business, and they’re still producing macbooks, which will be my computer of choice for the foreseeable future. I like to love my laptops, and I love using macs. I generally think that mac is good as a niche; they aren’t going to produce crap computers for the cheap audience, because they don’t cater to the cheap audience. I don’t really want to see them change that priority just to get the greater market share. So as a mac user, I like them having a healthy share of the niche market. Seems perfect to me. So if PC wants to create a persona who “isn’t cool enough to be a mac person”, that’s cool. I mean, if “Lauren” wants to spend 25K on her car but won’t spend more than 1K on a computer, well, maybe she’s really not a mac person.

But in musing about the “regular person” technique, a few things jump out at me. She wants a cheap, 17-inch laptop. Why 17-inch? Clearly not for professional reasons; the 17-inch computer she got doesn’t have the juice to do any video editing or whatnot. For watching movies? It’s funny, because things are getting smaller these days. Most of the students at my campus have laptops, but the ones who got the bigger ones generally don’t want to lug them around. (And Lauren’s laptop is 8lbs…she might as well have gotten a desktop, really, for the amount she’ll be willing to drag it around.) The smaller laptops are getting more popular because of their sheer usability as portable machines. Netbooks are all the rage because of their incredible portability; we’re entering an era where we’re finally savvy enough about our needs to not always get the biggest and best “just in case”.

Maybe that’s why this ad makes me laugh. Lauren wasn’t trying to get the biggest and best, like we used to, trying to make the most of her investment. She just wanted the biggest, for the least amount of money. Why? This request just doesn’t resonate, particularly not in our current computing climate. Big laptops are increasingly a pain in the ass for everyone who owns one. Currently, the only people who appear to really want a big laptop are professionals whose particular kinds of work require a big screen and a modicum of portability for presentations. I’m a professional who wants lots of screen real estate; I have an external monitor at work on which I extend my desktop. I wouldn’t want a 17-inch laptop. It’s just not practical.

The only laptop I regularly move around these days is my beloved netbook, which gets online and plays all my favourite tv programs for me while I’m on planes, trains and automobiles. I can sit at the bar and check my email on my netbook, and still have room for my dinner and my beer. I get more comments on that netbook than I’ve ever gotten on all of my macs put together. People love the idea of a usable, small, cheap laptop. If you’re a coolhunter, you’re probably looking at small, fast and cheap. You can buy gigs of space on USB drives for peanuts these days; why spend hundreds for a big internal hard drive? Small hard drive, small physical computer, big RAM, bloody great OS (Ubuntu, anyone?) No one’s that excited about a big laptop running Vista, no matter how cheap it is.

Apple is often a bit ahead of its time, sometimes painfully. They got rid of floppy drives well before it was a good idea (even I had to buy an external in the 90s). They took out the phone jack in the last few years too; that’s what pushed me to give my dad my old wireless router so I could still get online when I was visiting. They’re usually on the right track, but they pull the plug on things a tad too early. They keep you slightly uncomfortable with the things they declare as dead. But why is it that microsoft always seems to be, just as painfully, a step behind? Everyone else is talking about cheap, fast and small, and they give us an ad about cheap, slow and huge?

Ada Lovelace Day: Catspaw

It’s Ada Lovelace day, which is the day when we celebrate women in tech! This is an easy one for me: Michelle Levesque, otherwise known as Catspaw.

I first met Catspaw just before she started her first year at the University of Toronto. She was a scrappy teenager, with a history of sneaking into servers and testing their security without actually causing any damage. She learned code by writing it out by hand offline, and then testing it when she could get back online. She “hacked” her way into a private building in order to post up a giant-sized drawing of a stick-figure cat, just to show that hacking isn’t just a skill with code, it’s the skill of quietly finding and exploiting weak spots in security. She is scrappy, strong, intelligent, and incredibly gentle, thoughtful and considerate.

The most wonderful thing about Catsy is the way she thinks about technology. On the side of skills: she’s gifted. But that’s not what makes her special. It’s how she understands the role of technology in light of everything else. It’s the medium, not the message. Her interpersonal skills are excellent; she respects non-tech people, non-programmers, for the skills they bring to the table. She really listens to what people say. She absorbs ideas and really turns them around in her head. She sees all ideas as part and parcel of the project of changing the world. She will never, ever say to anyone: “You just don’t understand the tech.” She will instead improve the way she communicates so that you do. And she will work to make sure the tech understands you.

When she was offered a job at Google before she even finished her undergraduate degree, she wondered if she should even take it. In the same way she thinks about projects and code, she didn’t want to take the best-looking road first, in case there was another, better, more clean and sophisticated route to get there. She was still hacking her way through things, just like always.

What I admire most about Catsy is the way all the parts of her are merged into her work. She is not just a geekgirl. She is not just a programmer, just an engineer. She refuses to put her ideas or herself in a box; there is code in everything, and she won’t ignore the softer side of things because they don’t fit into the strict definition. This is why she’s able to be so much more than the sum of her parts; she merges them, she doesn’t deny them. She won’t fit the stereotype.

Catspaw is at the beginning of her career. I can’t wait to see what more she’s going to do. It will be amazing.

Emerging

So: new job title (“Emerging Technologies Librarian”). Definitely something that I wanted to see happen. I feel like it reflects what I actually do a lot better. I have pangs of regret when I think about instructional technology, but the lines are still blurry. Now I deliberately look at emerging technologies in teaching and learning, or maybe ones that haven’t quite emerged at all yet. Also emerging technologies as they apply to libraries in general, and our library in particular.

It’s exciting to have a job title that reflects what I’m already doing anyway, but it’s also kind of intimidating. I mean, keeping up with the trends was something I did as a bonus. Suddenly it’s in my job title.

So I was thinking about what trends I’m currently tracking, and I wonder how they fit into the whole “emerging” thing.

Second Life/Virtual Worlds. I’ve been on this one for a while, but I still think it’s emerging. Mostly because I think no one’s popularized the one true way to use virtual worlds in teaching and learning yet. In fact, there are so many wrong ways in practice currently that many people are getting turned off using Second Life in teaching. I’m still interested in it. I’m a builder, I’m interested in what you could use the environment for to build things and have students build things. A giant collaborative place filled with student-created expression of course content would be awesome. So I’m holding on to this one.

Twitter. I can’t believe I’m putting it on the list, but I am. Mostly because I’ve been talking about how great it is at a conference for some time now and I’m starting to see the argument come back to me from much larger places. People complain about what people twitter during events (“Too critical! Too snarky! The audience is the new keynote!”), but that’s pretty much exactly what would make things interesting in a classroom. I want to install the open source version and try it out with a willing instructor. I’m also interested in it for easy website updates, but most people would tell me that that’s a total misuse of the application. (Too bad!)

Ubiquitous Computing. I’ll say that instead of mobile devices. The hardware will come and go, but the concept of ubiquity for computing is fascinating. It’s coming in fits and starts; I want to see how I can push this one in small ways in the library. Computing without the computer. Ideally without a cell phone either. This is something I’m going to track for a good long while. I have this ubiquitous future in my head that seems like a perfect setting for a cyberpunk novel. (I might get around to writing it one of these days.)

Cheap Storage. As a rule hardware isn’t my area, but I’m interested to see what it means that storage capacity is getting so crazily cheap. If I can carry 120 gb in my pocket without even noticing it, what does that mean for computing in general?

Cloud Computing. This goes along with the cheap storage. Jeremy tells me we will never be affected by the cloud because we are a locked down environment for the most part, but I think he might be wrong. Even if we can’t fully employ the cloud because of security and legal limitations, I think the concept of cloud computing will sink into the consciousnesses of our users. We will need to be prepared to offer services as easily as the cloud can.

Netbooks. This fits in with cloud computing and cheap storage; if we can have tiny little computers with us at all times, massive amounts of physical storage and powerful applications coming down from the cloud, what does the world end up looking like?

Social Networks. Embracing the networks you have, on facebook, on IRC, on Twitter, on IM, wherever. Accepting that we are no longer a culture that uses its brain for information storage; we are processors, connectors. We store our knowledge in machines and in our networks. While social software may look like too much fun to be productive, those social networks are what’s going to scaffold us through most of the rest of our lives. Learning how to respectfully and usefully employ our networks as part of our learning (and teaching, for that matter) is an important skill.

There are some other pieces that are just never going to go away: blogging (for librarians!), wikis for everyone, IM: I think we’ve finally reached a point where we can intelligently choose the best tool for the task at hand from an incredible range of options. So I think part of the emerging trend is to use what’s best, not necessarily what’s most powerful, most expensive, or most popular. Things like twitter and netbooks are evidence of that: sometimes you don’t need all the bells and whistles.

So that’s my emerging update of the moment.

Best. Era. Ever.

I was thinking, while reading various articles about twitter, and interactive learning, and participatory culture, and fandoms, that I’m so glad I live when I do. I’m glad I was able to be around to see the birth of things like blogs and virtual worlds and all kinds of interactive applications of the internet. So much is still unformed, undefined; the blessing and curse of the early days of the social internet is that we get to do the defining. We don’t have to buck a trend; we get to try out the new stuff and give it meaning for the wider culture. We get to be as imaginative as we can.

That’s so cool.

Wireless in the Classroom

My campus is planning the construction of a building dedicated to instruction; state of the art classroom technology, lots of computers, a space where a large class can take a monitored online test. There is, I’m told, a debate about whether or not to put wireless access into the building. Many instructors dislike the idea of students being online while they teach; “being online” means “not paying attention”, after all. The internet is fun and games, and learning is meant to be work.

No, that’s harsh, isn’t it.

Being online means chatting with your friends and goofing off. You shouldn’t be chatting with your friends and goofing off while you’re sitting in a lecture. It’s not respectful.

Except: what about people like me, who get so tied up in knots about the subject at hand that I need to spill my ideas out to SOMEone, SOMEwhere, and often use IM to channel my over-enthusiasm? (I think Jason took all my library school classes with me, virtually, through my constant stream of IMs.) What if that “chatting with friends” prevents someone like me from interrupting and turning your lecture into a one-on-one discussion? Or what if the “chatting with friends” helps a student refine her critique? Or keeps her engaged, because otherwise her mind wanders, and reporting what she’s hearing in the classroom to a trusted and interested friend helps her retain the knowledge better?

What if that trip to wikipedia, or google, helps clarify something? What if that internet activity is related to the process of learning the material?

Why does the instructor get to make the decisions about how students are going to learn?

Why are we more interested in optics than in allowing students to be adults and choose their own learning methods?

Why don’t we trust students?

Why do we not make use of the amazing resources available online while we’re teaching? Why not allow students to use virtual reference desks worldwide to get questions answered for the class, or check UN stats, or otherwise contribute meaningfully to the lecture? Why not harness the fact that students like to do something other than sit still in a room for three hours, and ask them to go forage for elements that can enrich everyone’s learning experience? Why not be more interactive? Why not share not just expertise but a true love of seeking out information and turning it into knowledge? Why not expect the learning part to happen not just after class, but in class as well?

Why not allow students to take notes collaboratively, on a wiki, or with Google notebook, or other, multi-cursor collaborative software?

Why not allow students to twitter their questions and ideas (you can easily monitor that)?

Why not give students a chance to react?

I’d like to throw together a video about why wifi in the classroom is a good thing. If you’ve got an opinion, go below and record me a little video describing your ideas, experience, anything. It doesn’t need to be long. I’ll mash them together into a video and upload them to YouTube. Please help!

Twitter and the Library

My latest all-consuming project is working to redesign/rework/completely renew our library’s website. It’s still early days, but there are certain lessons I’ve learned from my last all-consuming project (introducing courseware to the campus), chief among them: you can never communicate too much. Even when you think you’re communicating enough, you probably aren’t.

From the worst days to the best days rolling out software to faculty and students, no one ever accused me of giving them too much information. While the internet is a very social medium, it can also be a very isolating one at the same time. When people are trying to get from point A to point B using some software that you answer for (even if you don’t control it), there’s really no way you can get too far into their personal space. They want to know that you’re there, that you’re anticipating their questions, that you’re aware of the problems they’re encountering. I never, ever want to go into downtime or unexpected system backfires without the ability to send out a message saying, “I feel your pain; here’s what I’m doing to help solve the problem. I’ll keep you in the loop.” It’s a lot easier to cope with problems online when you know someone somewhere is working on it.

And this is primarily where I have a problem with the static library website. The first page always stays the same; it’s generally got all the same information on it. This is good when you’re trying to teach people where to find stuff, if you think of your website as a static structure that should be learned. But it’s terrible if you consider your website your library’s (non-expressive) face.

I think there are two ways to think about a library website: it’s either a published document (heavily planned and edited before it’s published, published, then referred to), or it’s your communication tool. As a communication tool, it’s not published in the same way that books are published. It’s available, it’s public, it’s indexable, but it’s not static, it’s not finished. I kind of wonder if we should get rid of the term “publish” from these kinds of online tools. Sure, you put stuff online and it’s in wet cement (as Larry put it best), i.e., likely to be around forever, but our concept of publishing suggests a kind of frozen quality, a finished quality. To me one of the best things about the web is our ability to leave nothing untouched. A communication tool, rather than a published document, should never look the same twice. It should always be telling you something new, informing you, reflecting the real people behind it.

So as we start laying down the foundations for a new library website, I keep thinking of ways to pierce it through with holes through which the real workings of the library, the real voices of the people who work there, can come through. I want students to get a sense that the library isn’t a solid object; it’s a place filled with people, people who work very hard to make things better for them, at that. People working to make sure the collections match the needs of their instructors and their course expectations, helping them with assignments, helping them find the resources they need, helping them use the software they need to use to succeed. I’d like to see if we can use social software to help make that work more transparent to students and faculty alike. Librarians do good work; everyone should see that work.

The first, most obvious way I thought about making this transparency and easy communication possible was through blogs. In my dreamworld, these long thought-pieces about technology and libraries would go on a library blog, not my personal one. But I’m not the only one thinking about things like collections blogs with discipline-specific categories, or reference blogs. Once this information is shared and online in an RSS-able format, we can shoot it in all kinds of useful directions. And then I started thinking about the things we know right now that students would like to know: busted printers, software problems, unavailable computer labs, courseware downtime. How busy the library is. (Ours is more often packed to the gills than not.) The obvious things. We know about them before the students do: isn’t there some quick way we can tell them?
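
To make “shoot it in all kinds of useful directions” concrete, here’s a minimal sketch in Python of one such direction: merging several library blog feeds into a single date-sorted news stream for the homepage. The feed URLs are hypothetical placeholders, not real addresses:

    # Merge several library blog feeds into one "library news" stream.
    import time
    import feedparser

    # Hypothetical feed URLs, for illustration only
    FEEDS = [
        "http://library.example.edu/blogs/collections/rss",
        "http://library.example.edu/blogs/reference/rss",
    ]

    def timestamp(entry):
        """Sort key: publication time, or 0 when a feed omits it."""
        parsed = entry.get("published_parsed")
        return time.mktime(parsed) if parsed else 0.0

    def combined_news(limit=10):
        """Gather entries from every feed and return the newest first."""
        entries = []
        for url in FEEDS:
            entries.extend(feedparser.parse(url).entries)
        entries.sort(key=timestamp, reverse=True)
        return [(entry.title, entry.link) for entry in entries[:limit]]

    for title, link in combined_news():
        print(title, "->", link)

The same few lines, pointed at a single category’s feed, could populate a subject page instead; that flexibility is the argument for getting the content into RSS in the first place.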

So then I got to thinking about twitter. Twitter for immediate messages. It doesn’t take up that much space, embedded on a page. And it keeps everyone to 140 characters. Like facebook status messages, but about the systems you’re trying to use. You can find out if they’re having a bad day or not before even trying to wrestle with them. I like it. Transparency, a little personality, a little humanness, and lots of communication.

We’ll see how it goes.

The Value of Networks

To say that networks are important is to state the blindingly obvious. Networks have always been important, from the medieval European confraternity to 20th century golf courses. Now, most people I know go to conferences partly because of the conference program, but mostly because of the extra-conference networking. The conversation that you have between speakers is often more valuable than whatever the speaker is saying. The best thing the speaker can do for you, as a conference attendee, is to provide the common ground, topic, and language to allow a networking conversation to open up around you; even a terrible speaker, one who says things with which everyone in the room vehemently disagrees, can have this effect.

Is this a radical statement? Not to say that there isn’t value in hearing about the status of someone’s research, but that’s what journal articles and even blog posts are for. I don’t go to a conference specifically to hear about that sort of content; there are cheaper means to do so. I go to meet you, to engage with you, and to hear about what others think about what you’re doing while you talk about it. I’m there to meet with others over the common ground of our interest in what you have to say. A gathering of like minds: I’m there to get the whole collection of ideas. This may be why unconferences and camps are gaining popularity; they, at least, are upfront about where the value in a conference lies. Sure, the speakers are important, but so are the conference-goers. Everyone has something to contribute, and there are many, many means of doing so.

I feel like we acknowledge the importance of networking, but do our best to pretend it’s not true at the same time. A very dichotomous relationship to the concept of the social network: networking is everything, but to speak its name is anathema. A colleague of mine at the library tells me that as a Computer Science undergraduate student, the word they used for cheating was “collaborating”. There’s lots being said about networked intelligence, but if someone is looking at facebook or using MSN or AIM at work, they’re seen as being unproductive. (Not to say they’re de facto being productive just because they’re using a social networking tool; that’s unclear without more information.) Networking is supposed to be a quiet activity that you do on your own time. It’s too fun to be work.

I got to thinking about networks a lot lately when I quizzed a bunch of friends on a favourite IRC channel about the differences between various CMS platforms, and then arranged to bring in an old friend to consult with us via AIM. While many people feel they need to hide their networking efforts professionally, I’ve always opted to embrace mine, and I intend to do so even more in future. My network, constructed out of the people I know whose knowledge and experience I trust, is smarter than I am. As with all information, I must evaluate what I get from my network, but I have the context available to do so; my friend at Google knows lots about web search, but not so much about Google docs, and while one of my friends in California is always on to the new thing, his quick dismissal of popular applications means his predictions aren’t necessarily on target. This is a new angle on an old concept. Social networking applications give us the ability to dig for context on our contacts when using our networks to help us form opinions and make decisions: how we know these people, what we know about them, where their experience lies, and who they know. These things can have an impact on the way we interpret information gleaned from them. You can actually be on facebook and not be wasting your employer’s time! (Who knew!)

It’s a give and take, of course. I’m not just talking about quizzing my networks when I have a question (though I mean that as well). My networks give me things to think about all the time; they shape my thinking, point me in new directions, give me a sense of where things are moving. They show me where the trends are, what I should be paying attention to. The network imparts knowledge not only in the direct sense, but also through ongoing participation. We are a participatory culture online: web 2.0 is pretty well ingrained into us at this point. We talk back. It’s the talkback that turns around and alters my brain chemistry.

I’ve been cultivating my networks for years. Because my personal life and my professional life cross at so many points, it’s serendipity that my social network can be so valuable to me in a professional capacity. One of the most exciting things to discover is that an old connection from another community is bringing a new vision and new interpretation to my wider network.

In short: I’m starting to think seriously that it’s part of my professional responsibility to read twitter, my feed reader, Facebook, etc. in order to be shaped by my network.

Meanwhile, the professional speaker circuit doesn’t like this. Attending a talk, as I’ve outlined, has a two-fold impact: the obvious one, gaining insight from the speaker, and the hidden one, where I am further inspired, provoked, and shaped by my network in light of what the speaker is saying and my interpretation of it. That’s my true professional development, at the crossroads of all those things.

In probably 80-90% of most business and conference settings speakers have a message to give – at keynote speeches and large company events – the large audience venues. It is not a groupthink or collaboration.

If this is what the speakers of the world think, that most of the time we are there only to absorb their message without interpreting it and reinterpreting it on our own, that the bigger the event the more we should shut up and absorb, I’m afraid they’re talking out of both sides of their mouths and supporting an educational system that just doesn’t work.

In this post, Bert Decker suggests that it’s rude to IM during a conference, but presumably it’s not rude to jot down notes. In fact, isn’t jotting down notes the best sign in our culture that you’re paying attention? If you walk into a meeting with no paper to jot down notes on, people tend to presume that you don’t think anything of value is going on. This is considered unprofessional, and you will be pulled aside and given a talking to about it. Always come prepared; always carry something to write down notes on. That’s how you demonstrate respect! Rudeness is in the eye of the beholder on this one; if people are tweeting during your talk, perhaps you should take it as a compliment. They feel there is something in the talk to record and share with their network.

The way we create and feed our networks is to participate in them. We share our thoughts on the ideas that come to us; we build systems of thought and method based on the interplay between primary, secondary, and tertiary information. The tweets of the guy next to me during a big keynote are my secondary source, the thing that provides more voices and opinions to the information I’m gleaning.

Constant networking is impossible, and it’s important to know when it will help you and when it will distract you. But while most traditional folks like to take notes when they attend keynotes, I like my notes to talk back to me at the same time. If you’re not ready for the rich dialogue that it allows me to enter into with you based not only on my own experience and ideas, but also those of my network, I’m not sure you’re the right person to be giving that keynote in the first place.

My network is valuable; I bet yours is too.

Thick Tweets

Thick Tweets

Another follow-up to a tweet, posted in response to David Silver’s attempt to apply Geertzian theory to twitter:

http://tinyurl.com/bwxrac bizarre categorization of tweets. With a link, this is “thick”
2:45 PM Feb 25th

I appreciate someone trying to apply thick description to tweets, but I’m not certain David Silver hasn’t missed the mark a bit here.

First: isn’t it frustrating that every time we experiment with web applications, there’s someone somewhere trying to tell us how to do it right? Case in point, back from 2005: “I just spent fifteen minutes clicking through about 20 Xanga sites and I CAN’T FIND ANY BLOGGING GOING ON! Is it me?” (my response). We like these applications to fulfill a pedagogical role, often to raise the profile of the application in the eyes of other academics and department chairs. Current case in point: some researchers/educators using Second Life don’t want to be associated with the “debauchery” of the Second Life Community Conference, and want to break out on their own in order to set the “right” tone.

So now we get to the “right” and “wrong” kinds of tweets. This is a challenging thing, since a tweet is only 140 characters long. Silver encourages students to “pack multiple layers of information within 140 characters or less,” and those layers are defined primarily by links, but also by shout-outs and the mentioning of names.
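
Just to make that rubric concrete, here’s a toy sketch of what grading tweets this way boils down to. This is entirely my own invention for illustration, not anything Silver proposes; the point is how mechanical the “layers” turn out to be:

```python
import re

def thickness_score(tweet: str) -> int:
    """Toy heuristic: one point per 'layer' the rubric seems to reward."""
    score = 0
    if re.search(r"(https?://|tinyurl\.com/)\S+", tweet):
        score += 1                                # contains a link
    score += len(re.findall(r"@\w+", tweet))      # names mentioned
    score += len(re.findall(r"#\w+", tweet))      # hashtag shout-outs
    return score

# A pithy joke with no links scores zero, however good it is:
print(thickness_score("Lent? That's the musical with all the AIDS, right?"))  # 0
print(thickness_score("http://tinyurl.com/bwxrac bizarre categorization"))    # 1
```

By this scoring, a perfectly pithy joke comes up empty.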

I don’t think thick description is a good way to evaluate a tweet. A tweet’s value isn’t in how much information it’s conveying; it’s in the basic value of the information itself. Personally I quite like funny tweets, regardless of whether they’ve got links in them or not. The context of tweets doesn’t come from the tweet itself; it comes from the environment around the tweet, the news of the day, the location of the user, the user’s other tweets, the user’s blog, flickr stream, employment history, and the live events the user is attending. Tweets are ultimately snippets that don’t necessarily make sense in isolation. I’d suggest that to evaluate them individually is to miss a great deal of their “thickness”.

Some of my favourite tweets:

“Great design comes from an artistic or cultural impulse, not a focus group.”
11:06 PM Jan 24th from web cloudforest

Is there anything more newbish than using Internet Explorer? Question still valid after 12+ years of asking it.
2:31 PM Feb 27th from TweetDeck, BeCircle

Overheard this week: “Lent? That’s the musical with all the AIDS, right?”
3:58 PM Feb 27th from TweetDeck, RJToronto

Still ignoring Facebook 25 things requests, but copying my wife’s idea: I’ll gladly go for coffee/beer and answer them in person instead.
4:03 AM Feb 27th from web, oninformation

These tweets don’t really fulfill Silver’s “thick” requirements, but I find them interesting and valuable. They give me things to think about. How do you quantify the pithiness of 140 characters?

Audience

Audience

I wanted to follow up on and extend a recent tweet:

At what point does online sharing become performance? Is it always performance from the start, or does it morph as people start to watch?
11:21 PM Feb 21st from web

I was thinking about the fact that I’m flying out to the Drupal4Lib unconference/camp at the Darien Public Library in Connecticut today, and each time I go to a conference where lots of ideas are flying around me, I try to capture the ones that really resonate with me on Twitter. I also use Twitter to respond to speakers when I can’t interrupt them. I use it particularly when I think my opinions will be unpopular or not particularly well accepted. Now that there are a few more people following me on twitter, many of whom I respect a great deal, I’m a bit hesitant to tweet as freely as I want to. As often as I want to. And that hesitation bothers me.

Sure, perhaps I need a little hesitation before I go publishing my ideas and responses and thoughts to the world, right? But I don’t like it. I like sharing, but I’m ambivalent about the general concept of an audience.

I guess deep down I don’t think about online sharing as sharing with an audience until I’m sharing with X number of people. That number isn’t something I’m consciously aware of; I just sense that there’s a tipping point in there somewhere.

I have permanent status now (i.e., tenure), so I’m happier to share this fact: back during the process of dropping out of a phd program in history, I got deeply involved in a fandom community. I wrote a lot. I wrote somewhere around 400K words of fanfiction in the space of about 9 months. It was escapist, particularly in a world where the characters were all generated by someone else and thus had nothing to do with the devastating and identity-altering reality of my existence. It was nice to inhabit a space where I didn’t exist. Call it a coping mechanism, but I learned more about social networks and technology in aid of collaboration and creativity in that space than I did anywhere else. I have a deep affection for fandom communities and I still try to follow their meanderings. One of the things I learned as part of a fandom community was the power of an audience.

When I started writing in fandom, I did so in total obscurity. I threw myself into writing, something I hadn’t done in years and really enjoyed. It was like coming out of the darkness into the sunshine. It was incredibly therapeutic. I had been through some difficult times: a terrible break-up, heartbreak, depression, hatred of my program, loneliness, loss of identity. A lot of old feelings resurfaced. Writing was excellent therapy. I had a blog in my own name at the time, but I started a new one with my fandom identity on Livejournal, which was (and still is) the place where fandom congregated. I loved my livejournal. I loved talking about writing process, about ideas, scenes, character motivations; I loved writing about writing. It was profoundly internal, profoundly navel-gazing, and so much fun. I needed to be inside and outside at the same time; I needed to sort out so much that I didn’t want to face in myself. I can’t express how useful this process was; not just writing the fanfiction, but processing the whys and hows and sharing ideas. I had no idea how much of myself I was processing with it. (Easier to see in hindsight.)

My lengthy and frequent blog musings were okay at first. Not at all abnormal in a fandom community. But then I started to attract an audience. I was writing slash (gay romance) fiction revolving around a very popular pairing of characters, so there was a wide audience of readers for what I was so feverishly producing. Fanfiction writers tend to attract an audience, and they generally want to. It’s great to get feedback on what you’re writing. And that feedback is instantaneous. When I finished and posted a story, I would have responses to it within 10 minutes, and 60 or 70 responses within a half hour. (This is not a record: people writing more mainstream fanfiction with heterosexual pairings got far, far more responses than I would.) Many people in fandom have no interest in writing for its own sake, but write to be a part of the community. Sharing writing is, I would argue, a form of gift exchange. Those of us who wrote a lot were presumably owed a lot in return; the return is feedback, recommendations, reviews, and attention in general. For someone like me, nose stuck firmly in my own navel and there just for the sheer therapy/fun of it, this economy completely evaded my notice. I was getting more and more attention for my writing, albeit only from a segment of the fandom itself. I wasn’t at the top of the food chain when it came to attention-getters, but the attention I received was certainly nothing to sneeze at. By this I mean a registered audience of a few thousand, and an unregistered audience of many thousands more. Not the millions people get with a viral youtube video in 2009, but a few thousand (8 or 9) is quite a bit for any normal individual, particularly back in 2001.

With a fairly large audience, the nature of my livejournal changed. While I still wanted to talk about process and ideas and all this internality that brought me to the community in the first place, somehow it wasn’t okay to do so anymore. With the podium I had, it was understood as incredibly selfish of me to only talk about myself and my own ideas. Suddenly it became important for me to talk about other people’s work at least as often as my own (ideally more often). Now that I think of it, maybe I’ve got this gift economy thing all backwards; what if the economy has nothing to do with the writing and everything to do with the attention? Increasingly I felt pressure to give back: more comments, more reviews, more shout-outs and recommendations. My livejournal couldn’t be my private writing space anymore. It now had to be more outward-looking. I had to give back to my audience; I had to give them the attention they were giving me. I didn’t have the space to just have fun with it anymore. Fun had to benefit others now; I had already got my share. Others, who didn’t have the attention I had, could do what I used to do: write down their thoughts and share ideas with their friends. It was silencing and sad.

A friend of mine had many times the attention that I got, and I saw how it crippled her public posting. Her livejournal went from being, like mine, a place to natter on about whatever she was thinking about to a means through which to inform her audience of something (updates, teasers for her next chapter, etc.), to discuss other people’s work and the larger themes of the community, and to weigh in on the “right” side of any debate. It became public property.

Perhaps fandom is a unique entity when it comes to relationships with online audiences, but I don’t think it is. This is why I objected to ranking librarian blogs when Walt proposed it. My reaction was over-heated, but this is where I’m coming from. I’m not a high-profile librarian blogger, and I’m planning to keep it that way. I like to be able to muse about whatever I feel like musing about, be that Second Life, or cancer, or the book I’m currently reading, or random conversations with my friends. I want to be able to use twitter in the way that fits best with my personality, too.

So in response to my own question posted above: I think there is a difference between sharing online and having an audience. Sharing online is fun and productive; I love using twitter to record my reactions to things and my epiphanies, because I like to share them with friends and family, and I like to get feedback from people with similar or radically different opinions. I like their perspectives to shape my epiphanies as they’re being formed. I find that brings my thinking to a higher level. But somehow there’s a line in the sand there, and I’m not sure where it is, between sharing with a group and having an audience. I find the audience gratifying, but oppressive after a certain point. I don’t have the wherewithal to rise above the expectations of a full, demanding audience. Good thing I can twitter and blog in gentle near-obscurity. That’s just how I like it.

Edited to add: Hmmm. This is a pretty good example of what I’m talking about.

Real World Virtuality

Real World Virtuality

I started reading Spook Country last night before bed, the first chapter of which ends with a virtual world/real-world mashup that has the main character standing in front of the Viper Room in LA looking down at a dead River Phoenix on the sidewalk in front of her. Leaving aside a whole other post I could write about the significance of that particular moment to people born around when I was, it made me think about gaming and ubiquitous computing.

I suspect most of what I’m about to say is passé to anyone who thinks seriously about gaming and the internet, but it was a fun revelation for me, at least.

When I first started talking out loud about ubiquitous computing in the library after the Copenhagen keynote about sentient cities, our chief librarian wilted a little. “We just built this place!” she said. But I think ubiquitous computing is not going to come from the walls at all; I think it’s just going to use the walls to interface with mobile computing.

Okay, imagine it: you have VR goggles. You put on your goggles and you see the world around you, but also the game space. You have already entered the usernames of your friends, who are also playing this game with you. You are synced up to GPS, so your goggles know where you are in relation to your environment. You have chosen a genre or theme, but the game is constructed on the fly by the system based on the environment you’ve chosen, the number of civilians in your view, weather information, and variables drawn from the user profiles of you and your friends.
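
To put that in more concrete terms, here’s a minimal sketch of what the inputs to that on-the-fly construction might look like. Everything here is hypothetical; the names and structures are mine, invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PlayerProfile:
    username: str
    preferred_genre: str          # feeds the genre/theme choice

@dataclass
class GameSession:
    """The variables the system would improvise a game from."""
    theme: str
    gps_bounds: tuple             # (lat, lon) corners of the chosen play space
    weather: str                  # pulled from a live forecast feed
    civilian_count: int           # bystanders in view, to route the action around
    players: list = field(default_factory=list)

# A fantasy game in a field by a river, built from whatever the sensors report:
friends = [PlayerProfile("me", "fantasy"), PlayerProfile("pal", "fantasy")]
session = GameSession("fantasy",
                      ((43.660, -79.395), (43.665, -79.390)),
                      "light rain", civilian_count=12, players=friends)
```

The point being that nothing in the game is authored in advance; it’s all composed from those variables at play time.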

So say you pick a large field by a river for your game space. Maybe you do a walkthrough of it first with your goggles on, so that the system can add more detail to the GPS and map data; that data would go into a central repository for geographical information. The system can then generate characters that wander past, hide behind bushes, sit in trees, etc. You and your friends can all see the generated characters because of the goggles, so you can all interact with them simultaneously. The characters might be generated by the game designers, or they might be created by users, like the Spore creature creator, with backstories and voices all supplied by fans, vetted by the designers. You and your friends can be costumed by the system; when you look down at your own (bare) hands, they might be wearing chain mail gloves and carrying a sword.
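
And here’s a guess at what the walkthrough step buys you: scanned features become anchor points the system can hang generated characters on, so everyone’s goggles render them in the same physical spot. Again, a toy sketch of my own; none of this is a real system or API:

```python
import random

# Hypothetical features contributed to the shared geo repository by a walkthrough scan.
scanned_features = [
    {"type": "bush",  "pos": (43.6601, -79.3939)},
    {"type": "tree",  "pos": (43.6612, -79.3927)},
    {"type": "bench", "pos": (43.6607, -79.3931)},
]

COVER_TYPES = {"bush", "tree"}   # places a character could plausibly hide

def place_characters(features, characters):
    """Pin each generated character to a scanned cover point, so all
    players' goggles agree on where it is in the real world."""
    cover = [f for f in features if f["type"] in COVER_TYPES]
    return [{"character": c, "at": random.choice(cover)["pos"]} for c in characters]

placements = place_characters(scanned_features, ["wandering bard", "goblin scout"])
```

And since the scan goes into a central repository, presumably the next group to play in that field inherits all that detail.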

Or say you pick a city block as your game space; the system connects to google map data, and then also takes in information about all the people around you and uses them as part of the game. It could turn the city into a futuristic place, with flying cars and impossibly tall buildings. Running around the city, chasing aliens, avoiding civilians, being a big ole’ gaming geek in full view of the public. Awesome.

So now: the virtual library could come with a pair of goggles and a good series of fast databases.

That would be pretty cool. Just sayin’.