
iPhone: week one


For all my tech-geekery, I’ve never had a smartphone. There hasn’t been a really good reason for this, aside from a vague attempt at fiscal responsibility and the reality that I spend my life essentially in one of two wifi zones (home, work). I figured I didn’t really need a truly mobile device that connected to the internet. Couldn’t I have my (short) commute time away from it? It just never seemed that important. I’ve been following the developments, and while never anti-smartphone, I’ve just never been a good phone person. (At least: not since I was 16 and on the phone constantly.) There are so many other interesting ways to communicate: talking on the phone just seemed like the least imaginative. I don’t have a home phone, and my work voicemail is something I have to remind myself to check.

The internet is, largely, my passion in life: communication, productivity, creative thinking with internet tech, that’s what I do for a living. It’s also something I enjoy in my off-time; I’m genuinely interested in web innovation, and my explorations and thinking don’t stop when I leave the office. I understand the app revolution, and while I’m on the side that believes apps are only temporarily in power and the mobile web will eventually take over, I’m intrigued by the apps and the interesting things developers and users are doing with them. So you’d think I’d have been on this smartphone thing ages ago, but no.

In spite of my obvious interest in all things online, it wouldn’t be fair to classify my web experiences as addictive or compulsive. I’m absolutely okay with pulling the plug at pretty much any time. I can take a long road trip without the internet, and I don’t miss it. I love to read, I love to talk to people, I love to sit and think and muse. Contrary to the “information overload” debate (which I think is code for “I procrastinate and the internet makes it too easy”), I don’t find my connection to the internet either overwhelming or demanding. It’s a give and take. If I don’t want to pay attention, I don’t. When I want it to entertain me, or confuse me, or engage me and make me think in new ways, it does. So while I thought the smartphone thing was pretty cool and clearly an intriguing and useful development, I didn’t actually have one of my own.

Until last week, that is. I finally got on the bandwagon. And I’ve been diving in head first. No holds barred, no panic about the 3G usage. Not in the first week, at least. I gave myself permission to be gluttonous with it, to roll around in it and see how it felt.

The only time prior to now that I thought I’d like to have a smartphone is when I’m out to dinner. Not because my dining companions have been sub-par, but because I have an ongoing fascination with food history. I like to know how the composition on my plate came to be, and what historical events I can credit for it. This is easy with things like potatoes and tomatoes (“New World”, obviously), but what about garlic, carrots (did you know medieval Europeans ate not the orange root, but only the green tops of carrots?), bean sprouts, onions, cows, pigs, chickens, saffron, pepper, and the rest? It’s really the only time I’ve felt the lack of the internet. I want to look up some historical details at very odd times. I figured a smartphone would be helpful for that. (I can’t really carry around a comprehensive food history book everywhere I go, can I?) Filling specific information needs: in spite of my own certainty that search is basically dead, in the back of my head I figured this is how I would use a smartphone. I was not right.

But it’s been different than I expected. First, and most obvious, I suddenly always know when I have email. I bet people hate that. Email is my second least favourite means of communication, so putting it at the front of the line has mixed results. As I said, I’m reasonably good at not feeling pressure to look at anything when I don’t want to, but the thing pings when I get new email, and it makes me curious. But even in the first week, I don’t look every time. I didn’t stop my conversation with my mother when I heard it ping. I did, however, answer a question from an instructor while on the GO train back home on Saturday. If you want to be distracted, access to the internet via smartphone will certainly act as a decent distraction.

My best experience with it so far has been a trip to my home town, Guelph. It’s early October, and suddenly this week autumn appeared in full colour. If you’ve never experienced a southern Ontario fall, you’re missing something great. The cool temperatures at night mixed with the remaining warm days turn out a crazy quilt of colour across the landscape. It’s only when there’s enough cold that you get the fiery reds and deep oranges. We’re in a banner year here, and on the bus on the way to Guelph I saw this awe-inspiring riot of colour out the window. Purple brush along the side of the road, a scintillating blue sky, red, orange, yellow and green leaves on the trees; this is the kind of thing that makes me happy to be living. The kind of thing I want to share, just out of the sheer unbelievability of it. It’s incredibly ephemeral, these fall colours, so capturing them and sharing them has additional appeal.

So this phone I had in my hand, it has a camera. This was actually my first experience using it. And I discovered quite by accident that I could snap a picture and then post it to twitter with a few swipes of a finger. So there I was, first on the bus, then walking down Gordon St. in Guelph, 22-degree weather, the sun warm on my skin, and while I was away from home, away from my computer, I was sharing my delight in the beauty around me, capturing it and sharing it effortlessly. It was one of those days when I felt like I could hardly believe the intensity of what I was seeing, but I was able to share it, record it, all as part of the experience. I’m not a great photographer: mostly I leave the camera alone and just experience my life without documenting it. But sometimes, documenting it is part of the experience, adds to it. So, in my 30-minute walk from the University of Guelph to my sister’s house, I shared the colours around me and saw the responses from my friends and colleagues far and wide. I was no less on the street, no less engaged. But I was also interacting with the world via the internet. I loved it. I was in two places at once. I had voices in my head. I was connected in two places. It reminded me of Snow Crash.

I’m sure this is no revelation for anyone who’s already had a smartphone all this time, so mea culpa. I was aware of this sort of ambient/ubiquitous computing; I just hadn’t had the chance to experiment with it myself yet, to see what it really feels like. I think the interface is still a bit clunky, too limiting, but the touch screen is getting closer to effortless. What’s wonderful about it is its seamlessness: picture to twitter, responses, all so easy to see and engage with. And engaging online isn’t even really drawing me away from my real life experience. It’s just a part of it. I’m not thinking about cables or connections or keyboards. Technology is getting to be close to invisible, just present and available.

As I sat on the train, reading fiction online, leaving comments, checking out links on Twitter, reading educause research, answering work email, I realized that I would never be bored again.

A few months ago I read a response to the iPad from someone who returned his for this very reason: the threat of never feeling bored again. Boredom as critical experience, necessary experience. I can understand that, but of course it’s all in the decisions that you opt to make. We are invariably drawn to the shininess of instant gratification via the internet, of course. But even that can get boring, eventually. You do reach a point where you’ve read it all for the moment, and you’ll have to wait for more to appear in the little niche of reading that you do. Does that force you to branch out, find more and more interesting things? That’s not necessarily a terrible thing. Does it allow you to avoid reflecting, being with yourself in a place?

One of the very early criticisms directed at the iPad was that it was a device for consumers, on which information is merely consumed, not created. That jarred me, as it felt untrue and frankly a bit elitist. Creation doesn’t just mean writing software or hacks. Creation can be writing, or drawing, or singing, or sharing reactions and thoughts. But I see now, with both the iPhone and the iPad, that this criticism is both true and false. It’s true that these devices make it very easy to consume content created by others; it’s easier to browse and read than it is to write, for instance. The keyboard is pretty great, but it’s not as easy to use as the one attached to my laptop. But what I choose to browse/read/consume is still my choice; just because it’s on an iPad doesn’t mean that it’s all commercial content, not while the web is as relatively free and easy to access as it is. Most of my reading on these devices is not sponsored and not created by mainstream media. I’m not just reading the New York Times. I’m reading blogs and archives, primarily. And why are we so anti “consumer”? We need to consume the creations of others as part of a healthy dialogue, after all; there is a level of pop consumption that’s a good thing. Neither of these devices is as simple as a TV or a radio where there is a clear creator and a clear consumer. I am also a creator on these devices, a sharer of experiences, of thoughts and ideas. My experience walking down the street in Guelph on a beautiful day was a case in point; I was clearly a creator, sharing what I saw, engaging with others. That’s not a passive experience. Sitting on the train reading someone’s review of a movie, or a fictional take on an old idea; I’m consuming as well. In places where I couldn’t do so before.

It feels like there are fewer spaces in my life. The level of connection I’m currently experiencing seems to make my days blend together into one long back-and-forth with any number of people. Is this less downtime? Downtime transformed into time spent in this otherworld of communication and information? Am I reflecting less?

I started with a bang, so I guess it remains to be seen how much I keep at it. Will it get old? Will I return to my former habits, with less time testing the limits of my devices? It remains to be seen.

Emerging


So: new job title (“Emerging Technologies Librarian”). Definitely something that I wanted to see happen. I feel like it reflects what I actually do a lot better. I have pangs of regret when I think about instructional technology, but the lines are still blurry. Now I deliberately look at emerging technologies in teaching and learning, or maybe ones that haven’t quite emerged yet. Also emerging technologies as they apply to libraries in general, and our library in particular.

It’s exciting to have a job title that reflects what I’m already doing anyway, but it’s also kind of intimidating. I mean, keeping up with the trends was something I did as a bonus. Suddenly it’s in my job title.

So I was thinking about what trends I’m currently tracking, and I wonder how they fit into the whole “emerging” thing.

Second Life/Virtual Worlds. I’ve been on this one for a while, but I still think it’s emerging. Mostly because I think no one’s popularized the one true way to use virtual worlds in teaching and learning yet. In fact, there are so many wrong ways in practice currently that many people are getting turned off using Second Life in teaching. I’m still interested in it. I’m a builder; I’m interested in how you could use the environment to build things and have students build things. A giant collaborative place filled with student-created expression of course content would be awesome. So I’m holding on to this one.

Twitter. I can’t believe I’m putting it on the list, but I am. Mostly because I’ve been talking at conferences about how great it is for some time now, and I’m starting to see the argument come back to me from much larger places. People complain about what people twitter during events (“Too critical! Too snarky! The audience is the new keynote!”), but that’s pretty much exactly what would make things interesting in a classroom. I want to install the open source version and try it out with a willing instructor. I’m also interested in it for easy website updates, but most people would tell me that that’s a total misuse of the application. (Too bad!)

Ubiquitous Computing. I’ll say that instead of mobile devices. The hardware will come and go, but the concept of ubiquity for computing is fascinating. It’s coming in fits and starts; I want to see how I can push this one in small ways in the library. Computing without the computer. Ideally without a cell phone either. This is something I’m going to track for a good long while. I have this ubiquitous future in my head that seems like a perfect setting for a cyberpunk novel. (I might get around to writing it one of these days.)

Cheap Storage. As a rule hardware isn’t my area, but I’m interested to see what it means that storage capacity is getting so crazily cheap. If I can carry 120 GB in my pocket without even noticing it, what does that mean for computing in general?

Cloud Computing. This goes along with the cheap storage. Jeremy tells me we will never be affected by the cloud because we are a locked-down environment for the most part, but I think he might be wrong. Even if we can’t fully employ the cloud because of security and legal limitations, I think the concept of cloud computing will sink into the consciousnesses of our users. We will need to be prepared to offer services as easily as the cloud can.

Netbooks. This fits in with cloud computing and cheap storage; if we can have tiny little computers with us at all times, massive amounts of physical storage and powerful applications coming down from the cloud, what does the world end up looking like?

Social Networks. Embracing the networks you have, on facebook, on IRC, on Twitter, on IM, wherever. Accepting that we are no longer a culture that uses its brain for information storage; we are processors, connectors. We store our knowledge in machines and in our networks. While social software may look like too much fun to be productive, those social networks are what’s going to scaffold us through most of the rest of our lives. Learning how to respectfully and usefully employ our networks as part of our learning (and teaching, for that matter) is an important skill.

There are some other pieces that are just never going to go away: blogging (for librarians!), wikis for everyone, IM: I think we’ve finally reached a point where we can intelligently choose the best tool for the task at hand from an incredible range of options. So I think part of the emerging trend is to use what’s best, not necessarily what’s most powerful, most expensive, or most popular. Things like twitter and netbooks are evidence of that: sometimes you don’t need all the bells and whistles.

So that’s my emerging update of the moment.

Real World Virtuality


I started reading Spook Country last night before bed, the first chapter of which ends with a virtual world/real-world mashup that has the main character standing in front of the Viper Room in LA looking down at a dead River Phoenix on the sidewalk in front of her. Leaving aside a whole other post I could write about the significance of that particular moment to people born around when I was, it made me think about gaming and ubiquitous computing.

I suspect most of what I’m about to say is so passé to most people who think about gaming and the internet, but it was a fun revelation for me, at least.

When I first started talking out loud about ubiquitous computing in the library after the Copenhagen keynote about sentient cities, our chief librarian wilted a little. “We just built this place!” she said. But I think ubiquitous computing is not going to come from the walls at all; I think it’s just going to use the walls to interface with mobile computing.

Okay, imagine it: you have VR goggles. You put on your goggles and you see the world around you, but also the game space. You have already entered in the usernames of your friends, who are also playing this game with you. You are synced up to GPS, so your goggles know where you are in relation to your environment. You have chosen a genre or theme, but the game is constructed on the fly by the system based on the environment you’ve chosen, the number of civilians in your view, weather information, and variables drawn from the user profiles of you and your friends.

So say you pick a large field by a river for your game space. Maybe you walk through it first with your goggles on so that the system can add more detail to the GPS and map data; that data would go into a central repository for geographical information. The system can then generate characters that wander past, hide behind bushes, sit in trees, etc. You and your friends can all see the generated characters because of the goggles, so you can all interact with them simultaneously. The characters might be generated by the game designers, or they might be created by users, like the Spore creature creator, with backstories and voices all supplied by fans, vetted by the designers. You and your friends can be costumed by the system; when you look down at your own (bare) hands, they might be wearing chain mail gloves and be carrying a sword.

Or say you pick a city block as your game space; the system connects to google map data, and then also takes in information about all the people around you, and uses them as part of the game. It could turn the city into a futuristic place, with flying cars and impossibly tall buildings. Running around the city, chasing aliens, avoiding civilians, being a big ol’ gaming geek in full view of the public. Awesome.
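Purely as a thought experiment (none of this software exists, and every name, theme, and number below is made up), the core loop I’m imagining could be sketched like this: take the GPS trace from the initial walkthrough, pick a theme, and scatter generated characters just off the walked route so they seem to lurk and wander rather than stand on the trail.

```python
import random
from dataclasses import dataclass

@dataclass
class Spawn:
    lat: float
    lon: float
    character: str

# Hypothetical theme pools; in the imagined system these characters
# would be designer- or fan-created (Spore-style) and vetted centrally.
THEMES = {
    "medieval": ["knight", "archer", "wandering bard"],
    "futuristic": ["alien scout", "courier drone", "street vendor"],
}

def generate_spawns(waypoints, theme, seed=0):
    """Scatter generated characters near the route walked during setup.

    waypoints: (lat, lon) pairs from the GPS walkthrough.  Each spawn
    is jittered slightly off the path so characters appear to hide
    behind bushes or wander past instead of standing on the trail.
    """
    rng = random.Random(seed)
    pool = THEMES[theme]
    spawns = []
    for lat, lon in waypoints:
        dlat = (rng.random() - 0.5) * 0.0005  # roughly +/- 25 m
        dlon = (rng.random() - 0.5) * 0.0005
        spawns.append(Spawn(lat + dlat, lon + dlon, rng.choice(pool)))
    return spawns
```

The real system would obviously need live position tracking, rendering, and all the rest; this only illustrates the “game constructed on the fly from the environment you’ve chosen” idea.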

So now: the virtual library could come with a pair of goggles and a good series of fast databases.

That would be pretty cool. Just sayin’.

Note-taking goes Ubiquitous


[youtube http://www.youtube.com/watch?v=DE-mnEdAf7g&hl=en&fs=1]

Faculty can usually tell if they’re being recorded, what with the need for some sort of obvious recorder. But what if the computer and the recording device are in the pen?

This would be pretty wicked if you could mash up all the notes from the class along with the lecture recording. If we could get the pen to work online instead of just off, you could see the notes being created in real time.

MLearn: Mobile Devices and Coursework


I wanted to take a moment to reflect on two very interesting presentations I attended yesterday; one about museums and coursework by Mike Sharples from the UK Open University, and another by Maria Parks & Mark Dransfield from York St John University in the UK about occupational therapy students blogging from mobiles. I wanted to hear the presentation about museums because I figured many of the issues present in “the one-off museum visit” are similar to the ones faced by librarians. I was definitely right in part, though the museums have a few advantages we don’t quite have as librarians.

He started by explaining that museum visits by classrooms are often isolated from coursework. Teachers spend a lot of time working out the logistics of getting students to the museum and when their lunch break will be, but less time connecting the visit back to curriculum. He had some interesting ideas around how to link these two locations up through technology.

The term of the moment: “enquiry-led museum learning”. Of course my ears perked right up, as I’ve been hearing a lot of conflicting ideas about what “inquiry-based learning” meant. He expressed a definition much like the one that resonated most with me: a structured experience with a specific question to answer, where the way to the answer is what the student determines for him or herself. In the example he showed us, the students were prepared with a question about D-Day: was it a success or a failure? They were given mobile phones that could take pictures and were connected to a piece of software that would organize and post the pictures they took and the comments they had. So the students were set free in the museum to find evidence to support whatever conclusion they came to. Since the museum is a very visual place, the photographs made sense as evidence.

When I thought about this class project, and tried to imagine it in a library context, I realized that he was using photographs where we already use print and digital resources; while we rarely frame academic work as a journey toward an evidence-based result, that’s exactly what it is. It would be harder to use photographs to prove a point in a library. What are the copyright implications of taking photographs of images in books, after all? A library activity even close to this one would be mostly spent, not running through the stacks, but sitting in front of a computer linking up digital resources or creating a bibliography. Not quite so exciting, really.

Though you could do fun library school assignments like this, taking photographs of the funny bits of LC (where socialists sit next to criminals, for instance).

The mo-blogging presentation was somewhat similar (but very different). The occupational therapy students were given cell phones hooked up to flickr and blogger. So they blogged from the phone, and could take pictures and blog those (though not of broken legs and the like, as they wanted to; that went against the ethics board). It was a very interesting presentation, and definitely exactly the kind of reflective learning that we’re talking about at UTM, so I was paying close attention.

One of the other themes of this conference (which is very very excited about cell phones, let me tell you) is that “today’s kids” prefer the cell phone interface. I’ve heard repeated versions of what I think is the same story about a kid in South Africa who would rather type out his essay on his cell phone than sit at a computer with a keyboard. I’m fairly sure the stories I kept hearing like this are all about the same kid. The presentation from York absolutely underscored this; half of the occupational therapy students had a full-sized bluetooth keyboard to connect to their cell phones, while the other half did not. Maria Parks showed us examples of the blog posts written by the students with keyboards; they had pictures, and tons and tons and tons of reflective text. And then she showed us examples of blog posts by the students with no keyboards: one line. Pictures, basically no text. The feedback they got: “I wanted to write more, but the phone was so annoying!” Maria said it was a good thing their assessment was based on other things, because the difference between the two groups was so extreme. How can you assess reflection based on one line every few days or so? And what that one line contained: some basic description of things that happened, or things they needed to do: “Hypersensitivity must control pain”. Versus the paragraphs of text from the other students.

It’s just a strange thing how the overall feeling of what was “right” and true was so different from the projects actually running on the ground.

But also, the difference from country to country: in a place where computers and internet connections are prohibitively expensive but cellphones are cheap, it makes sense that people would feel more at home with cell phones. But that’s really not the way things work here.

I came to this conference to figure out how I felt about mobile devices in education at my own school; I’m still not quite sure yet. It will take a bit more reflection to sort through it all.