Feed aggregator

New England Science Boot Camp is heading Downeast!

e-Science Portal Blog - Thu, 11/20/2014 - 10:13

The upcoming 2015 New England Science Boot Camp will be held June 17-19 on the beautiful campus of Bowdoin College in Brunswick, Maine.  Planning of session topics and activities is currently underway, and details will be announced in the next few months.

 

Broader Impacts and Data Management Plans

e-Science Portal Blog - Thu, 11/13/2014 - 13:57

By Andrew Creamer, Scientific Data Management Specialist, Brown University

The National Science Foundation (NSF) explains that Data Management Plans are to be “reviewed as an integral part of the proposal, coming under Intellectual Merit or Broader Impacts or both, as appropriate for the scientific community of relevance.” As the librarian responsible for writing data management and sharing plans, I was invited to be a part of my institution’s Broader Impacts Committee, which aims to “help Brown faculty and researchers respond effectively to the Broader Impacts criterion and other outreach requirements of governmental funding agencies.” For example, the committee helps build collaborations between K-12 educators in my state and the university’s researchers, and it promotes a database for sharing STEM curricula, among other initiatives.

The NSF views Broader Impacts through the lens of societal outcomes:

NSF values the advancement of scientific knowledge and activities that contribute to the achievement of societally relevant outcomes. Such outcomes include, but are not limited to: full participation of women, persons with disabilities, and underrepresented minorities in science, technology, engineering, and mathematics (STEM); improved STEM education and educator development at any level; increased public scientific literacy and public engagement with science and technology; improved well-being of individuals in society; development of a diverse, globally competitive STEM workforce; increased partnerships between academia, industry, and others; improved national security; increased economic competitiveness of the United States; and enhanced infrastructure for research and education.

Recently I was asked to speak at a Broader Impacts Workshop for faculty. In my presentation I focused on several ways that a proposal’s DMP can connect with the societal outcomes described in its Broader Impacts. For example, researchers detail in their NSF DMPs when and how they will make their data and research products available to other researchers and/or the public, and how they will archive and preserve access to their research products after the project ends. They also outline the dissemination strategy for their projects’ research products, which can include citing and sharing the projects’ data, metadata, and code in their publications and presentations and depositing these items in a data-sharing repository. Retaining and preserving data, metadata, and code, and making them accessible along with the resulting publications, maximizes the potential for replication and reproduction of research results. This in turn furthers the impact of the project by making it possible for the data and research products to be discovered, used, repurposed, and cited in new research and discoveries.

Ways the Library Can Support Broader Impacts and Preserve and Disseminate Related Research Products

  • The library can advise on selecting optimal file formats and media in which data can be stored, shared, and accessed. Proprietary software and data formats used to collect and capture data can limit a dataset’s usefulness to others. Researchers can work with the library to identify data-sharing- and preservation-friendly formats and export their data files into them.
  • The library can collaborate with researchers to create the documentation and contextual details (metadata) that make their data discoverable and meaningful to others. The library can help researchers locate metadata schemas, standards, and ontologies for a specific discipline, and it can also help create metadata for data being prepared for upload into a data-sharing repository.
  • Depositing their Broader Impacts curricula and data into a repository is a way for researchers to ensure that their research products can be discovered and used by others. It is also the easiest way to locate and access data years after a project ends. The library can offer a number of repository-related services: it can help researchers choose and evaluate potential repositories, and it can offer an institutional repository (IR) as an option for some researchers to publish, archive, and preserve their project’s data after their projects end.
  • More libraries are offering a global persistent identifier service for researchers wishing to maximize the dissemination and discoverability of their datasets. A digital object identifier (DOI) is one way the library can give researchers and the public a stable way to locate and cite data. Through EZID, for example, the library can issue researchers DOIs even if their datasets are not in the IR; it can issue DOIs for datasets deposited in NCBI databases under accession numbers, so that researchers can cite those datasets in their publications, presentations, and grant reports. The library also mints DOIs for researchers whose publishers require a DOI for the datasets underlying their manuscripts, or for compliance with publishers’ data availability and data archiving policies. (A rough sketch of how DOI minting can be scripted appears just after this list.)
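
For the technically curious: EZID exposes an HTTP API, so DOI minting can be scripted. The sketch below is illustrative only, not a recipe — the credentials, landing-page URL, and metadata values are placeholders, the shoulder shown is EZID’s test shoulder, and the exact call should be checked against EZID’s API documentation.

    # Mint a test DOI by POSTing ANVL-formatted metadata to an EZID "shoulder"
    # (username, password, landing-page URL, and metadata are all placeholders)
    curl -u username:password -X POST \
      -H "Content-Type: text/plain; charset=UTF-8" \
      --data-binary $'_target: http://example.org/my-dataset\ndatacite.title: Example Dataset\ndatacite.creator: Doe, Jane' \
      https://ezid.cdlib.org/shoulder/doi:10.5072/FK2
    # On success, EZID answers with the newly minted identifier, e.g.:
    # success: doi:10.5072/FK2...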

While researchers may not have thought about the library when it comes to societal outcomes and disseminating research data, we librarians hope that they will begin to see the library as the ideal institutional space for planning data retention: appraising which research products should be retained, archived, and preserved; exploring options for sharing and long-term preservation-friendly file formats; creating documentation and metadata to make data discoverable and useful; publishing and archiving data in a repository; citing data; and disseminating and measuring the impact of data.

 

November 2014: recent job postings

e-Science Portal Blog - Wed, 11/12/2014 - 12:39

From around the web (mostly from the ALA job list): here’s a list of recent job openings that may be of interest to the e-Science Community:

California State University, East Bay Library:  Health Sciences and Scholarly Communications Librarian

California State University, San Marcos:  Health Sciences and Human Services Librarian

Cornell University Library:  Director of Preservation Services

Dartmouth College:  Research and Education Librarian, Biomedical Libraries

Head of Education, Research and Clinical Services

Institute for Health Metrics and Evaluation, University of Washington:  Data Indexer

Iowa State University:  Science & Technology Librarian (Engineering & Physical Sciences)

New York University:  Research Data Management Librarian

Pennsylvania State University:  Science Data Librarian

Tufts University:  Research & Instruction Librarian

University of California at Los Angeles (UCLA):  Geospatial Resources Librarian

University of New Hampshire:  Life Sciences and Agriculture Librarian

University of New Mexico Libraries:  Research Services Librarian for the Engineering, Life & Physical Sciences

 

Upcoming Digital Science workshops at Tufts and UMass Medical School

e-Science Portal Blog - Fri, 11/07/2014 - 17:45

The following announcement has been posted on behalf of the Boston Library Consortium and Digital Science. For information about the workshop or to register, please contact Susan Stearns at sstearns@blc.org.

Addressing the Emerging Needs of the Research Ecosystem: An Invitation

The Boston Library Consortium and Digital Science invite you to attend a free workshop focused on managing, disseminating, and collaborating around research data in the university.  Today’s research ecosystem is increasingly complex and includes players from many different departments and groups within the academy: research and sponsored-programs staff, the CIO and IT staff, library deans/directors and their scholarly communications and research data management librarians, university marketing and communications staff and, of course, the researchers themselves.

Meeting the diverse requirements of these varied groups in efficient and cost-effective ways requires that quality data be able to flow in and out of university information systems, often populating such diverse technologies as grants management systems, researcher profiles, institutional repositories, and enterprise data warehouses.  Non-traditional measures of research impact such as altmetrics, along with increasingly prevalent funder mandates, create new challenges for universities as they look to ensure a robust research information management environment.

Our goal for this workshop is to assemble a representative cross-section of stakeholders from a variety of BLC institutions. The workshop will bring together experts from Digital Science, a technology company with a focus on the sciences that provides software and tools to support the research ecosystem, and speakers with direct experience of evaluating and implementing research information management systems and services. We hope you will actively encourage your colleagues to attend.

Two options are available for the workshop as indicated below. BLC is considering offering live-streaming of one or both sessions if there is adequate interest.

Friday, November 21st at Tufts University, Medford Campus – 9:30am – 2:30pm; lunch included

Tuesday, November 25th at the University of Massachusetts Medical School, Worcester – 10:00am – 3:00pm; lunch included

Workshop speakers will include Jonathan Breeze, CEO of Symplectic; Mark Hahnel, CEO of Figshare; and the Vice Provost for Research (or equivalent) from a local Boston Library Consortium member institution.

To register or for further information, send an e-mail to sstearns@blc.org indicating which of the above sessions you are interested in attending.

Learning About Git

e-Science Portal Blog - Fri, 11/07/2014 - 13:37

Submitted by guest contributor Daina Bouquin, Data & Metadata Services Librarian, Weill Cornell Medical College of Cornell University, dab2058@med.cornell.edu

The role of the data librarian extends far beyond helping researchers write data management plans. Rather, librarians working where data-intensive science is happening spend their time answering questions about the entire data life cycle—data pre-processing, analysis, visualization, and data validation are all important, and sometimes highly intricate, parts of the research process. As a data services librarian I have personally found myself advising researchers to rework their workflows to make use of the tools available to them that help make their research more replicable, efficient, and shareable at these various stages of the research process. Unfortunately, though, I do not always have hands-on experience with the tools and techniques I’m advising researchers to use, nor is it possible for me to have experience with every tool available to researchers in computational environments. However, I do believe it’s important for me to get as much hands-on experience as possible with the most useful, commonly used tools, so that I can develop both refined expertise in my field and empathy for my patrons. E-Science Portal editor Donna Kafel recently wrote a wonderful post in which she reflected upon, and pulled advice from others about, self-learning and the challenges associated with it. Here, I aim to outline how I’m making use of some of the excellent advice offered in that post, while focusing on an area of the data life cycle that I believe is sometimes oversimplified in discussion—the version control processes inherent in good data management.

“Be single-minded. Identify one topic or skill you want to learn and focus on mastering it.” – Donna Kafel, Challenges of Self Learning

I decided the advice I would take to heart most fiercely from Donna’s self-learning post was the above take-away. It rang true with me because I regularly run into problems by trying to tackle too many new topics at once. If I don’t use something regularly, it’s difficult for me to become proficient—especially with technically challenging tools. It makes sense that I should focus on mastering a single skill before moving on to anything new, but how to choose what to focus on? This is where Version Control Systems (VCSs), or “Revision Control Systems,” come in. VCSs are incredibly diverse in both complexity and application, and while I rarely see them discussed at length by librarians, I find them to be exceedingly important to researchers in collaborative environments. I regularly read discussions of file naming as an approach to controlling versions and aiding researchers in a multitude of data management processes, and I do not want to discredit that discussion because it is so important (check out some of the great writing on this topic right here on the portal blog!), but I’m hoping to extend that conversation a bit in this post. Below I focus on Git as both a self-learning opportunity and an incredibly useful VCS.

Git

Git is a technology that “records changes to a file or set of files over time so that you can recall specific versions later” [1]. You can use Git for just about any type of file, but it is primarily used by people working with code files. Often, people use simpler version-control methods, like copying files into a time-stamped directory, but this tactic is risky: one could forget which directory files are stored in or accidentally write over the wrong file (file naming helps here). An even better approach is to use a tool like Git [1].
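
For readers who want a feel for what that looks like in practice, here is a minimal sketch of the everyday Git workflow at the command line (the directory and file names are invented):

    cd my-analysis                 # a project directory you want to track
    git init                       # turn the directory into a Git repository
    git add cleaning-script.R      # stage a file whose current state you want recorded
    git commit -m "Initial version of cleaning script"
    # ...edit the file, then record the new version:
    git add cleaning-script.R
    git commit -m "Handle missing values"
    git log --oneline              # list every recorded version
    git checkout <commit-id> -- cleaning-script.R   # recall an earlier version of the file

Each commit is a recoverable snapshot, so there is no need for a pile of time-stamped copies of the project directory.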

Git is what is called a Distributed Version Control System (DVCS), but it is easier to understand DVCSs if you first understand Centralized Version Control Systems (CVCSs). CVCSs have a single server that contains all the versioned files a group of people are working on. Individuals can “check out” files from that central place, so everyone knows to some extent what other people on the project are doing. Admins have control over who can do what, so there is some centralized authority, making a CVCS easier to manage than local version-control solutions. Examples of CVCSs include the popular Apache tool Subversion [1].

[Figure: a centralized version control system (CVCS) – Chacon, 2014]

There are, though, some drawbacks to using a CVCS—namely, the single server. If the server goes down, no one can make any changes to anything that’s being worked on; and if the server is damaged and its history corrupted, the individuals working on the project are completely reliant on there being sufficient backups of all versions of their files. This, again, is quite risky.

To mitigate this problem, DVCSs were developed. In distributed systems (like Git), people do not just check out the latest version of a file; they completely “mirror” the repository. This way, if the server dies, anyone who mirrored the repository can copy it back to the server and restore it. Every time someone checks out a file, the data is fully backed up.

[Figure: a distributed version control system (DVCS) – Chacon, 2014]

Distributed systems are also capable of working well with several remote repositories at once, allowing people to collaborate with multiple groups in different ways concurrently on the same project [1].
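
To make the “mirroring” idea concrete, here is a minimal sketch (the URLs are placeholders): cloning copies the complete history, not just the latest files, and any clone can repopulate a lost server.

    git clone https://example.org/lab/project.git    # full copy of the repository, history included
    cd project
    git log                                          # the entire version history is available locally
    # if the central server is lost, any clone can restore it:
    git push --mirror https://new-server.example.org/lab/project.git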

However, I did not decide to focus my single-minded self-learning on Git just because it is so useful for version control—I wanted to learn as many skills as possible while still staying focused. You see, in learning to use Git, I’d have more opportunity to learn about the Bash Unix shell. Though I have some background with command-line interfaces, I am still a beginner with the Terminal, and I figured that learning Git would make me much more proficient at navigating my computer via the command line, which in turn could help me build the confidence to learn how to use a Linux operating system. Learning Git would also help me learn how to use GitHub, which is growing by the day in popularity as a place for people to store and share code. The GitHub graphical user interface would also help get me off the ground. So I found Git to be a great door-opener to many other skillsets on my list of self-learning goals.
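
And to show how little is involved in getting started with GitHub (the user and repository names below are invented), publishing an existing local repository takes only two commands:

    git remote add origin https://github.com/yourname/my-analysis.git
    git push -u origin master    # upload your commits; GitHub now holds a full copy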

Thus, I have begun learning to use Git and GitHub. I got some hands-on experience by participating in a Software Carpentry Bootcamp this past summer, but didn’t find the time to follow up on it—I was not staying focused on learning a single new skill. So now I am re-grouping. I have primarily been using the resources below; however, there is so much more out there. These resources are just a great place to start, and having made some headway in my own reading of them, I hope to be trying out Git more in the very near future.

Pro Git: a great free eBook and videos on getting started with and better understanding Git and version control. I used this excellent book in writing this post.

Pro Git Documentation External Links: tutorials, books, and videos to help get you started.

Even if you don’t think learning to use Git is right for you, learning more about the tools researchers are using to work with their data, and getting a look under the hood at how those technologies work, can be a great way to continue to grow professionally. I hope you all have the opportunity to join me in exploring a new skill and share your experiences with the e-Science Portal Community.

References:

1. Chacon, S. (2014). Pro Git. Berkeley, CA: Apress. http://git-scm.com/book/en/v2

And just in case you weren’t already overwhelmed, here’s a great TED Blog post on places to learn how to code!

Dr. Bruce Alberts: Science and the World’s Future

e-Science Portal Blog - Fri, 10/24/2014 - 17:57

Science and the World’s Future
Lecture given by Bruce Alberts, Professor of Science and Education, UCSF
Part of the Sanger Series at Virginia Commonwealth University, Richmond, VA

Bruce Alberts’ lecture was a review of his career that focused on the lessons he learned along the way and why they are important for the future of science research and the world.

He failed his initial PhD exam at Harvard, but passed it six months later after more research. This taught him that a good strategy is key to success in science research, and that negative results are okay.

Alberts started his own lab at the age of 28, and he believes that it should be easier for researchers to set up their own labs earlier in their careers – so funding needs to change.

After many years of research, Alberts became president of the National Academy of Sciences (NAS) and started learning about science policy. Science allows humans to gain a deep understanding of the natural world, and we can use this knowledge to predict future events or problems. Many government officials wanted the NAS reports to be kept secret or to have changes made, but he felt that science was for all and that NAS was providing independent, science-based policy advice, so there could be no changes or secrecy. Now the full text of a report goes on the website as soon as the government receives it.

Alberts’ work with NAS and as editor of Science magazine led him to international work with science academies. Alberts said that science and technology developed in North America or Europe can’t always be exported to the countries that need them. Countries need national, merit-based science institutions to help with policy and support science. Only local scientists have the credibility to rescue a nation from misguided local policies; Alberts’ examples were AIDS in Africa and the polio vaccine in Nigeria. Alberts feels that the success of every nation requires more of the creativity, rationality, openness, and tolerance that are inherent to science, what Pandit Jawaharlal Nehru of India called “scientific temper.”

Alberts suggested strategies to help the world’s future:

  1. Education – active learning, open access; start by changing college science teaching, since that is where high school science teachers learn science. (Science special issue April 19, 2013: Grand Challenges in Science Education, and the Education Portal http://portal.scienceintheclassroom.org/ )
  2. Promote science knowledge as a public good – open access again, not just papers but other educational materials, e.g., http://www.ibiology.org/
  3. Empower the best young scientists – the Global Young Academy
  4. Develop scientists as connectors – science communication; scientists need to connect with policy makers and the public, for example through the AAAS Science & Technology Policy Fellowship program
  5. Develop and harness research evidence to improve policies.

What can librarians do?

Obviously information literacy is huge when it comes to making sure that students, and future voting adults, can find the information they need to make decisions about health, technology, and science. Teaching regularly about the reliability of web sites and other information sources must be part of this training.

I think librarians can also help harness the research evidence needed to improve policies.  We have excellent search skills and many of us already have experience doing systematic reviews, which is what is needed to find all the evidence.

If you want to read more about Bruce Alberts, this interview by Jane Gitschier is good: Scientist Citizen: An Interview with Bruce Alberts

I liked this quote used by Alberts:

“The society of scientists is simple because it has a directing purpose: to explore the truth. Nevertheless, it has to solve the problem of every society, which is to find a compromise between the individual and the group. It must encourage the single scientist to be independent, and the body of scientists to be tolerant. From these basic conditions, which form the prime values, there follows step by step a range of values: dissent, freedom of thought and speech, justice, honor, human dignity and self respect.

Science has humanized our values. Men have asked for freedom, justice and respect precisely as the scientific spirit has spread among them.”

—  Jacob Bronowski, Science and Human Values, 1956

e-Science Portal Users–we need you!

e-Science Portal Blog - Wed, 10/22/2014 - 14:30

The e-Science Portal design team has been conducting a series of online Optimal Workshop user studies of the portal over the past few months. In May the team issued a Call for Participation for Usability Testing of the e-Science Portal, and we were happy to receive over ninety volunteers!  With these volunteers’ participation, we’ve conducted three separate tests and gleaned valuable information from their responses.  With this information, we’ll be “tweaking” the design of the portal, but before we do so, we need further input from a new pool of participants.

Whether or not you’re familiar with e-Science and/or the e-Science Portal for New England Librarians,  we need you! On average the test takes 12-15 minutes to complete. You do not need to be a web design expert or have previous experience in user testing, and the instructions are easy.

To volunteer, please complete the following e-Science Portal Usability Testing form at  https://docs.google.com/forms/d/1Wb6kk4QYtfvi4bZuVMnRoZxF_KQmdYUTsQrnK1VDWDE/viewform by Monday, October 27th.

Thank you for participating,

Donna Kafel, Coordinator for the e-Science Portal

 

Nov. 25 workshop: Improving integrity in scientific research

e-Science Portal Blog - Fri, 10/17/2014 - 09:01

Posted on behalf of Chris Erdmann, Head Librarian, Harvard-Smithsonian Center for Astrophysics, Harvard.

Workshop:  Improving integrity in scientific research: How openness can facilitate reproducibility

Time: 3:00pm – 5:30pm
Date: Tuesday, November 25th
Location: Center for Astrophysics, Phillips Auditorium

https://www.google.com/maps/search/phillips+auditorium

  “Using Zenodo to share and safely store your research data”

Lars Holm Nielsen, CERN

Is your 10-year-old dataset stored safely? Is it openly accessible? In this workshop, you will learn how to preserve, share, and receive credit for your research data using Zenodo (https://zenodo.org/), created by OpenAIRE and CERN and supported by the European Commission. We will explore the different aspects of and issues related to research data and software publishing: why preservation is important, and how to link your data up and make it discoverable. We will also see how research software hosted on GitHub can be automatically preserved with just a few clicks. In addition, we will look at how research communities can be created in Zenodo to support a variety of publication activities.

Requirements: None, but it’s highly preferable to bring your own laptop and an example research output (dataset, software, presentation, poster, publication, …) you would like to share, so that you can follow the interactive part of the workshop.

Improving integrity in scientific research: How openness can facilitate reproducibility

Courtney Soderberg, COS

Have you heard about the reproducibility crisis in science (e.g., in AAAS and The Economist) and worry about false-positive results? Ever wondered how you could increase the reproducibility of your own work and help the accumulation of scientific knowledge? Join us for a workshop on reproducible research, hosted by the Center for Open Science.

This presentation will briefly review the evidence and challenges for reproducibility and discuss how greater transparency and openness across the entire scientific workflow (from project inception, to data sets and analysis, to publication and beyond) can increase levels of reproducibility. It will also include a hands-on demonstration of the Open Science Framework (http://osf.io/), a free, open-source web application developed to help researchers connect, document, and share all aspects of their scientific workflow to increase the reproducibility of their work.

Attendees are encouraged to bring laptops and research materials (stimuli, analysis scripts, data sets, etc.) they would like to share so they can follow along with the hands-on section of the presentation.

Please RSVP

Tracking the impacts of data – beyond citations

e-Science Portal Blog - Thu, 10/16/2014 - 13:29

How can you tell if data has been useful to other researchers?

Tracking how often data has been cited (and by whom) is one way, but data citations only tell part of the story, part of the time. (The part that gets published in academic journals, if and when those data are cited correctly.) What about the impact that data has elsewhere?

We’re now able to mine the Web for evidence of diverse impacts (bookmarks, shares, discussions, citations, and so on) for diverse scholarly outputs, including data sets. And that’s great news, because it means that we now can track who’s reusing our data, and how.

All of this is still fairly new, however, which means that you likely need a primer on data metrics beyond citations. So, here you go.

In this post, I’ll give an overview of the different types of data metrics (including citations and altmetrics), the “flavors” of data impact, and specific examples of data metric indicators.

What do data metrics look like?

There are two main types of data metrics: data citations and altmetrics for data. Each of these types of metrics is important for its own reasons, and each offers the ability to understand different dimensions of impact.

Data citations

Much like traditional, publication-based citations, data citations are an attempt to track data’s influence and reuse in scholarly literature.

The reason why we want to track scholarly data influence and reuse? Because “rewards” in academia are traditionally counted in the form of formal citations to works, printed in the reference list of a publication.

There are two ways to cite data: cite the data package directly (often by pointing to where the data is hosted in a repository), or cite a “data paper” that describes the dataset, functioning primarily as detailed metadata and offering the added benefit of being in a format that’s much more appealing to many publishers.

In the rest of this post, I’m going to mostly focus on metrics other than citations, which are being written about extensively elsewhere. But first, here’s some basic information on data citations that can help you understand how data’s scholarly impacts can be tracked.

How data packages are cited

Much like how citations to publications differ depending on whether you’re using Chicago style or APA style formatting, citations to data tend to differ according to the community of practice and the recommended citation style of the repository that hosts the data. But there is a core set of minimum elements that should be included in a citation; Jon Kratz has compiled these “core elements” (as well as “common elements”) over on the DataPub blog, and an assembled example follows the list. The core elements include:

  • Creator(s): Essential, of course, to publicly credit the researchers who did the work. One complication here is that datasets can have large (into the hundreds) numbers of authors, in which case an organizational name might be used.

  • Date: The year of publication or, occasionally, when the dataset was finalized.

  • Title: As is the case with articles, the title of a dataset should help the reader decide whether your dataset is potentially of interest. The title might contain the name of the organization responsible, or information such as the date range covered.

  • Publisher: Many standards split the publisher into separate producer and distributor fields. Sometimes the physical location (City, State) of the organization is included.

  • Identifier: A Digital Object Identifier (DOI), Archival Resource Key (ARK), or other unique and unambiguous label for the dataset.
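
Assembled, those core elements yield a citation along these lines (the creators, title, repository, and DOI are invented for illustration; 10.5072 is a reserved test DOI prefix):

    Doe, J., & Roe, R. (2014). Gulf of Maine intertidal temperature survey, 2005–2012 [dataset]. Anytown University Data Repository. doi:10.5072/FK2EXAMPLE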

Arguably the most important principle? The use of a persistent identifier like a DOI, ARK, or Handle. They’re important for two reasons: even if the data’s URL changes, others will still be able to access it; and PIDs provide citation aggregators like the Data Citation Index and Impactstory.org an easy, unambiguous way to parse out “mentions” in online forums and journals.

It’s worth noting, however, that as few as 25% of journal articles formally cite data. (Sad, considering that so many major publishers have signed on to FORCE11’s data citation principles, which include the need to cite data packages in the same manner as publications.) Instead, many scholars reference data packages in their Methods sections, forgoing formal citations and making text mining necessary to retrieve mentions of those data.

How to track citations to data packages

When you want to track citations to your data packages, the best option is the Data Citation Index. The DCI functions similarly to Web of Science. If your institution has a subscription, you can search the Index for citations occurring in the literature that reference data from a number of well-known repositories, including ICPSR, ANDS, and PANGAEA.

Here’s how: log in to the DCI and head to the home screen. In the Search box, type your name or the dataset’s DOI. Find the dataset in the search results, then click on it to be taken to the item record page. On the item record, find and click the “Create Citation Alert” button on the right-hand side of the page, where you’ll also find a list of articles that reference that dataset. Now you have a list of the articles that reference your data to date, and you’ll also receive automated email alerts whenever someone new references your data.

Another option comes from CrossRef Search. This experimental search tool works for any dataset that has a DataCite DOI and is referenced in the scholarly literature indexed by CrossRef. (DataCite issues DOIs for Figshare, Dryad, and a number of other repositories.) Right now, the search is a very rough one: you’ll need to view the entire list of DOIs, then use your browser search (often accessed by pressing Ctrl+F or Command+F) to check the list for your specific DOI. It’s not perfect–in fact, sometimes it’s entirely broken–but it does provide a view into your data citations not entirely available elsewhere.

How data papers are cited

Data papers tend to be cited like any other paper: by recording the authors, title, journal of publication, and any other information that’s required by the citation style you’re using. Data papers are also often cited using permanent identifiers like DOIs, which are assigned by publishers.

How to find citations for data papers

To find citations to data papers, search databases like Scopus and Web of Science like you’d search for any traditional publication. Here’s how to track citations in Scopus and Web of Science.

There’s no guarantee that your data paper is included in their databases, though, since data paper journals are still a niche publication type in some fields and thus aren’t tracked by some major databases. You’d be smart to follow up your database search with a Google Scholar search, too.

Altmetrics for data

Citations are good for tracking the impact of your data in the scholarly literature, but what about other types of impact, among other audiences like the public and practitioners?

Altmetrics are indicators of the reuse, discussion, sharing, and other interactions humans can have with a scholarly object. These interactions tend to leave traces on the scholarly web.

Altmetrics are so broadly defined that they include pretty much any type of indicator sourced from a web service. For the purposes of this post, we’ll separate out citations from our definition of altmetrics, but note that many altmetrics aggregators tend to include citation data.

There are two main types of altmetrics for data: repository-sourced metrics (which often measure not only researchers’ impacts, but also repositories’ and curators’ impacts), and social web metrics (which more often measure other scholars’ and the public’s use and other interactions with data).

First, let’s discuss the nuts and bolts of data altmetrics. Then, we’ll talk about services you can use to find altmetrics for data.

Altmetrics for how data is used on the social web

Data packages can be shared, discussed, bookmarked, viewed, and reused using many of the same services that researchers use for journal articles: blogs, Twitter, social bookmarking sites like Mendeley and CiteULike, and so on. There are also a number of services that are specific to data, and these tend to be repositories with altmetric “indicators” particular to that platform.

For an in-depth look into data metrics and altmetrics, I recommend that you read Costas et al.’s report, “The Value of Research Data” (2013). Below, I’ve created a basic chart of various altmetrics for data and what they can likely tell us about the use of data.

Quick caveat: aside from the Costas et al. report, there’s been little research done into altmetrics for data. (DataONE, PLOS, and the California Digital Library are in fact the first organizations to do major work in this area, and they were recently awarded a grant to do proper research that will likely confirm or negate much of the list below. Keep an eye out for future news from them.) The metrics and their meanings listed below are, at best, estimations based on experience with both research data and altmetrics.

Repository- and publisher-based indicators

Note that some of the repositories below are primarily used for software, but can sometimes be used to host data, as well.

 

(For each web service below: the indicator, what it might tell us about the use of your data, and where the metric is reported.)

GitHub
  • Stars: Akin to “favoriting” a tweet or underlining a favorite passage in a book, GitHub stars may indicate that someone who has viewed your dataset wants to remember it for later reference. (Reported on: GitHub, Impactstory)
  • Watched repositories: A user is interested enough in your dataset (stored in a “repository” on GitHub) that they want to be informed of any updates. (Reported on: GitHub, PlumX)
  • Forks: A user has adapted your code for their own uses, meaning they likely find it useful or interesting. (Reported on: GitHub, Impactstory, PlumX)

SourceForge
  • Ratings & recommendations: What do others think of your data? And do they like it enough to recommend it to others? (Reported on: SourceForge, PlumX)

Dryad, Figshare, and most institutional and subject repositories
  • Views & downloads: Is there interest in your work, such that others are searching for and viewing descriptions of it? And are they interested enough to download it for further examination and possible future use? (Reported on: Dryad, Figshare, and IR platforms; Impactstory for Dryad & Figshare; PlumX for Dryad, Figshare, and some IRs)

Figshare
  • Shares: Implicit endorsement. Do others like your data enough to share it with others? (Reported on: Figshare, Impactstory, PlumX)

PLOS
  • Supplemental data views & figure views: Are readers of your article interested in the underlying data? (Reported on: PLOS, Impactstory, PlumX)

Bitbucket
  • Watchers: A user is interested enough in your dataset that they want to be informed of any updates. (Reported on: Bitbucket)

 

Social web-based indicators

 

Twitter
  • Tweets that include links to your product: Others are discussing your data–maybe for good reasons, maybe for bad ones. (You’ll have to read the tweets to find out.) (Reported on: PlumX, Altmetric.com, Impactstory)

Delicious, CiteULike, Mendeley
  • Bookmarks: Bookmarks may indicate that someone who has viewed your dataset wants to remember it for later reference. (Reported on: Impactstory, PlumX; Altmetric.com for CiteULike & Mendeley only)

Wikipedia
  • Mentions (sometimes also called “citations”): Do others think your data is relevant enough to include it in Wikipedia encyclopedia articles? (Reported on: Impactstory, PlumX)

ResearchBlogging, Science Seeker
  • Blog post mentions: Is your data being discussed in your community? (Reported on: Altmetric.com, PlumX, Impactstory)

 

How to find altmetrics for data packages and papers

Aside from looking at each platform that offers altmetrics indicators, consider using an aggregator, which will compile them from across the web. Most altmetrics aggregators can track altmetrics for any dataset that’s either got a DOI or is included in a repository that’s connected to the aggregator. Each aggregator tracks slightly different metrics, as we discussed above. For a full list of metrics, visit each aggregator’s site.

Impactstory (full disclosure: my current employer) easily tracks altmetrics for data uploaded to Figshare, GitHub, Dryad, and PLOS journals. Connect your Impactstory account to Figshare and GitHub and it will auto-import your products stored there and find altmetrics for them. To find metrics for Dryad datasets and PLOS supplementary data, provide DOIs when adding products one by one to your profile, and the associated altmetrics will be imported. Here’s an example of what altmetrics for a dataset stored on Dryad look like on Impactstory.

PlumX tracks similar metrics, and offers the added benefit of tracking altmetrics for data stored in institutional repositories as well. If your university subscribes to PlumX, contact the PlumX team about getting your data included in your researcher profile. Here’s what altmetrics for a dataset stored on Figshare look like on PlumX.

Altmetric.com can track metrics for any dataset that has a DOI or Handle. To track metrics for your dataset, you’ll need either an institutional subscription to Altmetric or the Altmetric bookmarklet, which you can use on the item page for your dataset on a website like Figshare or in your institutional repository. Here’s what altmetrics for a dataset stored on Figshare look like on Altmetric.com.

Flavors of data impact

While scholarly impact is very important, it’s far from the only type of impact one’s research can have. Both data citations and altmetrics can be useful in illustrating these flavors. Take the following scenarios for example.

Useful for teaching

What if your field notebook data was used to teach undergraduates how to use and maintain their own field notebooks? Or if a longitudinal dataset you created were used to help graduate students learn the programming language R? These examples are fairly common in practice, and yet they’re often not counted when considering impacts. Potential impact metrics could include full-text mentions in syllabi, views and downloads in Open Educational Resource repositories, and GitHub forks.

Reuse for new discoveries

Researcher and open data advocate Heather Piwowar (full disclosure: the co-founder of Impactstory and my boss) once noted, “the potential benefits of data sharing are impressive:  less money spent on duplicate data collection, reduced fraud, diverse contributions, better tuned methods, training, and tools, and more efficient and effective research progress.” If those outcomes aren’t indicative of impact, I don’t know what is! Potential impact metrics could include data citations in the scholarly literature, GitHub forks, and blog post and Wikipedia mentions.

Curator-related metrics

Could a view-to-download ratio be an indicator of how well a dataset has been described and how usable a repository’s UI is? Or of the overall appropriateness of the dataset for inclusion in the repository? Weber et al. (2013) recently proposed a number of indicators that could get at these and other curatorial impacts on research data, indicators closely related to those previously proposed by Ingwersen and Chavan (2011) at the GBIF repository. Potential impact metrics could include those proposed by Weber et al. and Ingwersen & Chavan, as well as a repository-based view-to-download ratio.
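
To illustrate with hypothetical numbers: a dataset whose landing page logs 1,000 views but only 20 downloads (a 50:1 ratio) may be poorly described or a poor fit for its repository, while a comparable dataset with 1,000 views and 400 downloads (a 2.5:1 ratio) suggests that discovery is translating into actual use.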

Ultimately, more research is needed into altmetrics for datasets before these flavors–and others–are fully understood.

Now that you know about data metrics, how will you use them?

Some options include: in grant applications, your tenure and promotion dossier, and to demonstrate the impacts of your repository to administrators and funders. I’d love to talk more about this on Twitter.

Recommended reading
  • CODATA-ICSTI Task Group. (2013). Out of Cite, Out of Mind: The current state of practice, policy, and technology for the citation of data [report]. doi:10.2481/dsj.OSOM13-043

  • Costas, R., Meijer, I., Zahedi, Z., & Wouters, P. (2013). The Value of research data: Metrics for datasets from a cultural and technical point of view [report]. Copenhagen, Denmark. Knowledge Exchange. www.knowledge-exchange.info/datametrics

Current Science & Science Data Librarian job postings

e-Science Portal Blog - Thu, 10/16/2014 - 12:12

Submitted by Donna Kafel, Project Coordinator for the e-Science Portal.

Here are some recent job postings for science, health sciences, and data librarians at various institutions across the US and Canada.

California State University, East Bay Library, Health Sciences and Scholarly Communications Librarian: https://csucareers.calstate.edu/Detail.aspx?pid=41475

Carilion Clinic, Roanoke, VA, Clinical Research Librarian:  https://www.healthcaresource.com/carilion/index.cfm?fuseaction=search.jobDetails&template=dsp_job_details.cfm&cJobId=734201&fromCarilion=true

Lewis & Clark College (Portland, OR), Science & Data Services Librarian:  https://jobs.lclark.edu/postings/4720

New York University Health Sciences Libraries, Knowledge Management Librarian http://hsl.med.nyu.edu/content/knowledge-management-librarian

Life Sciences Librarian, New York University:  http://library.nyu.edu/about/jobs.html#sciences

Research Data Services Librarian, New York University:  http://library.nyu.edu/about/jobs.html#RDM

McGill University:  Data Reference Services Librarian:  http://joblist.ala.org/modules/jobseeker/Data-Reference-Services-Librarian/27493.cfm

University of Cincinnati, Digital Metadata Librarian:  http://www.libraries.uc.edu/about/employment.html

University of Connecticut, Sciences Librarian:  http://joblist.ala.org/modules/jobseeker/Sciences-Librarian/27501.cfm

University of Delaware, Science Liaison Librarian:  http://www2.lib.udel.edu/personnel/employment/102465ScienceLiaisonLibrarian.pdf

University of Kentucky:  Head of Science Library and e-Science Initiatives:  http://www.diglib.org/archives/6865/

University of Massachusetts Medical School,  Assoc. Director of Library Education and Research https://careers-umms.icims.com/jobs/23818/assoc-dir%2c-lib-education-%26-research/job?mobile=false&width=1837&height=500&bga=true&needsRedirect=false

Call for Papers IASSIST 2015

e-Science Portal Blog - Wed, 10/01/2014 - 15:49

IASSIST (International Association of Social Science Information Services and Technology) announces a Call for Papers for IASSIST 2015, which will be held June 2-5 in Minneapolis, MN.

 

Challenges of Self Learning

e-Science Portal Blog - Fri, 09/26/2014 - 16:20

Submitted by Donna Kafel,  e-Science Coordinator,  University of Massachusetts Medical School

Data Visualization, Research Methods in Information, How to Think Like a Computer Scientist, Interactive Web Design, Blindspot, the Harvard edX course “Introduction to Computer Science.” These are just a few examples of the many topics and items on my to-read and to-learn lists. I want to learn Python scripting and R, be better versed in research methodology, develop self-paced educational modules, be more aware of hidden biases, and develop proficiency in data science techniques. Knowing these things would be very useful for me professionally. And I’m sure I’d enjoy learning some of them if I could find the time.

Here is my typical daily dilemma. During the course of my workday, I come across a book, or a new tool, or an online course, or something else that I want to learn about. And I think to myself: when I go home tonight, I’m going to delve into reading about this topic and learn something. Or I’m going to set aside an hour every night for a week and learn the basics of Python. I’m going to learn the ins and outs of a new database. These ideas seem very doable in the light of the workday. Yet after work, other demands and tasks take over, and I let these great aspirations fall by the wayside, night after night.

Reflecting on this vicious cycle, I got to wondering about how my colleagues approach self-learning.  Do they set specific goals for themselves?  Do they set aside work time to learn a new technology? Do they ever sleep?

I decided to interview two librarians whom I admire for their creativity, unique skills, and passion for learning: Sally Gore and Chris Erdmann. I work with Sally at the Lamar Soutter Library at UMass Medical School. Sally works as an Embedded Research Informationist and is involved in some very interesting projects with faculty researchers who are investigating things like patient compliance in mammogram screening and developing a system for citing neuroimages. Sally is a thoughtful and articulate writer who regularly shares insights about her experiences working as a librarian in the research environment, and about emerging trends in librarianship, in her blog A Librarian by Any Other Name. Chris is the Head Librarian at the Harvard-Smithsonian Center for Astrophysics. Much of his work there focuses on astrophysics data and developing library data services that support the needs of astrophysics researchers. Chris works directly with researchers doing data processing and analysis, assisting them with data citation and publishing and exploring new approaches for repository systems that support access to huge astrophysics data sets. What’s particularly striking about Chris is his passion for teaching other librarians data science techniques in his DST4L (Data Science Training for Librarians) class, now in its third iteration. In this class, Chris and his associates have taught librarians programming skills and technologies through hands-on activities and group projects.

I interviewed Sally and Chris individually but both of their responses are noted below each question.

How do you find time to “teach” yourself new things?

Sally:  I set aside one morning a week, usually Friday morning, for professional reading and writing my blog posts. Making this a weekly practice is a good habit. I strongly believe that librarians need to make an active effort to stay informed, and to do that, we need to set aside some work time for reading and learning. In my spare time I also take the opportunity to attend seminars and learning events, like Science Café Woo, for example. I also try to meet new people at such events by sitting with people I don’t know and talking with them about their interests and the work they do.

Chris:  When things are quieter at work, I seize moments to focus on learning a new skill. One of my fears is that I won’t be able to keep up with the rapid pace of changing technologies. It’s a huge challenge to find this time, but that’s how I learned a lot of computer programming, during breaks.

I do encourage librarians who are interested in the DST4L class to advocate for professional development time to take it, by pointing out to their administrators the usefulness of the skills they’ll learn. I have thought about teaching the class online, but it wouldn’t work well that way. One of the key factors in successfully sticking with a class is being involved in group projects in which your classmates are counting on your participation. No one wants to let their group down, so they consistently attend the classes.

Did your educational background prior to library school help you with your work now?

Sally:  I have a B.A. in Philosophy, a Master’s in Divinity, and a Master’s in Exercise Physiology. These are all very different fields from the research disciplines that I’m involved with right now. I do think having worked in a research environment while studying physiology has been a huge plus. It gave me a sound background in research methods and familiarity with research work and environments.

Chris:  My background is a B.A. in History with a minor in Agriculture and Managerial Economics. Very different from computer science! But several years back, I really wanted to get a job as a programmer and was pretty sure that I could teach myself the basics.  I learned programming initially by picking up a C++ book years ago and studying it. It wasn’t easy but I was determined to learn programming so I could work in a software company.  I did get hired as a programmer. The first week on the job was a bit shaky, but I persevered and learned as I went along.

What I missed in working as a programmer, though, was working directly with users. I enjoy working with people. I did a consulting gig for a while and was able to work more with users then. As I thought more about wanting to work with people, I started to consider library school.

What has inspired you?

Sally:  I have been working at UMass Medical School for nine years now, but it’s only been during the last two years that I’ve worked directly with researchers. I know much more now about the research work that is being done at the school, yet the more I know the more I realize how much more is being done here that I know nothing about. I’m inspired by the incredibly bright researchers with whom I have the opportunity to work.  I enjoy the work I’m doing now as an informationist on a neuroscience project. I like looking at the big picture; understanding the project activities and project design and data management challenges. In this project, we’re trying to explore new ways for effectively citing individual neuroimages that are part of a “dataset” that is basically a collection of neuroimages.

Chris:  I was inspired by an internship I did at the Smithsonian in which I worked on DigiLab, an interactive exhibit of digital materials. I enjoyed the work I did there so much, it inspired me to continue learning everything I could about programming for the web.

Another thing that I’ve found motivating is going to user forums to learn new coding skills. They’re ideal places where you can informally learn from a community of other users. When I started learning, forums were intimidating spaces, but they have improved dramatically. Now the users are generally more welcoming, though there is still work to be done to improve the culture so it is less male-dominated. I’ve been favorably impressed by Software Carpentry. They’ve been great to work with, and I always recommend the bootcamps they run to students.

From there my conversations with both Sally and Chris veered to new roles for libraries, data repositories, and revamping library school curricula to include data science courses—topics for other blog posts.  However, I did come away from the interviews with a few do-able approaches for self-learning:

  • Be single-minded. Identify one topic or skill you want to learn and focus on mastering it.

  • Seize opportunities to attend lectures, seminars, and poster presentations on research topics (there are many of these in an academic institution).

  • Enroll in a face-to-face class with required projects.

I found these helpful and hope they’ll help others who are paralyzed by the “so much to learn, so little time” conundrum. I’ll let you know how my self-learning proceeds in a future post!

 

Dr. Lawrence Tabak: Enhancing Reproducibility and Transparency of Research Findings

e-Science Portal Blog - Mon, 09/22/2014 - 16:12

Enhancing Reproducibility and Transparency of Research Findings.
Lecture given by Lawrence Tabak, Principal Deputy Director of the NIH
Part of the Sanger Series at Virginia Commonwealth University, Richmond,VA

The starting point of Dr. Tabak’s lecture was the editorial he and Dr. Francis Collins published in Nature on January 30, 2014, on NIH plans to enhance the reproducibility of research.

One of the things they noted in the article was that the NIH can’t make the changes to research alone.  Scientists, in their roles as reviewers of grants and articles, editors of journals, and members of tenure panels, can help with the process.

Science has always been viewed as self-correcting, and it generally is over the long term, but the checks and balances for reproducibility in the short and medium term are a problem. Tabak discussed several problems with current research publications. Journals want exciting articles, “cartoon biology” according to Tabak, and so methods sections are shrinking – “more like method tweets”. Add to this issues with:

  • poor research design, e.g., not using blinding or randomization, or using a small sample;
  • incorrectly identified materials, e.g., not verifying cell strains or antibodies;
  • variability of animals;
  • contamination of cells;
  • sex differences.

Along with methods issues, Tabak identified problems with poor training in experimental design, poor evaluation leading to more errata and retractions, the difficulty of publishing negative findings, and the “perverse reward incentives” in the US biomedical research system.

What is the NIH doing?

As well as speaking at many venues outside of the NIH (such as this lecture at VCU), the NIH is making efforts to work with editors, industry, and other groups to improve research.

Editors of journals with the most NIH researcher publications were invited to a workshop in June 2014, and a set of principles for journal publication was drawn up.  Science and Nature will run editorials in November with the finalized principles.  The principles will include encouraging rigorous statistical analysis, transparency in reporting, data and materials sharing, and establishing best-practices guidelines.

NIH is working with industry through PhRMA to make training materials on research design available to everyone. And there will be some training films developed at the NIH for use around the country.

Tabak mentioned a couple of projects that should help with the validation of high-quality experimental results. The Reproducibility Initiative is a collaboration between Science Exchange, PLOS, figshare, and Mendeley, and the Open Science Framework from the Center for Open Science allows researchers to register materials and methods before research begins, similar to a clinical trials register.

Tabak also discussed a checklist of core elements that might be used when reviewing grants.  Included was the idea that researchers need to make sure the background articles they build on are reproducible and of high quality. He mentioned that some of the false hopes of patients for a cure for devastating illnesses, such as ALS or cancer, are based on poorly designed animal studies that should never have progressed to clinical trials.

Post-publication review of papers, in forums like PubMed Commons, is one way to ensure transparency.  As well as discussing and clarifying the research, some authors have linked to data sets, including negative data sets, which increases the usefulness of the Commons model.

There was also a discussion of funding for replication studies and of alternative funding models to increase stability for mid-career researchers. Tabak concluded by noting that these new funding models would need to be evaluated, and that different institutes may end up using different models.

What can librarians do?

Throughout the lecture I thought of things librarians could be doing to support scientific transparency and reproducibility. We can encourage the best possible background searching for research, which means training students as well as working with researchers to refine their searching.  We can encourage citation searching and show researchers how to follow up on errata and retractions so they know what others think of the research they are reading. We can encourage the use of social media for informal communications about research. And be sure to keep an eye out for the principles the journals will be sharing in November.
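
As one concrete way to follow up on retractions, here is a hedged sketch that queries PubMed for retracted articles on a topic through NCBI’s public E-utilities API; the endpoint and the “Retracted Publication” publication type are real, but the query below is purely illustrative:

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def find_retracted(query, max_results=20):
    """Return the count and PubMed IDs of retracted articles matching a query."""
    params = {
        "db": "pubmed",
        # The "Retracted Publication" publication type flags articles
        # that were later retracted.
        "term": f'({query}) AND "Retracted Publication"[pt]',
        "retmax": max_results,
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    result = resp.json()["esearchresult"]
    return int(result["count"]), result["idlist"]

# Hypothetical topic query; swap in whatever a researcher is reading about.
count, pmids = find_retracted("amyotrophic lateral sclerosis AND animal model")
print(f"{count} retracted articles found; first PubMed IDs: {pmids}")
```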

As a data librarian, I can encourage proper data management and documentation for reliable reporting.  I can suggest sharing data in various venues and linking that data to the resulting articles.  I can suggest that data management training be part of research design training, and I can offer to provide it.
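
And as a small illustration of the documentation piece, here is a minimal sketch (my own, with hypothetical file and column names) that generates a plain-text data dictionary for a tabular dataset, so the shared data stays interpretable:

```python
import csv
from pathlib import Path

def write_data_dictionary(csv_path, out_path="README_data.txt"):
    """Write a skeleton data dictionary: one line per column, with an
    example value and a description left for the researcher to fill in."""
    with open(csv_path, newline="") as f:
        first_row = next(csv.DictReader(f), {})  # header plus first data row
    lines = [f"Data dictionary for {Path(csv_path).name}", ""]
    for col, value in first_row.items():
        lines.append(f"{col}: example value = {value!r}; description = TODO")
    Path(out_path).write_text("\n".join(lines) + "\n")

# write_data_dictionary("survey_results.csv")  # hypothetical dataset
```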

And libraries can host lectures such as this one: VCU Libraries sponsored this lecture with our Office of Research and Innovation, and it was very well attended.

I’m sure there are ideas I’ve missed.  I would love to hear any ideas you might have for ways librarians can help research transparency and reproducibility.

Video of the lecture is available at http://youtu.be/E06QJTZ6LUw

Two job opportunities at Brown University Library

e-Science Portal Blog - Mon, 09/22/2014 - 09:44

Brown University has two new job opportunities:

  • Physical Sciences Librarian
  • Digital Humanities Librarian

Detailed job announcements for these positions are located on the Employment Opportunities page of the Brown University Library website.

Save the Date for RDAP 2015

e-Science Portal Blog - Mon, 09/22/2014 - 09:19

The 2015 RDAP (Research Data Access & Preservation) Summit will be held April 22-24 in Minneapolis, Minnesota. See the RDAP website for further details.

Teaching Research Data Management at the University of Tennessee, Knoxville

e-Science Portal Blog - Mon, 09/15/2014 - 14:48

Check out the newly published article in the Journal of eScience Librarianship, “Planning Data Management Education Initiatives: Process, Feedback, and Future Directions.” In the article, Christopher Eaker, Data Curation Librarian at the University of Tennessee Libraries, discusses a one-day data management workshop he taught to graduate science and engineering students using modules from the New England Collaborative Data Management Curriculum. As part of the workshop, Eaker asked students to take a pre-workshop survey and a series of seven post-module surveys throughout the day. He discusses findings from the surveys and how they are shaping his plans for future research data management training.

 

Planning underway for 2015 National Digital Stewardship Residency Program

e-Science Portal Blog - Mon, 09/15/2014 - 13:11

The following announcement was made by George Coulbourne, Supervisory Program Specialist, Library of Congress, Office of Strategic Initiatives:

The Library of Congress Office of Strategic Initiatives, in partnership with the Institute of Museum and Library Services (IMLS), is planning another year of the National Digital Stewardship Residency program (NDSR), to be held in the Washington, DC metro area starting in June 2015. As you may know, this program is designed for recent master’s and doctoral graduates interested in the field of digital stewardship.  This will be the fourth class of residents overall: the first, in 2013, was held in Washington, DC, and the second and third, which started earlier this month, are being held concurrently in New York and Boston.

The 2015 DC Residents will each be paired with an affiliated host institution for a 12-month program that will provide them with an opportunity to develop, apply, and advance their digital stewardship knowledge and skills in real-world settings. The participating hosts and projects for the 2015 cohort will be announced in early December, and the application period will open shortly after.  News and updates will be posted to the NDSR webpage (www.digitalpreservation.gov/ndsr) and The Signal blog (http://blogs.loc.gov/digitalpreservation/).

In addition to providing great career benefits for the residents, the NDSR program also benefits the host institutions and the library and archives field in general.

Please help us spread the word about this program, and forward this information to student groups and other organizations who might be interested.  We appreciate your help very much.

To learn more about the NDSR, please visit our website at: www.digitalpreservation.gov/ndsr.

 

In honor of Labor Day–some recent job postings

e-Science Portal Blog - Thu, 08/28/2014 - 14:40

According to the US Labor Department, Labor Day “is dedicated to the social and economic achievements of American workers. It constitutes a yearly national tribute to the contributions workers have made to the strength, prosperity, and well-being of our country.”

And what better way to celebrate Labor Day on e-Science Community than to share a few recent job announcements:

Arizona State University:  2 positions:  Health Sciences Librarian, Digital Projects Librarian

Boston University:  Research Data Management Librarian (Science & Engineering Library)

Medical University of South Carolina:  Research and Education Informationist/Librarian  (2 positions)

New York University:  Data Curator

Santa Clara University:  Science Librarian and Scholarly Communication Coordinator

Tufts University:  Science Collections Librarian

University of Florida:  Data Management Services Librarian

 

Institute for Research Design in Librarianship: Raising the Bar in Library & Information Science Research

e-Science Portal Blog - Mon, 08/25/2014 - 15:15

Submitted by guest contributors: Daina Bouquin, Data & Metadata Services Librarian, Weill Cornell Medical College of Cornell University, dab2058@med.cornell.edu; Chris Eaker, Data Curation Librarian, University of Tennessee Libraries, ceaker@utk.edu

Why do librarians need to do research? Or rather, why does anyone need to do research? Librarians conduct research to better understand the communities they serve and to develop responses that reflect those communities’ needs. Whether in biomedical research, engineering, art history, or library science, research is imperative to developing the skills needed to act on innovative ideas and to support decisions with data. Publication allows researchers to share their findings with the wider scholarly community and to build upon the findings of others. Research in the library and information science fields also helps increase receptivity to change in established environments; improves management skills through systematic study and data-driven decision making; and helps researchers provide better service to, and empathy for, faculty researchers within their institutions (Black & Leysen, 1994; Montanelli & Stenstrom, 1986). Librarians who engage in research may also be better equipped to initiate new services that meet the specific needs of their communities. Furthermore, research in the academic library environment is not only useful but expected of many academic librarians, and librarians who produce comprehensive research are better able to progress toward promotion, tenure, higher salaries, advancement in the profession, and well-warranted recognition.

However, many librarians are confronted with barriers to pursuing research. Many of these obstacles have been documented in the literature and include lack of time to conduct research, unfamiliarity with the research process, lack of support for research, lack of confidence, and inadequate education in research methods (Koufogiannakis & Crumley, 2006, p. 333; Powell, Baker, & Mika, 2002, p. 50; McNicol & Nankivell, 2003). In response to these barriers, librarian researchers at Loyola Marymount University developed the Institute for Research Design in Librarianship (IRDL), a nine-day continuing education program designed to mitigate these obstacles and train world-class library and information science researchers.

And so this past June, from June 16 through June 26, twenty-five academic librarians and information professionals participated in the first-ever Institute for Research Design in Librarianship (IRDL) at Loyola Marymount University in Los Angeles, California. The IRDL is funded for three years by the Institute of Museum and Library Services to train a total of 75 professionals (25 per year) in research methods and to support them in developing professional research networks as they embark on their first attempts at comprehensive research and publishing in peer-reviewed journals. The first cohort of 25 IRDL Scholars (including the two authors of this article) was chosen from 86 applicants in a competitive application process. To apply, applicants had to submit a proposal for a research project they would like to conduct once IRDL was over.

During IRDL, scholars received comprehensive training in the nuts and bolts of the research process. Topics included creating research questions and hypotheses and using qualitative methods (e.g., in-depth interviews and focus groups), quantitative methods (e.g., surveys), and mixed-methods research. Scholars were also given hands-on training in quantitative and qualitative data analysis techniques and software, such as SPSS and NVivo. By studying these aspects of the research process and consulting with peers and instructors, scholars began developing the skills to become more critical consumers of published research, a skillset that is key not only to producing quality research but also to contributing to meaningful discussion and criticism of research in information science. Scholars were also introduced to realistic approaches to publishing, to better prepare them to share their prospective research findings.
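
To give a flavor of the quantitative side, here is a small, self-contained example (not IRDL course material) of a chi-square test of independence on survey counts, using Python’s scipy in place of SPSS; the counts are fabricated purely for illustration:

```python
from scipy.stats import chi2_contingency

# Fabricated counts: rows are career stage, columns are whether the
# respondent has published research (yes, no).
observed = [
    [34, 66],   # early-career librarians
    [52, 48],   # mid-career librarians
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# A small p-value would suggest publishing rates differ by career stage.
```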

The IRDL program also reflected an emphasis on the importance of a supportive learning environment, mentorship opportunities, and tools to jump-start a new research agenda. Additionally, the Institute gave scholars access to qualitative and quantitative methods experts both inside and outside the library and information science fields, addressing the need to improve the quality of library and information science research. An article published in The Journal of Academic Librarianship analyzed the contents of 1,880 articles in library and information science journals, and the authors found that only 16% (roughly 300 of the 1,880) “qualified as research,” which they defined as “an inquiry which is carried out, at least to some degree, by a systematic method with the purpose of eliciting some new facts, concepts, or ideas” (Turcios, Agarwal & Watkins, 2014). The study also found that surveys were the most commonly used research method among the studies published in the reviewed journals. These results suggest that although research is being done, librarians may not be making full use of the methods available to them and may not be producing as much “research” as they suspect. The goals of the IRDL reflect this sentiment.

During IRDL, scholars had to refine their initial proposals based on the new skills and concepts they were learning. Now that the IRDL Scholars have returned to their respective institutions, the real work begins. Scholars are finalizing their research designs and submitting IRB applications so they can begin conducting their research. Over the next several months, institute scholars will be conducting interviews and focus groups, administering surveys, and maybe even using our new favorite research method: garbology! Over the next year, keep an eye on the library and information science journals for articles from the IRDL scholars’ many and varied research projects.

If you’re a new librarian, or a librarian who is still unsure of the research process, we encourage you to apply for next year’s IRDL. The IMLS has funded IRDL for three years, and the organizers are working on plans to make it sustainable so that many more cohorts of librarians can be trained in sound research methods and techniques. You can find out more about IRDL at http://irdlonline.org/ or on Twitter @IRDLonline and #IRDL. You will be overwhelmed with information, but that’s the price we must pay to move our research to the next level.

References:

Black, W. K., & Leysen, J. M. (1994). Scholarship and the academic librarian. College & Research Libraries, 55, 229-241.

Koufogiannakis, D., & Crumley, E. (2006). Research in librarianship: Issues to consider. Library Hi Tech, 24(3), 324-340. doi:10.1108/07378830610692109

McNicol, S., & Nankivell, C. (2003). The LIS research landscape: A review and prognosis. Centre for Information Research. Retrieved from http://www.researchgate.net/publication/228392587_The_LIS_research_landscape_a_review_and_prognosis

Montanelli, D. S., & Stenstrom, P. F. (1986). The benefits of research for academic librarians and the institutions they serve. College & Research Libraries, 47, 482-485.

Powell, R. R., Baker, L. M., & Mika, J. J. (2002). Library and information science practitioners and research. Library & Information Science Research, 24(1), 49-72. doi:10.1016/S0740-8188(01)00104-9

Turcios, M. E., Agarwal, N. K., & Watkins, L. (2014). How much of library and information science literature qualifies as research? The Journal of Academic Librarianship. doi:10.1016/j.acalib.2014.06.003

 

ICPSR Managing and Curating Data Workshop

e-Science Portal Blog - Tue, 08/19/2014 - 11:19

Submitted by guest contributor  Willow Dressel, Plasma Physics/E-Science Librarian, Princeton University. wdressel@princeton.edu

The last week of July I attended ICPSR’s workshop Curating and Managing Research Data for Reuse at the University of Michigan in Ann Arbor.  The workshop is part of ICPSR’s summer program and was started three years ago. I attended to get a firmer grasp on managing research data and to begin developing a deeper understanding of what curation involves.

The workshop was presented by curators from both ICPSR and the UK Data Archive and followed the ICPSR Pipeline Process for curation, with each day progressing through the issues and actions associated with Deposit, Processing, Delivery, and Access. There was a healthy mix of lecture and hands-on activities. The roughly twenty participants were international and came from diverse backgrounds, including social science research, other data repositories, and libraries, which provided perspectives that greatly enhanced class discussions.

Like many of my colleagues, I am a science librarian who has been tasked with developing services and resources to help science researchers manage their research data. Over the last couple of years I have attended various workshops and conferences to get up to speed.  In that time I have learned a lot about the issues around managing and preserving scientific research data, as well as what other libraries are doing, and I have managed to put together some basic services such as data management plan consultation and assistance with depositing in disciplinary repositories.

However, as I begin to put together a data management workshop and LibGuide, I can feel my knowledge gaps in this area. I understand the need for things like documentation, stable file formats, storage and back-up, file cleaning, and confidentiality, but I don’t have a deep understanding of how to do these things; I am still reading and learning as I go. As an undergraduate physics and astronomy major, I worked with only a little spreadsheet data, and that was ten years ago. It’s hard to feel confident giving people advice on how to manage their data when I have worked with so little data myself. This workshop offered many hands-on exercises, including work with both quantitative and qualitative data. Before attending, I had been concerned that the workshop’s heavily social science perspective might not be relevant to me as a science librarian; now I believe it is a benefit. Who better to learn from than a field with established disciplinary repositories and a long culture of managing, curating, and reusing its data?
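
As a small example of two of those practices, here is a hedged sketch (the filenames are hypothetical) of exporting data from a proprietary spreadsheet format to an open one and recording a checksum so backups can be verified later, in Python:

```python
import hashlib

import pandas as pd

def export_and_fingerprint(xlsx_path, csv_path):
    """Convert a spreadsheet to CSV (a stable, open format) and return
    the CSV's SHA-256 digest for later backup verification."""
    pd.read_excel(xlsx_path).to_csv(csv_path, index=False)
    with open(csv_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# checksum = export_and_fingerprint("lab_data.xlsx", "lab_data.csv")
# Store the checksum alongside the backup; recompute it later to confirm
# the file has not been corrupted or altered.
```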

As for the curation aspect of the workshop, data curation is not currently in my job description, and my institution doesn’t currently offer data curation services. Nevertheless, curation is an important aspect of dealing with research data, and I believe that understanding its processes and issues will help me assist researchers with depositing in repositories, as well as inform any future development of such services.
