Samples or Bibliometrics
Our readings this week look at two different areas: Beck and Martin discuss bibliometrics, while Pyrczak discusses samples.
Either:
Look for an example of a bibliometric study conducted in your chosen library or information organization (e.g., school libraries, academic libraries, archives, public libraries, etc.) and briefly share the study and its results. Where does it fit into the rest of bibliometric research? (e.g., How does it contribute to our knowledge? What were the major findings? Does it suggest any courses of action for best library practice?)
Or:
Look in the popular press (newspapers, magazines, blogs, etc.) for reports of research that used samples. How is the research reported? Do the reports allow you to judge the samples in light of Pyrczak's questions? Now that you know what questions to ask, how effective is the reporting of the research you discovered?
My article is: Slutsky, B., & Aytac, S. (2016). Bibliometric analysis and comparison of two STEM LIS journals: Science & Technology Libraries and Issues in Science & Technology Librarianship (2005–2014). Science & Technology Libraries, 35(2), 152-171. doi:10.1080/0194262X.2016.1171191
This article is a bibliometric examination of two of the leading academic journals focused on STEM (Science, Technology, Engineering, and Mathematics) librarianship: Science & Technology Libraries (STL) and Issues in Science & Technology Librarianship (ISTL). It examines the research articles (columns, editorials, reviews, etc. are excluded) published in the two journals over the ten-year period 2005–2014, addressing eight primary research questions with the general aim of reviewing the two journals and how they function as potential publishing venues. An impact factor was not presented because neither journal was indexed in Web of Science.
Primary research questions included:
- RQ1) What is the yearly distribution of articles from the two journals?
- RQ2) Does the subject coverage of the two journals differ?
- RQ3) What is the nature of authorship patterns and degrees of collaboration?
- RQ4) Do the average number and format distribution of the citations from the two journals differ?
- RQ5) How many times did STL articles cite ISTL articles and vice versa, and what is the influence of journals from outside the field on both of them?
- RQ6) Is there a difference between the top institutions that publish in the respective journals?
- RQ7) How different is the chronological distribution of citations?
- RQ8) What percentage of the citations is in library science versus other disciplines?
Major findings from the article were:
- RQ1 - Yearly distribution of articles was comparable – 163 articles from STL and 175 from ISTL.
- RQ2 - Bibliometrics and citation analysis were the main topics in STL, while libraries and librarianship were the main topics in ISTL. The distribution of other topics was comparable between the journals.
- RQ3 - The percentage of articles written by at least one science librarian was comparable between the two journals. There was a difference in collaboration metrics: 35% of STL articles were collaborations, versus 51.4% in ISTL (including 9 student collaborations with librarians). Collaboration with LIS faculty was minimal in both journals.
- RQ4 - Journals were the most cited source of information, followed by web resources and books. The citation percentages were comparable between the two journals.
- RQ5 - Both journals tended to cite themselves more than any other journal in the LIS discipline, with similar percentages of self-citation. The most highly cited journals were very similar for the two publications, and each cited the other at a similar rate (i.e., STL citing ISTL and ISTL citing STL).
- RQ6 - The University of Arkansas was the leading institution for authors in STL, while Indiana University was the main source for ISTL. Though not directly called out in the article, Pennsylvania State University was a significant contributor to both journals, accounting for 22 articles in total, compared to 24 for the University of Arkansas and 16 for Indiana University.
- RQ7 - 30.4% of citations in STL were pre-2000, versus 18.9% for ISTL – the implication being that the literature reviews in STL articles were more rigorous.
- RQ8 – 40.9% of STL citations were to non-library-science journals, versus 27.5% for ISTL. STL authors cited more sci-tech journals.
- The total number of citations, SNIP, IPP, and SCImago journal rank were all higher for STL.
Because the journals examined in this article deal largely with issues facing science/technology librarians, it can be considered a subset investigation of the group of LIS journals that cater to these practitioners. As the researchers note in their Discussions and Recommendations section, there are at least ten other journals for sci-tech librarians, and the scope of the investigation could be expanded to cover those journals as well as a longer time frame than the ten years used here. Ultimately, what this bibliometric analysis accomplishes is to help characterize the journals along a number of factors, including but not limited to academic rigor, author credentials, and journal influence. Based on those factors, STL would be the journal with the greater rigor and influence among fellow science-technology librarians. The analysis also offers advice on which journal would be most likely to publish student work – ISTL in this case, based on the number of articles published by LIS students.
On a personal note: it occurs to me that, given the vast number of journals catering to librarians across all practice domains, a bibliometric analysis of all those journals using Bradford's law would help novice researchers know what level of influence/impact a journal carries and make informed qualitative judgments about which journals to emphasize as sources when searching for research materials. It would be helpful to know that an article from STL is, on average, more likely to advance domain practice than an article from ISTL, for example. Granted, this might amplify the Matthew effect for the most influential journals by increasing their citations and minimizing citations to second- and third-tier journals. But if the best research should be cited the most, that seems in keeping with high standards of academic rigor. Science should not model its practice on giving equal weight to all voices, as the blogosphere seems to promote; some are clearly judged by peers to be superior. Ideally, a ranking of authorial authority makes the research process more efficient by highlighting the better sources.
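To make the Bradford's-law idea concrete, here is a minimal sketch of a zone analysis in Python. Bradford's law says that if you rank journals by the number of articles they contribute on a subject and split them into zones holding roughly equal shares of the articles, the number of journals per zone grows roughly as 1 : n : n². The journal names and article counts below are entirely hypothetical, and the zone split uses a simple cumulative-thirds heuristic rather than any particular published method.

```python
# Hypothetical Bradford's-law zone analysis; all data invented for illustration.

def bradford_zones(counts, n_zones=3):
    """Rank journals by article yield, then split them into zones that
    each hold roughly an equal share of the total articles."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(c for _, c in ranked)
    target = total / n_zones          # articles each zone should hold
    zones, current, acc = [], [], 0
    for journal, c in ranked:
        current.append(journal)
        acc += c
        # Close a zone once it reaches its share (keep the last zone open
        # so every remaining journal lands somewhere).
        if acc >= target and len(zones) < n_zones - 1:
            zones.append(current)
            current, acc = [], 0
    zones.append(current)
    return zones

# Invented article counts for journals in a single subject area.
counts = {
    "Journal A": 90, "Journal B": 45, "Journal C": 40,
    "Journal D": 20, "Journal E": 15, "Journal F": 12,
    "Journal G": 10, "Journal H": 8, "Journal I": 5,
    "Journal J": 5,
}

for i, zone in enumerate(bradford_zones(counts), start=1):
    print(f"Zone {i}: {len(zone)} journal(s) -> {zone}")
```

With these invented numbers the zones come out as 1, 2, and 7 journals, which is close to the 1 : n : n² pattern the law predicts; a novice researcher could read the first zone as the "core" journals worth emphasizing first.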