Fairly Toolset: research data management and publishing at your fingertips!
Speakers: Serkan Girgin, Manuel Garcia Alvarez and Jose Urra Llanusa.
The Fairly toolset provides computational researchers with an open-source tool to connect the datasets in their computational research environments with their counterparts in a data repository. Hence, the Fairly toolset could be the Git for your datasets.
As described on the OSF website, the toolset consists of three parts: a Python library with a standard API that allows easy cloning and downloading of datasets from repositories, local data and metadata management, easy and unattended uploading of datasets to repositories, and smart data and metadata synchronization between local and remote datasets; a command line tool that enables research data management without programming skills; and a JupyterLab extension to manage datasets through a graphical user interface. Various data repository platforms, such as Zenodo, Figshare and 4TU.ResearchData, are supported. The toolset, developed by Faculty ITC, TU Delft DCC and 4TU.ResearchData through NWO Open Science funding, is relevant for researchers, data stewards, research software engineers, data managers, and practically anyone who develops or manages research data.
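As an impression of what the clone-edit-upload cycle might look like with the Python library, here is a minimal sketch based on the toolset's documentation; the DOI and token below are placeholders, and exact function names and arguments may differ between versions, so check the fairly documentation before use:

```python
import fairly

# Connect to a supported repository; the access token comes from
# the platform's account settings (placeholder value below)
client = fairly.client(id="zenodo", token="<YOUR_ACCESS_TOKEN>")

# Clone a published dataset (placeholder DOI) to a local directory
remote = client.get_dataset("10.5281/zenodo.1234567")
remote.store("./my-dataset")

# ...edit the data and metadata locally...

# Open the local dataset again and push the changes to the repository
dataset = fairly.dataset("./my-dataset")
dataset.upload("zenodo")
```

The same cycle is also available from the command line and through the JupyterLab extension, which is what makes the "Git for your datasets" comparison apt.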
Based on the short introduction to the Fairly toolset during the workshop, I would say that the tool can add value for computational researchers (and support staff) who want to easily create and publish FAIR research datasets directly from their research environment. A full training would be necessary though, since this introductory workshop was too short to fully capture the added value of the tool within a research data processing pipeline.
FAIRifying Open Educational Resources
Speakers: Dorien Huijser, Marie-Louise Goudeau & Ruud Dielen (Utrecht University), Pedro Hernández Serrano and Maria Vivas-Romero (Maastricht University).
Session materials available on Zenodo.
At Hasselt University, we give training sessions on different aspects of Open Science, and Jolien (Berckmans) also teaches the Open Science course in the postgraduate programme Information Management at the Erasmushogeschool. To practice what we teach, we want to make our training materials FAIR. We already post our outputs on our OSF page, where they are Findable through a DOI and basic metadata, Accessible to everyone, and Reusable under a CC BY license. During this workshop, however, we learned that we could make our materials more FAIR by adding documentation (e.g. notes), because it might be challenging for an outsider to understand and reuse the slides without it. The PDF format that we are using is a preferred format for archiving (cf. DANS), but not an interoperable format for open educational resources (nor is the .ppt format, as a participant pointed out). Ideally, an OER should be adaptable and therefore in an open format.
(Garcia et al. (2020). Ten simple rules for making training materials FAIR.)
If you want to know whether your OER, or any online resource, can be found by an API, search engine or crawler, you can use the Google rich results test. To enhance that findability, the FAIR metadata wizard can help: you upload a JSON file, edit the metadata (or start from the example), and download it again as a JSON file. Our workshop instructors shared a ready-made template for an OER, which they had used to provide metadata for their Qualitative Data CourseBook.
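For illustration, the kind of metadata such a JSON file can carry is a schema.org description of the resource; the example below is a hypothetical sketch with placeholder values, not the wizard's actual template:

```json
{
  "@context": "https://schema.org",
  "@type": "LearningResource",
  "name": "Introduction to Open Science (slides)",
  "description": "Slide deck introducing Open Science practices.",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "identifier": "https://doi.org/10.0000/example-doi",
  "learningResourceType": "presentation",
  "inLanguage": "en"
}
```

Embedded in a web page as a `<script type="application/ld+json">` block, this is the kind of structured data that search engines and the Google rich results test can pick up.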
As a final remark, this show was a real success because of the interaction with the audience. Participants talked about edusources, a platform for OER in higher education in the Netherlands, and FORRT, a platform for OER on Open Science. It was suggested that we should work with Codeberg instead of GitHub, as GitHub is owned by Microsoft while Codeberg is run by a non-profit and built on open-source software. Furthermore, there was a lively discussion about getting teachers more involved with OER. We should make the connection between (open) science and (open) education more obvious for them, seeing that “research output becomes educational input”. The FAIR principles should apply to both.
Practical tips for making qualitative data reusable
Speakers: Maaike Verburg, Widia Mahabier and Ricarda Braukmann (DANS).
A guidebook on making qualitative data reusable? That is music to our ears! Ricarda Braukmann presents the qualitative data reuse decision tree, which not only looks nice, but is practical too.
When you have fully anonymized data, you can in theory archive them under Open Access, although you should still consider the interests of the participants. Alternatively, you can upload the data to a restricted-access repository, allowing reuse by a specific group of people or for a specific purpose. Two (maybe) lesser-known options are analysis of the data in a secure environment and decentralized analysis. With the first option, the researcher can only work on the data in a separate, protected environment, and has to store the results and interpretations in that environment as well. With the second option, the researcher does not get to see the data at all, but asks the original researchers to analyze them according to an analysis protocol that he/she provides; afterwards, he/she receives only the results of that analysis. Finally, when you have considered all the options and none of them suits your project, it is still recommended to publish the metadata and the documentation, so that your research can be found by other parties.
We are convinced that all researchers and data stewards should know the lyrics of this guidebook by heart, or at least read them and reflect critically on them. What is more, we would like to invite Braukmann (and her fellow songwriters) to present the guidebook to our community and our researchers as well. We could all learn, brainstorm, and possibly give feedback and suggestions to further develop the guidebook. Parties interested in sharing expertise and developing a collaborative training on sharing qualitative research data, please contact the authors of this blog post.
Recognition & Rewards: Open & Responsible Research Assessment
Speakers: Lizette Guzman-Ramirez, Nami Sunami and Jeffrey Sweeney.
As a member of the FRDN Project Group Research Assessment, I, Nicky (Daniels), expected this session to be the highlight of my attendance at the Open Science Festival 2023. But I think it is fair to say that this workshop did not go as planned. During the workshop, the speakers from the Erasmus Research Institute of Management presented their framework of badges to Reward Open & Responsible Research Practices. Once they had introduced how these badges were developed and how they would be assigned to researchers, the attendees could not help but wonder whether this is the system that could lead to responsible research assessment. Though the intentions and ideas behind the framework are great, the practicalities and the mindset of the public are not there yet. The session quickly turned into a Q&A in which the speakers had to defend their work instead of building on the content they provided and testing it against use cases. The most prominent issue is that this framework might open the door to yet again comparing research profiles based on the amount of research output, or in this case badges, rather than adhering to the position paper ‘Room for everyone’s talent’. Thus, solely using this one particular Open Science badge framework will not be sufficient to reform research assessment. However, the badges could contribute to making a narrative research profile more evidence-based, intuitive and transparent, as well as provide researchers with incentives to put Open Science into practice.
Filling the Ethics Gaps for Citizen Science: Co-creating an Ethical assessment for Citizen Science Projects
Speakers: Ana Barbosa Mendes and Chiara Stenico (EUR).
Current ethics applications fail to do justice to the dynamic relationship between researchers and participants, which is of the utmost importance in citizen science. Where is the room for the citizen scientist in the ethical review? In three groups, we discuss the topics of engagement, acknowledgement and stakeholders. Ideas coming out of these discussions are the need for a collaboration agreement with the participants instead of an informed consent form, a strategy for acknowledging participation (how are citizen scientists compensated?), and a desire to keep out negative outside meddling, although the latter should be further defined and explained in the ethics proposal (how do you decide what counts as “negative interest”?). For me, this was more of a “jam session”: it was a bit all over the place, but there were definitely some interesting tunes and songs going on! I look forward to hearing what final result comes out of it.