Resources are of paramount importance as they foster scientific advancement and the development of novel applications. Sharing resources is key to ensuring reproducibility: it allows other researchers, guided by the FAIR principles for scientific data management, to compare results and methods or to explore new lines of research, and it supports practitioners in reusing research outputs.
The ISWC 2024 Resources Track aims to promote the sharing of resources that support, enable, or utilize semantic web research. We welcome descriptions of resources that leverage knowledge representation based on Semantic Web standards or other graph data models to improve the acquisition, processing, and sharing of data on the web.
Resources include, but are not restricted to: datasets, knowledge graphs, ontologies/vocabularies, ontology design patterns, evaluation benchmarks or methods, software tools/services, APIs and software frameworks, workflows, crowdsourcing task designs, protocols, methodologies, and metrics that have contributed or may contribute to the generation of novel scientific work and applications in the semantic web. In particular, we encourage the sharing of such resources following well-established best practices within the semantic web community. As such, this track calls for contributions that provide a concise and clear description of a resource and its usage.
Important Dates
All deadlines are 23:59 AoE (Anywhere on Earth)
- Abstracts Due: 10 April 2024
- Full Paper Due: 17 April 2024
- Rebuttal: 3–6 June 2024
- Notifications: 27 June 2024
- Camera-ready Paper Due: 31 July 2024
Resources of Interest
A typical Resources Track paper reports on a resource that falls into one of the following categories.
- Datasets produced
- to support specific evaluation tasks (for instance, labeled ground truth data);
- to support novel research methods;
- by novel algorithms.
- Knowledge graphs, represented using semantic web technologies or other graph models for web data, which can be reused in research or industry.
- Ontologies, vocabularies, and ontology design patterns, with a focus on describing the modelling process underlying their creation.
- Reusable software and services, e.g., prototypes supporting a given research hypothesis or services enabling specific data processing and engineering tasks.
- Community-shared software frameworks that can be extended or adapted to support scientific study and experimentation.
- Crowdsourcing task designs that have been used and can be (re)used for building resources such as gold standards and the like.
- Benchmarking activities focusing on datasets and algorithms for comprehensible and systematic evaluation of existing and future systems.
- Novel evaluation methodologies and metrics, and their demonstration in an experimental study.
- Protocols for conducting experiments and studies.
Differentiation from the Other Tracks
We strongly recommend that prospective authors carefully check the calls of the other main tracks of the conference in order to identify the optimal track for their submission. Papers that propose new algorithms and architectures should continue to be submitted to the regular Research Track, whilst papers that describe the use of semantic web technologies in practical settings should be submitted to the In-Use Track. When reusable resources, such as datasets, ontologies, or workflows, are produced in the course of achieving such results and can be applied to a wider range of use cases, they are suitable subjects for submission to the Resources Track. Examples of resources that fit the Resources Track include tools immediately available for reuse, or benchmarks where baseline algorithms are used only to prove their relevance.
Review Criteria
The program committee will consider the quality of both the resource and the paper in its review process. Therefore, authors must ensure unfettered access to the resource both during and after the review process, by citing the resource at a permanent location. For example, data should be available in a repository such as FigShare, Zenodo, or a domain-specific repository, and software code should be available in a public code repository such as GitHub, BitBucket, or one's institutional open data repository. Code releases should be properly deposited according to community best practices. In exceptional cases, when it is not possible to make the resource public, authors must provide anonymous access to the resource for the reviewers and briefly motivate why the resource cannot be made public. All resources should clearly disclose their license.
We welcome the submission of established resources that already have a community of users beyond the authors, as well as new resources that cannot yet demonstrate established reuse but provide sufficient evidence and motivation for claiming potential adoption. Evidence of adoption of a resource is considered a positive factor in the evaluation.
All resources will be evaluated along the following review criteria:
Impact:
- Does the resource break new ground?
- Does the resource fill an important gap?
- How does the resource advance the state of the art?
- Has the resource been compared to other existing resources (if any) of similar scope?
- Is the resource of interest to the semantic web community?
- Is the resource of interest to society in general?
- Will the resource have, or has it already had, an impact, especially in supporting the adoption of semantic web technologies?
Reusability:
- Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively (for new resources), what is the resource’s potential for being (re)used?
- Is the resource easy to (re)use? For example, does it have high-quality documentation? Are there tutorials available?
- Is the resource general enough to be applied in a wider set of scenarios, not just for the originally designed use? If it is specific, is there substantial demand?
- Is there potential for extensibility to meet future requirements?
- Does the resource include a clear explanation of how others use the data and software? Or (for new resources) how others are expected to use the data and software?
- Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?
Design & Technical Quality:
- Does the design of the resource follow resource-specific best practices?
- Did the authors perform an appropriate reuse or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
- Is the resource suitable for solving the task at hand?
- Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/Dublin Core? (A minimal sketch of such a description follows this list.)
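As a point of reference only, the following is a minimal sketch of a machine-readable dataset description in Turtle using DCAT, VoID, and Dublin Core terms; the dataset IRI, title, counts, and download URL are hypothetical placeholders, not a prescribed format.

```turtle
@prefix dcat:    <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical dataset IRI; a persistent URI (e.g., via w3id.org) is preferable.
<https://w3id.org/example/dataset>
    a dcat:Dataset , void:Dataset ;
    dcterms:title       "Example Benchmark Dataset"@en ;
    dcterms:description "Labeled ground-truth data for an example evaluation task."@en ;
    dcterms:license     <https://creativecommons.org/licenses/by/4.0/> ;
    dcterms:issued      "2024-04-17"^^xsd:date ;
    void:triples        1200000 ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <https://example.org/dataset.nt.gz> ;
        dcat:mediaType   <https://www.iana.org/assignments/media-types/application/n-triples>
    ] .
```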
Availability:
- Is the resource publicly available, for example, as an API, as Linked Open Data, as a download, or in an open code repository?
- Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo or GitHub?
- Is there a sustainability plan specified for the resource? Is there a plan for the medium and long-term maintenance of the resource?
- Does the resource adopt open standards, when applicable? Alternatively, does it have a good reason not to adopt standards?
In addition to the above evaluation criteria, we stress that the following availability requirements must be fulfilled:
- Mandatory: Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
- Mandatory: Is there a canonical citation associated with the resource?
- Mandatory: Does the resource provide a license specification? (See creativecommons.org, opensource.org for more information)
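For illustration only (not a required format), the three mandatory items can also be surfaced in machine-readable form. The sketch below uses Turtle with a hypothetical w3id URI, DOI, citation, and license; equivalent prose statements in the paper satisfy the requirements just as well.

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .

# All IRIs and the citation below are hypothetical placeholders.
<https://w3id.org/example/resource>                                   # persistent URI
    dcterms:identifier "https://doi.org/10.5281/zenodo.0000000" ;     # hypothetical DOI
    dcterms:bibliographicCitation
        "Doe, J. (2024). Example Resource (v1.0). Zenodo." ;          # canonical citation
    dcterms:license <https://creativecommons.org/licenses/by/4.0/> .  # license specification
```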
Guidelines for reviewers are available here.
To ensure that reviewers and readers of published papers will easily find the mandatory availability information, please use the Resource Availability Statement Guide and suggested wording.
Regarding specific resource types, checklists of their quality attributes are available in a presentation. Both authors and reviewers may make use of them when assessing the quality of a particular resource.
Submission Details
- Pre-submission of abstracts is a strict requirement. All papers and abstracts have to be submitted electronically via EasyChair.
- Papers describing a resource must be between 8 and 15 pages, plus references. Papers must describe the resource and focus on the sustainability and community surrounding the resource. Benchmark papers are expected to include evaluations and provide a detailed description of the experimental setting. Papers that exceed the page limit will be rejected without review.
- All submissions must be in English.
- Submissions must be either in PDF or HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions. For HTML submission guidance, please see the HTML submission guide used for ISWC 2024.
- ISWC 2024 submissions for the Resources Track are single anonymous, i.e., authors are named and reviewers are anonymous. We encourage embedding metadata in the PDF or HTML to provide a machine-readable link from the paper to the resource; a sketch of such metadata follows this list.
- Authors of accepted papers will be required to provide semantic annotations for the abstract of their submission, which will be made available on the conference website. Details will be provided at the time of acceptance.
- Accepted papers will be distributed to conference attendees and also published by Springer in the printed conference proceedings, as part of the Lecture Notes in Computer Science series.
- At least one author of each accepted paper must register for the conference and present the paper. As in previous years, students will be able to apply for registration or travel support to attend the conference. Preference will be given to students who are first authors of papers accepted to the main conference or the doctoral consortium, followed by those who are first authors of papers accepted to ISWC workshops and the Poster & Demo session.
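Regarding the machine-readable link from the paper to the resource encouraged above, one possible shape for such metadata is a single citation triple, sketched here in Turtle using CiTO with hypothetical IRIs; in an HTML submission, the same statement can be embedded as RDFa or JSON-LD.

```turtle
@prefix cito:    <http://purl.org/spar/cito/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical IRIs for the paper and the resource it describes.
<https://example.org/iswc2024/paper-42>
    dcterms:title "An Example Resources Track Paper"@en ;
    cito:citesAsDataSource <https://w3id.org/example/resource> .
```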
Prior Publication and Multiple Submissions
- ISWC 2024 will not accept resource papers that, at the time of submission, are under review for or have already been published or accepted for publication in a journal, another conference, or another ISWC track. The conference organizers may share information on submissions with other venues to ensure that this rule is not violated.
Research Metadata and Comparisons
To help you state novelty clearly to readers and peer reviewers alike, to improve the findability of the paper if accepted, and to put knowledge graphs to use ourselves, you may add to the paper a so-called "ORKG comparison" with the Open Research Knowledge Graph (ORKG). An ORKG comparison characterizes a submission by juxtaposing it with related resources, if there are any, thereby highlighting the key differences between your resource and related ones. More information on the background and how to create an ORKG comparison can be found here (including a how-to video). This can be done during the submission process, in which case a link to the comparison can be added to the submission for reviewers. This workflow describes the steps involved in the creation of such a comparison.
This addition to an ISWC paper submission is experimental and optional. It may not be relevant to your resource, and the absence of such a comparison will not negatively affect the review of the paper.
Resources Track Chairs
Maribel Acosta, Technical University of Munich, Germany
Matteo Palmonari, University of Milano-Bicocca, Italy
Contact: iswc2024-resource@easychair.org