The Euro-Par conference series encourages authors of accepted papers to participate in the Artifact Evaluation Process (AEP). The authors of papers accepted at Euro-Par 2023 will be formally invited to submit their support material (e.g., source code, tools, benchmarks, datasets, models) to the AEP to assess the reproducibility of the experimental results presented in the accepted paper. The artifact will undergo a completely independent review process, run by a separate committee of experts who will assess the quality of the artifact, the reproducibility of the experimental results shown in the paper, and the usefulness of the material and guidelines provided along with the artifact.
All artifacts will receive a review. The review will consist of a few comments stating whether the evaluation was successful and providing hints for improving the document. A technical clarification window will follow, during which the reviewers can anonymously ask the corresponding authors of the artifact to resolve technical issues encountered. The issues must be resolved within a few days; otherwise, the artifact will not be accepted.
Papers whose artifacts are accepted will receive a seal of approval printed on the first page of the paper as it appears in the final proceedings published by Springer. The artifact material will be made publicly available.
Although warmly encouraged, the artifact evaluation process is completely optional and will not, in any case, modify the acceptance decision already made on the Euro-Par papers.
- Artifact submission deadline: 13 May 2023 (AoE)
- Technical Clarification Window: 27 May 2023 - 2 June 2023
- Artifact notification: 3 June 2023
- Only authors of accepted Euro-Par 2023 papers are invited to submit an artifact
- You must use the same title and authors for the paper and the artifact
- Artifacts should be provided as a single ZIP file including:
  - the paper;
  - an Overview Document in PDF format;
  - the artifact itself, or a URL plus an MD5 hash pointing to the artifact.
We are delighted to announce that this year we will offer a prize for the best artifact.
The decision will be made based on the results from the Artifact Evaluation phase.
The prize money is 500 Euro.
Accepted Artifacts will be considered for the Euro-Par 2023 Artifact Special Issue in the Journal of Open Source Software (JOSS). More information about the standard submission procedure for the JOSS journal can be found at this link https://joss.readthedocs.io/en/latest/submitting.html.
An example of Euro-Par 2022 JOSS accepted Artifact can be seen here: https://www.theoj.org/joss-papers/joss.04591/10.21105.joss.04591.pdf
If your paper is accepted at Euro-Par 2023, you can submit your artifact before the deadline using the EasyChair link: https://easychair.org/my/conference?conf=europar2023 (Euro-Par 2023, track Artifact Evaluation).
The title and authors of the submission must match those of the accepted paper. Your artifact submission will take one of two forms:
- A file containing a URL pointing to a single ZIP file containing the artifact, plus an MD5 hash of that file (use the md5 or md5sum command-line tool to generate the hash) and the paper ID of the Euro-Par accepted paper.
- Direct upload: the artifact uploaded directly to EasyChair (if it is less than 50MB).
In the first case, the URL must be a Google Drive or Dropbox URL, to help protect the anonymity of the reviewers.
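As a hypothetical illustration, the same MD5 digest that md5/md5sum produce can also be computed in Python's standard library, which may be convenient on systems where neither tool is installed (the filename `artifact.zip` below is a placeholder for your actual archive):

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 hex digest of a file, reading it in chunks
    so that large artifact archives need not fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage:
#   print(md5_of_file("artifact.zip"))
```

The chunked read matters only for very large archives; for a typical artifact ZIP a single `f.read()` would work just as well.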
Your artifact must include an Overview Document as described below in PDF format.
A valid type of artifact is a working copy of software (and dependencies) that supports the paper’s conclusions. The ZIP file includes, along with the Overview Document, README files, datasets, examples, benchmarks and case studies needed to reproduce the results contained in the accepted paper.
All necessary packages, dependencies and any additional software required to run the artifact must be explicitly listed in the document and, where possible, included in the artifact evaluation ZIP file. Artifacts that require proprietary software released under non-open-source licences, or that cannot be freely (and anonymously) downloaded, will not be evaluated by the committee.
All artifacts will receive a review. The review will consist of a few comments stating whether the evaluation was successful and providing hints for improving the document. During the technical clarification window, the reviewers can anonymously ask the corresponding authors of the artifact to resolve technical issues encountered. The issues must be resolved within a few days; otherwise, the artifact will not be accepted.
The Overview Document (which should be just a few pages) must contain all the exact steps to install, compile and execute the artifact. Notably, it must include comprehensive guidelines for assessing the quality of the execution's outcome and for interpreting the results with respect to the Euro-Par accepted paper.
Your overview document should consist of two parts or sections:
- a Getting Started Guide, and
- Step-by-Step Instructions on how to reproduce the results (with appropriate connections to the relevant sections of your paper).
The Getting Started Guide should contain setup instructions, including any additional software to install (with exact versions) and basic testing of your artifact. Completing this phase should require no more than 30 minutes. Write your Getting Started Guide to be as simple and straightforward as possible, while still exercising the key elements of your artifact. If it is well written, anyone who has successfully completed the Getting Started Guide should have no technical difficulties with the rest of your artifact.
The Step-by-Step Instructions should explain, in full detail, how to reproduce any experiments or other activities that support the conclusions in your paper. Write this part so that it is useful for future researchers who have a deep interest in your work and want to compare with or improve on your results.
In this section, you must indicate the exact platform you used for your tests and, for each input dataset needed to reproduce your experiments, the execution time it took on your system.
If running the artifact to reproduce your experiments takes several hours, clearly state this at the beginning of the Step-by-Step section and point out ways to run it on smaller inputs to reduce the execution time (while still obtaining qualitatively acceptable results). Artifacts requiring only long-running executions to produce meaningful results will not be evaluated.
Where appropriate, include descriptions of each test and link to files (included in the ZIP) that represent expected outputs, e.g., the log files expected to be generated by your tool on the given inputs, or expected results for each input file.
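One lightweight way to make such expected-output files directly checkable is a small comparison script shipped with the artifact. The sketch below is a hypothetical helper, not a required format; the file names in the usage comment are placeholders:

```python
def same_lines(actual_path: str, expected_path: str) -> bool:
    """Compare a generated output file to the expected one shipped in
    the ZIP, ignoring trailing whitespace and trailing blank lines,
    which often differ across platforms."""
    def normalized(path):
        with open(path, encoding="utf-8") as f:
            content = [line.rstrip() for line in f]
        while content and not content[-1]:
            content.pop()
        return content
    return normalized(actual_path) == normalized(expected_path)

# Hypothetical usage:
#   same_lines("out/run1.log", "expected/run1.log")
```

For numeric logs, an exact line comparison is usually too strict; a variant that parses numbers and compares within a tolerance would be more appropriate.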
For performance experiments, it is understood that results will not perfectly match those in the paper, due to differences in the reviewers' hardware. However, the artifact evaluators should be able to reproduce the same qualitative outcomes reported in the paper.
Where possible, please automate data extraction and the production of plots, so that the experiments run using the artifact produce figures matching those in the paper.
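A minimal sketch of such automation, assuming the experiments emit a hypothetical `results.csv` with a `threads,seconds` header (the column names, file names and the figure name are all placeholders):

```python
import csv

def load_results(path: str):
    """Parse a hypothetical results.csv with header 'threads,seconds'
    into (threads, seconds) pairs, sorted by thread count."""
    with open(path, newline="") as f:
        rows = [(int(r["threads"]), float(r["seconds"]))
                for r in csv.DictReader(f)]
    return sorted(rows)

def speedups(rows):
    """Speedup of each run relative to the smallest thread count."""
    baseline_seconds = rows[0][1]
    return [(threads, baseline_seconds / seconds) for threads, seconds in rows]

# Plotting sketch (assumes matplotlib is installed):
#   import matplotlib.pyplot as plt
#   ts, sp = zip(*speedups(load_results("results.csv")))
#   plt.plot(ts, sp, marker="o")
#   plt.xlabel("threads"); plt.ylabel("speedup")
#   plt.savefig("figure_speedup.png")  # name it after the paper figure
```

Naming each generated file after the corresponding figure in the paper makes it straightforward for evaluators to put the reproduced plots side by side with the published ones.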
The criteria used for the evaluation are as follows:
- Artifacts should be consistent with the paper
- Artifacts should be as self-contained as possible
- The documentation provided must give clear guidelines on how to validate and verify the results
- Artifacts should be easy to reuse and should facilitate further research
- Artifacts requiring only long-running executions will not be evaluated
- Artifacts requiring specialised hardware and/or complex network topologies/infrastructures and/or large cluster configurations will not be evaluated.
The ideal target platform for evaluating the artifact should be a small cluster (1-3 nodes) of standard multicore servers equipped with one GPU and interconnected via a standard switched Ethernet network. The reference OS is Linux.
For artifacts needing specific non-commodity hardware, we will ask authors to give the Evaluation Committee remote access to this specialised hardware and to provide access information. If such access is not possible, the artifact will not be evaluated.
- Harald Gjermundrod, University of Nicosia, Cyprus
- Georgia Kapitsaki, University of Cyprus, Cyprus
- Massimo Torquati, University of Pisa, Italy
- Haris Volos, University of Cyprus, Cyprus