The 1st International Fuzzing Workshop (FUZZING) 2022 welcomes all researchers, scientists, engineers and practitioners to present their latest research findings, empirical analyses, techniques, and applications in the area of fuzzing and software testing for automated bug finding.
The workshop will be organized as Phase 1 in a 2-phase preregistration-based publication process. All research papers will be peer-reviewed on the basis of a full-length preregistered report, and acceptance will be based on (i) the significance and novelty of the hypotheses or techniques, and (ii) the soundness and reproducibility of the methodology specified to validate the claims or hypotheses -- but explicitly not based on the strength of the (preliminary) results. More details about the benefits of this process can be found in this blog post co-authored by the workshop organizers: http://fuzzbench.com/blog/2021/04/22/special-issue/
Update (8th June): We are excited to announce that ACM TOSEM has agreed to host the registered reports accepted in Phase 1 of FUZZING'22 in their new Registered Reports track! Please submit the revision of your accepted FUZZING'22 paper via https://mc.manuscriptcentral.com/tosem by 15 June 2022 (choosing "Registered Report" as the paper type).
Along with the submission of the accepted FUZZING'22 paper to TOSEM, we also facilitate an Artifact Evaluation (AE). Please check out our AE page for more information and the submission instructions: fuzzing-workshop.github.io/artifacts.html
Dissecting American Fuzzy Lop - A FuzzBench Evaluation
Andrea Fioraldi, Alessandro Mantovani (EURECOM), Dominik Maier (TU Berlin), Davide Balzarotti (EURECOM)

NSFuzz: Towards Efficient and State-Aware Network Service Fuzzing
Shisong Qin (Tsinghua University), Fan Hu (State Key Laboratory of Mathematical Engineering and Advanced Computing), Bodong Zhao, Tingting Yin, Chao Zhang (Tsinghua University)

Fuzzing Configurations of Program Options
Zenong Zhang (University of Texas at Dallas), George Klees (University of Maryland), Eric Wang (Poolesville High School), Michael Hicks (University of Maryland), Shiyi Wei (University of Texas at Dallas)

Generating Test Suites for GPU Instruction Sets through Mutation and Equivalence Checking
Shoham Shitrit, Sreepathi Pai (University of Rochester)

First, Fuzz the Mutants
Alex Groce, Goutamkumar Kalburgi (Northern Arizona University), Claire Le Goues, Kush Jain (Carnegie Mellon University), Rahul Gopinath (Saarland University)

Fine-Grained Coverage-Based Fuzzing
Bernard Nongpoh, Marwan Nour, Michaël Marcozzi, Sébastien Bardin (Université Paris Saclay)

datAFLow: Towards a Data-Flow-Guided Fuzzer
Adrian Herrera (Australian National University), Mathias Payer (EPFL), Antony Hosking (Australian National University)
Submissions are solicited in, but not limited to, the following areas:
The workshop solicits registered report drafts. A registered report is a full paper sans the evaluation or experiments. Each draft will be reviewed by at least three members of the program committee according to the review criteria mentioned above, with the key objective of providing constructive feedback. Accepted drafts are made available to all participants. These drafts will be presented and discussed in detail at the workshop so that the authors receive further constructive feedback. After incorporating this feedback, the authors can submit final versions of their registered reports for review. Notably, accepted registered reports will be invited as full articles in a Special Issue of one of the premier software engineering journals. This invitation is equivalent to an in-principle acceptance in the general preregistration process. For the final journal article, the authors are expected to conduct the experiments, evaluation, or study as specified in their registered report.
We are currently discussing the publication of the final papers with top journals. The deadlines for the submission of registered reports and full journal papers might therefore shift, depending on the journal's constraints.
Submitted report drafts are expected to be full technical papers sans the full evaluation/experiment results. To assess the feasibility of the proposed experiments, however, we expect some preliminary results from small-scale experiments.
Submitted report drafts should be no more than 8 pages, excluding references. There is no page limit for references. Papers must be formatted according to the NDSS requirements. Templates are available at https://www.ndss-symposium.org/ndss2022/call-for-papers.
Depending on the authors' preference, accepted registered report drafts will be published in the NDSS workshop proceedings through the Internet Society. The proceedings will be submitted for publication in IEEE Xplore.
Update: FUZZING'22 will employ a double-anonymous policy for all submissions. Please ensure that the authors remain anonymous in your submission. For more details on this policy, please email us or consult this excellent FAQ. (Due to the short notice, we will not desk-reject a submission in violation of this policy. Instead, we will ask the authors to update the submission with an anonymized revision.)
Report drafts can be submitted at: https://easychair.org/conferences/?conf=fuzzing22
Q: Accepting papers without considering the outcome of the experiments sounds like "lowering the bar" for publication. Won't this lead to low-quality papers?

A: No. In fact, we strongly believe that the process will lead to higher-quality papers with a stronger focus on the significance and novelty of the proposed approach and on the soundness and reproducibility of the evaluation. Specifically, moving the in-principle acceptance to a time *before* the evaluation is conducted (i) improves the soundness of the evaluation (based on early reviewer feedback) and (ii) ensures that the evaluation is free of bias (e.g., HARKing).
Q: Does this mean that you can publish negative results too? What is the point of that?

A: Yes. Compared to the existing publication model, our preregistration-based model also allows the publication of negative results, i.e., results showing that a proposed approach does not in fact work or that a reasonable hypothesis does not hold. Given that the reviewers deemed the investigation worthwhile and the evaluation methodology sound, we are convinced that publishing even negative results will not only avoid redundant efforts across the community but also enrich our understanding of the problem under investigation. As with positive results, we ask that authors thoroughly analyze the underlying reasons for negative results and provide an interpretation.
Q: Can I modify the idea or the experimental protocol after in-principle acceptance in Stage 1? If yes, how much can be changed?

A: Yes. The registered report serves as an agreement on the minimal experiment, and any deviation must be clearly justified. You are allowed to change the paper and the experimental protocol within reason. For instance, if the results point to an optimization opportunity for a proposed technique, you are welcome to implement the optimization and evaluate its benefit. If the results call for a deeper investigation of certain aspects of the proposal, you are welcome to add further research questions. Any such deviation from the experimental protocol must be explained in a Summary of Changes, which will be subject to review in Stage 2. For larger deviations from the evaluation protocol, or when in doubt, you are welcome to request permission to follow an alternative evaluation protocol.
Q: What if my idea gets "scooped" after my registered report is published?

A: The registered report is published and "active", i.e., the community knows that you are working on the project laid out in the registered report. The "scoop" would be pretty obvious. Even if related work is published after the in-principle acceptance is granted in Stage 1, this will not influence the final acceptance decision for your paper in Stage 2. The accepted registered report is a joint commitment by the authors and the reviewers.
Q: What if I cannot finish the experiments and finalize the paper before the final submission deadline?

A: There is indeed a deadline for submitting the final version of your paper. If you need an extension, please provide an explanation and a suggestion for a new deadline. Extensions are normally granted but require a reasonable justification. If a deadline expires without submission, the registered report becomes "inactive" and the in-principle acceptance is withdrawn. The paper is then no longer considered "under review" and can be submitted elsewhere.
Q: What if my final paper gets rejected? Can I submit to other venues? What if the other venue rejects the paper, claiming that the idea has already been published in the registered report?

A: Unless inactive, withdrawn, or rejected/accepted in Stage 2, an accepted registered report is considered a full paper currently "under review" and cannot be submitted elsewhere. Once the report is withdrawn, rejected, or inactive, you can submit the final paper elsewhere. You can think of the registered report as a workshop paper without evidence or with only preliminary results. Conferences and journals often publish extensions of workshop papers, so there should be no difficulty submitting the full paper despite the publication of the registered report.
Q: Stage 1 is double-anonymous: reviewers do not know the authors and vice versa. Yet Stage 2 is single-anonymous: reviewers know the authors of each submission. How does this influence the final decision?

A: Stage 2 merely serves as a confirmation that the agreed experimental protocol has been followed and that any deviations are explained. Since this judgment is objective, we believe that the risk of reviewer bias in Stage 2 is sufficiently low.
| Design by Mike Pierce | © Conference Organizers |