The reviewers enjoyed many aspects of the paper, and appreciated the writing and the provision of data. However, they have identified serious flaws in the experimental design and further flaws in the validity of the findings. One reviewer felt the submission to be out of scope. The reviewers were not clear on the novel contributions of your approach; if this is a case report on existing methods rather than a research article, then it is out of scope.

If you resubmit, you should address the comments of all reviewers. In particular:

- You should cite appropriate literature in the field and clearly state which novel contributions your approach makes, in terms of theory and performance. It should be clear whether your method is entirely automatic.
- Case studies are not in scope for PeerJ. If you resubmit, the resubmission must clearly be a research article.
- For the evaluation of the results, you should use all appropriate statistical tests and present them appropriately. When comparing with existing methods, you should be clear that you are comparing like with like; for example, if your method involves manual intervention, then include a comparison with another method that also involves intervention.
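As an illustration of the kind of paired analysis meant, a minimal Python sketch follows. The ratings, the scipy dependency, and the choice of the Wilcoxon signed-rank test are assumptions made for the example only, not a prescription for your data.

    # Illustrative only: hypothetical paired expert ratings of the manual and
    # automated schedules; the values and the choice of test are assumptions.
    from scipy.stats import wilcoxon

    manual_ratings = [3, 4, 2, 5, 4, 3, 4, 2, 3, 4]
    automated_ratings = [4, 4, 3, 5, 3, 4, 4, 3, 3, 5]

    # Paired, non-parametric comparison of the two conditions.
    stat, p_value = wilcoxon(manual_ratings, automated_ratings)
    print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")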
There are a lot of uncontrollable externalities when experimenting on real-world conferences, but the authors have done a good job of designing experiments within these constraints. The experimental design for automated scheduling, using 2013 conference data to inform the experiments on the core problem of the 2014 conference, is principled and well thought through. While the expert analysis (2.2.2) comparing the automated and manual schedules is well designed, small-number statistics dictate that no strong conclusions can be drawn from 29 data points. The work is still interesting and thought-provoking, but it should be presented as a preliminary study that suggests further work rather than as something from which firm conclusions can be drawn. It is still important to publish and share such work, but its presentation needs to state clearly that this is an exploration rather than a full, rigorous experiment.

I commend the authors for observing best practice in open science and publishing their software and data on Zenodo. A few comments below:

- README.md does not state the Python version or other dependencies, and does not include instructions for running the programs or for reproducing the results in the paper. An orientation to the file tree in the README would be useful given that there are so many folders. Minor point: the commentary refers to 2014 in the future tense.
- Readability of the Python code would benefit from pylint/PEP 8 formatting (which can be done automatically in many editors, so there is no good excuse for not following best practice).
- A comment at the head of each file would make the code easier to understand.
- Invoking the code via import statements (rather than just a call to main()) is a little confusing and can upset code assist/linting in some editors; even if the code is invoked via import rather than as a main program, the imports will only happen once no matter how many times they are repeated. A sketch of the usual guard pattern follows this list.
- Minor tidying-up suggestion: remove the Mac ".DS_Store" files scattered throughout the tree.
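To illustrate the points about file-head comments and invocation via import, a minimal sketch is below. The module and function names (scheduler.py, main) are hypothetical and not taken from the authors' code.

    # scheduler.py (hypothetical module name, not from the authors' repository)
    """Build a conference schedule from session data.

    A short header comment like this at the top of each file, plus a guarded
    entry point, lets the module be imported for reuse without side effects
    while still being runnable as a script.
    """


    def main():
        # Hypothetical entry point; the real program's logic would go here.
        print("building schedule...")


    if __name__ == "__main__":
        # Runs only when executed as a script (python scheduler.py),
        # not when the module is imported from elsewhere.
        main()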