R1 says CLOnE isn't really a controlled language.  We disagree: CLOnE
is a sublanguage of English with restricted syntax and a partly
restricted lexicon.  It is designed to be easy to learn, so it has a
partly open lexicon and a looser syntax than many other controlled
languages.

R1 says this isn't "information extraction", that we are dealing only
with a very specific aspect of knowledge management (ontology
engineering), and that our title is misleading.  We accept that it
isn't IE in the usual sense (from free text) and we have changed the
title to "CLOnE: Controlled Language for Ontology Editing".

R1 says the evaluation needs more explanation.  R1 and R3 say the
meaning of Table 2 (confidence intervals) is unclear.  R1 says Table 3
should highlight the interesting figures (as discussed in the
text), and R2 and R3 ask for an explanation of correlation
coefficients.  So we have added footnotes (with references) to explain
confidence intervals and correlation coefficients, as well as another
column to Table 3.

We are constrained by the page limit to present only pithy summaries
of the implementation and our evaluation measures, but we have added a
reference to the project deliverable, which has more details.

R1 complains that SUS scores are not comparable to the baseline, but
has misunderstood (confusing the pre-test scores with the reference
baseline for SUS scores in general).  R2 figured this out but says it
is confusing.  We have rewritten the text accompanying Table 1 to make
this clearer.

R1 notes that our statistics suggest using the CL interface is not
faster than using Protégé; we make no claims about speed!

R1 says we should mention Orakel and provides a URL.  We're aware of
it, but the original paper provides little data about the type of
controlled language used, if any.  Orakel automatically generates its
general-purpose lexicon using LTAG and the LoPar parser, but no
evaluation was provided.  We could mention it just for completeness
but there is no room in the page limit!

R1 says the reference list is sloppy.  I've cleaned it up as
suggested.

R2 says writing is awkward at times.  I've edited sections 1, 2 & 3 to
make them less stylistically inconsistent (that was an artefact of
splicing various people's writing together) and shorter (to make room
for other details the reviewers requested).

R2 says we should show more about how the system works, and include a
screenshot --- no room for a screenshot, but I've added a little more
about how the system works.

R2 says the task times are not directly reported --- no room in 14
pages, but we are adding a reference to the deliverable.

R2 says we should explain why users liked our system in the
evaluation.

R2 says the last section should spend less space on projects and more
on the general ways CLOnE can be applied.  I've tried to tighten this
section up to make applications clearer in the context.

R3 says the experimental task is artificial and the tasks are too much
like CL data entry.  But we had to write the tasks simply so that
users could carry them out in both tools in a reasonable amount of
time without losing interest.  We also provided a mini-manual that
covered just enough Protégé to do the tasks without confusing
additional information.

R3 says the scope of the CL is not clear in the paper, but he then
states exactly what its scope is (taxonomic and property information)
and understands that it is narrower than ACE's scope.


13 August.  Discussed with Kalina.  Edited and shortened "Related
work".  Changed all CLIE to CLOnE (less confusing).  Fiddled with
tables and reduced section headings to get more space, then added the
list of syntactic rules.