From tleen@cse.ogi.edu Thu Sep 2 11:52:03 1999
Date: Wed, 1 Sep 1999 19:20:14 -0700 (PDT)
From: Todd Leen
To: fnielsen@eivind.imm.dtu.dk
Subject: NIPS*99 Reviews enclosed
Dear Finn Nielsen,
Enclosed are the reviews provided by the referees for your NIPS*99
submission NS325
Modeling of BrainMap data
Each numerical review consists of a score (1 - very weak paper,
5 - top notch paper) and a confidence (1 - low confidence,
5 - absolute certainty). The scores and confidences from the 3 or
4 reviewers assigned to your paper were combined in a weighted
average to obtain a numerical ranking. The rankings were used
as a guide by the program committee in their selection of papers.
In addition to the numerical scores, I include the comments from
the reviewers. In several cases, there are additional comments
from the program committee, and they are identified as such.
We hope the comments will help you improve your paper. If your paper
was accepted, the comments can help improve the final version for
the proceedings. If your paper was rejected, we hope the comments help
you improve the paper for subsequent submission.
The reviews, both the numerical ranking and the reviewers' comments,
formed a vital part of the input to the decisions taken by the
program committee. However, they do not, in isolation, explain the
acceptance or rejection of your paper. The program committee took
great care to resolve controversial or borderline papers through
preparation before, and discussion during our recent program committee
meeting.
Thanks for your interest in NIPS!
Todd K. Leen
NIPS*99 Program Committee Chair
........................................................................
Score : 2
Conf : 5
This paper is not clearly written. The objective of the study is never
made in explicitly quantitative terms, nor are there any new scientific
contributions. The techniques described are general density estimation
and modeling techniques. No argument is given for the particular
suitability of mixture of Gaussian density estimation in this case, and
it seems unlikely to be relevant.
A technical problem: the ''EM'' variant described is not an EM algorithm
at all. The ''E'' step, which requires soft assignment, has been
replaced by a hard assignment to the most probable cluster. This is
closer to a k-means algorithm in the Mahalanobis metric, and is not a
good way to estimate a mixture density.
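For concreteness, the distinction the reviewer draws can be sketched as
follows (an illustrative Python sketch, not the authors' code; the function
names and the use of NumPy are my own assumptions):

```python
import numpy as np

def soft_e_step(X, means, covs, weights):
    """Soft E-step of EM for a Gaussian mixture: each point receives a
    responsibility (posterior probability) for every component."""
    n, k = X.shape[0], len(weights)
    resp = np.empty((n, k))
    for j in range(k):
        diff = X - means[j]
        inv = np.linalg.inv(covs[j])
        # Squared Mahalanobis distance of every point to component j.
        mahal = np.einsum('ni,ij,nj->n', diff, inv, diff)
        # Gaussian density up to the shared (2*pi)^(d/2) factor,
        # which cancels in the normalization below.
        resp[:, j] = (weights[j] * np.exp(-0.5 * mahal)
                      / np.sqrt(np.linalg.det(covs[j])))
    resp /= resp.sum(axis=1, keepdims=True)  # normalize to posteriors
    return resp

def hard_e_step(X, means, covs, weights):
    """The variant the review describes: each point is assigned wholly
    to its most probable component (a one-hot, k-means-like step)."""
    resp = soft_e_step(X, means, covs, weights)
    hard = np.zeros_like(resp)
    hard[np.arange(len(X)), resp.argmax(axis=1)] = 1.0
    return hard
```

With the hard assignment, points near a cluster boundary contribute to only
one component, which biases the density estimate; the soft step keeps the
fractional contributions that make the procedure a true EM algorithm.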
........................................................................
Score : 3.2
Conf : 4
The paper describes the application of Gaussian mixture learning to an
important (and under-studied) data analysis problem. As such, it is
important work, and I think the authors would benefit from attending
NIPS.
But the study is not as thorough as one would like (although one
cannot expect a really full-blown evaluation in a conference paper),
and it is difficult to evaluate the results (they should be compared
with a simpler algorithm, perhaps one that has been used in the
literature).
........................................................................
Score : 2
Conf : 4
The authors build a mixture model to describe the distribution of fMRI and
PET ''hot spots'' in the human brain for different behavioural paradigms.
They provide a modified version of the learning algorithm, and they show
some figures of the 3D distributions for the different modalities and
behavioural paradigms across the human brain. My general point of critique
is that it's not clear what we learn from it. The surface plots show large
volumes across a considerable part of the brain but do not provide specific
information not known before, and the table shows that ''generalization in
terms of label predictions is not high'' (as the authors state).
Specific comments:
1. Eq. (3) diverges. If a normalization factor were included and the sums
replaced by integrals, one would obtain the cross-entropy as a measure of
similarity between the true and the model distributions.
2. Algorithm, ''Repeat until convergence'': Step 1: Why don't the authors
use probabilistic assignments to weigh each datapoint's contribution?
Step 2: Why are both sets equal? The covariance matrix has more free
parameters than the mean vector, so more datapoints are needed for its
estimation.
3. Table: The authors need to explain better how they calculated the
numbers. Also, what is the ''baseline probability''? Are the differences
significant?
4. Page 5, second par: What is the rationale to make a mixture model for
''modality''? If any significant differences in distribution are found,
wouldn't they just reflect the preferences of certain experimental groups?
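The quantity referred to in comment 1 is presumably the standard
cross-entropy; in the notation assumed here (Eq. (3) itself is not
reproduced in this review), with p(x) the true density and q(x) the
mixture model:

H(p, q) = -\int p(x)\,\log q(x)\,dx

Minimizing H(p, q) over the model parameters is equivalent to minimizing
the Kullback-Leibler divergence KL(p || q), since the entropy of p does
not depend on the model.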