Dataset Viewer
Auto-converted to Parquet

Schema:
paper_id: string (length 10 to 56)
reviewer_id: string (length 10 to 14)
Question: string (length 61 to 2.58k)
isHighQuality: bool (2 classes)
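For working with these rows programmatically, here is a minimal sketch using the Hugging Face `datasets` library. The repository id `user/review-questions` is a placeholder, since the actual repo name is not shown in this preview; the column names match the schema above, and the `train` split is assumed.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the dataset's actual repository name.
ds = load_dataset("user/review-questions", split="train")

print(ds.column_names)
# ['paper_id', 'reviewer_id', 'Question', 'isHighQuality']

# Keep only the review questions labeled as high quality.
high_quality = ds.filter(lambda row: row["isHighQuality"])
print(f"{len(high_quality)} of {len(ds)} questions are labeled high quality")
```

Each row below pairs an anonymized paper id and reviewer id with one review question and its quality label.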
2O2FOO8pl4
XNAD64x9rP
The authors redefine privacy and introduce a privacy-utility trade-off. In the related work, the authors also mentioned differential privacy, which has a similar trade-off. Could the authors elaborate on the difference between them?
true
2O2FOO8pl4
XNAD64x9rP
How does adversarial training impact privacy in your experimental findings? Could the authors explicate the insights/findings?
true
2O2FOO8pl4
XNAD64x9rP
The authors give a comprehensive analysis of the newly defined privacy and its leakage. The idea is interesting since it introduces a new perspective on privacy. Actually, I feel a little confused about why Definition 1 and Definition 2 are required. What is the insight/intuition behind the privacy guarantee? What does the newly defined privacy essentially protect? How do you measure the privacy loss in practice/experiments? Why is privacy leakage defined as mutual information?
true
cG8Q4FE0Hi
EyMZdNlpiZ
The experiments are only conducted on randomly sampled subsets of the test sets, which may raise concerns about the convincingness of the results.
true
cG8Q4FE0Hi
EyMZdNlpiZ
In addition, did the paper's testing of the Self-consistency algorithm use 30 paths? In Self-consistency, the typical numbers of paths used are (1, 5, 10, 20, 40). Why were 30 paths chosen? If the reason is to enable comparisons based on average tokens, it would be appropriate to report the performance and average tokens under different numbers of paths.
true
cG8Q4FE0Hi
EyMZdNlpiZ
Can the three different examples in the Introduction be unified?
true
CgpiO0DRrk
2mQAM45xsU
Each EBS serves a distinct region (no overlap, as per the paper), and optimizing each EBS individually should suffice in my opinion. How will sharing among servers help here?
true
CgpiO0DRrk
2mQAM45xsU
How are the three defined temporal metrics related to the video file arrival rate? Why not adopt the arrival rate instead, since it is widely used and correlates with/encapsulates at least two of these three metrics?
true
yONJt6nFc3
sbbIyzSBbp
The improvements reported in the paper are astonishing. I wonder if the authors conducted any significance tests on the improvements and the corresponding confidence levels, especially for the isolated and low-degree cases.
true
yONJt6nFc3
sbbIyzSBbp
Following Q1, it is surprising that all categories of nodes benefit so much after duplicating (or adding self-loops for) cold nodes.
true
yONJt6nFc3
sbbIyzSBbp
Compared to Hits@10, Hits@1 could be more critical in real-world applications, especially for tail nodes with very few neighbors. I wonder if the authors could also provide the Hits@1 performance.
true
yONJt6nFc3
sbbIyzSBbp
Following W2, the authors should consider conducting a set of experiments using all remaining nodes as candidates for link prediction, thereby alleviating the bias toward easy negatives and achieving fairer comparisons.
false
yONJt6nFc3
sbbIyzSBbp
Following W3, I wonder why so many reported metrics contradict existing studies.
false
yONJt6nFc3
sbbIyzSBbp
For instance, Cold-brew even underperforms the original GSage across all metrics on Cora/Citeseer and most metrics on other datasets in Table 1. As the authors do not use the identical setup and data partitions as previous studies, it would be better to have some elaboration on this part.
true
yONJt6nFc3
sbbIyzSBbp
Following W4, in Appendix C, the authors gave some reasons for not conducting experiments on some large-scale graphs. However, given the improvements on all node categories, I believe it is still worth verifying the performance on large-scale datasets.
false
yONJt6nFc3
sbbIyzSBbp
Besides, `IGB` is not really *large-scale*, while some datasets like `ogbn-products` and `ogbn-papers100M` have millions or hundreds of millions of nodes.
true
ApjY32f3Xr
C4sqXJNESI
How did you ensure that the PINN methods you evaluated were able to handle the diverse range of PDEs in your dataset, and what challenges did you encounter in this process?
true
ApjY32f3Xr
C4sqXJNESI
Can you describe the process of training the neural networks for each PDE, and how you optimized the hyperparameters for each method?
true
ApjY32f3Xr
C4sqXJNESI
How did you handle issues such as boundary conditions and initial conditions in your experiments, and what strategies did you use to ensure that these conditions were satisfied?
true
ApjY32f3Xr
C4sqXJNESI
Can you discuss the limitations of your benchmarking tool, and how future research could address these limitations to further advance the field of PINNs?
true
ApjY32f3Xr
C4sqXJNESI
First, the authors only discuss PINN methods and did not consider other common methods. It would be good to see how PINN methods compare against these.
false
ApjY32f3Xr
C4sqXJNESI
Second, they didn't give much detail on the computational requirements of PINN methods. They did report whether the methods run fast or slow, but it would be helpful to know what hardware or computational resources are needed. People who want to use these methods would find that information useful.
false
ApjY32f3Xr
C4sqXJNESI
Last, the authors worked with a set of 20 PDE problems, but they might have missed other important problems. In future studies, it would be good to add more problems to this list; this way, we can learn even more.
false
KknWbD5j95
sBfUeeE45V
In Sec. 3, for the ablation study on the number of decoding steps, did you also perform such an experiment to measure the effect of the number of decoding steps on WER/CER performance?
false
KknWbD5j95
sBfUeeE45V
In Sec. 3.3, iterative parallel decoding: have you tried replacing the unmasked tokens from the previous inference stage with estimates from the current inference stage that have a higher confidence score? The question here applies to both the current RVQ layer and previous RVQ layers.
false
KknWbD5j95
sBfUeeE45V
The original MaskGIT paper describes the limitations and failure cases of the MaskGIT method, such as semantic and color shifts, ignoring or modifying objects on the boundary when applied to outpainting and inpainting, and oversmoothing or creating undesired artifacts on complex structures. Are these limitations also applicable to the speech generation task? Could you comment on the technical limitations of the SoundStorm method?
true
KknWbD5j95
sBfUeeE45V
In the 1st paragraph of Sec. 3.3, the definition of or criteria for the confidence score should be explained explicitly.
false
KknWbD5j95
sBfUeeE45V
In the 2nd paragraph of Sec. 3.3, when you mention “the conditional independence assumption in finer levels”, you should add that this assumption is made along the time dimension.
false
KknWbD5j95
sBfUeeE45V
In Sec. 4, the experiments section lacks training configuration details, such as batch size, learning rate scheduler, optimization method, etc. This applies to both utterance-based generation and conversational speech generation.
false
mrBd4hyWlP
poXkXPSITy
Could the authors elaborate on their decision to employ SSIM as the loss function rather than MSE loss? An explanation of this choice would provide clarity on the advantages or the intended outcomes of using SSIM over the traditional MSE, particularly in the context of the specific reconstruction challenges this work addresses.
true
mrBd4hyWlP
poXkXPSITy
The reliance on evaluation metrics such as SSIM, NMSE, and PSNR is somewhat restrictive and may not fully capture the diagnostic quality of radiographic imaging. Inclusion of expert assessments from radiologists could significantly enhance the validation of the reconstructed MRI's diagnostic utility, which is paramount in medical applications.
false
Oy1NtlFDmD
6HyMd3xmFT
In Table 4, without applying instance normalization, the accuracy of GCN shows a large gap between the max and min values; does this mean convergence is not yet complete?
true
Oy1NtlFDmD
6HyMd3xmFT
According to Table 2, on the ogbn-Arxiv dataset, DropEdge seems slower than the vanilla algorithm, which suggests that different datasets achieve different speedups because of their distributions. Do you have any ideas about improving the sampling algorithm for specific datasets based on their features?
true
LYS3RhIYCq
5RO0ugP7aV
For Battle zone and Q*bert in Figure 1(a), for a fixed FLOP budget, how does the data size change when increasing the model size?
true
LYS3RhIYCq
5RO0ugP7aV
A more interesting direction is to investigate the scaling law of another class of IL methods named adversarial imitation learning [1, 2], which applies a totally different objective from BC.
false
U0P622bfUN
uP676dsarr
The quality of synthetic data could differ greatly depending on the domain discrepancy between the local training data and the pretraining data of the foundation model. Instead of using standard image classification datasets, does the proposed method work for federated learning on fine-grained classification such as CUB-200, Cars, and medical image datasets?
false
U0P622bfUN
uP676dsarr
An ablation study of varying the foundation models is needed.
false
U0P622bfUN
uP676dsarr
Clients in federated learning are often assumed to have limited memory or computation capacity. Generating prompts using a large visual captioning model on each client is impractical.
true
EGjvMcKrrl
RnX7s6Ub1p
In Theorem 1, the authors claim that the SSM generalization is characterized by the temporal dependencies of the sequential data. More details on how the dependency of the sequential data affects the generalization error should be included.
true
EGjvMcKrrl
RnX7s6Ub1p
Moreover, in order to achieve a small generalization error, the mean and variance of the GP should remain small. However, these two key parameters rely on the GP assumption and are independent of the data. This seems inconsistent with the data-dependent generalization error bounds claimed in the paper.
true
EGjvMcKrrl
RnX7s6Ub1p
Regarding enhancing the robustness of SSMs to different temporal dependencies, the authors take $1/\sqrt{\tau(\theta)}$ as a rescaling factor for initialization. Are there any theoretical guarantees (e.g., variance analysis) on the robustness compared with the HiPPO framework?
true
EGjvMcKrrl
RnX7s6Ub1p
The main techniques adopted in the proof are the sub-exponential property of random variables and the Borell-TIS inequality; how do they yield temporal-dependency generalization bounds, since both of them are temporally independent?
true
EGjvMcKrrl
RnX7s6Ub1p
The current theoretical results could be richer, e.g., by supplementing the generalization analysis of the regularized model (9), which may help answer the question raised below.
false
0i6Z9N5MLY
UqIVc5ZmJW
PAGE was originally designed for nonconvex minimization problems and SVRG/SAGA is a common choice for convex problems. Although the problem to be solved is monotone, Algorithm 1 chooses PAGE as the base algorithm. Could the authors explain why?
true
0i6Z9N5MLY
UqIVc5ZmJW
I don't see any dependence and requirement on $L_F$ for both Algorithms 1 and 2. Is the assumption that $F$ is $L_F$-Lipschitz used anywhere in the analysis?
true
0i6Z9N5MLY
UqIVc5ZmJW
Why is it required other than allowing easier comparisons with existing results?
true
0i6Z9N5MLY
UqIVc5ZmJW
I also think the relationship among $L_F$, $L$, and $L_Q$ should be clearly discussed in the paper instead of just referring to existing works.
false
0i6Z9N5MLY
UqIVc5ZmJW
I think a discussion about how and where the additional logarithmic factors in the convergence results of both algorithms come from would be great.
false
0i6Z9N5MLY
UqIVc5ZmJW
I assume they come from different sources and thus require different techniques and efforts to get rid of (if possible).
true
0i6Z9N5MLY
UqIVc5ZmJW
I still have questions on how to check the stopping criteria for the inner problems in Algorithm 2. Is $E\Vert e_k\Vert^2$ something computable, since it requires the exact solution $J_{\eta(F+G)}(u_k)$?
false
0i6Z9N5MLY
UqIVc5ZmJW
What are the examples of operators that are non-monotone but co-monotone other than $F(x)=-x$?
true
0i6Z9N5MLY
UqIVc5ZmJW
In Figure 1(b), EAG and Algorithm 2 tend to have lots of oscillations but Algorithm 1 does not. Some explanations and discussions on this might be good.
false
30N3bNAiw3
GEMagnvX1j
I would be grateful if the authors could clarify the points I made above regarding the design decisions made for the method and the details of its formulation.
false
30N3bNAiw3
GEMagnvX1j
As far as I understand, there is an inherent limitation for the method in that knowing the labels for the target dataset is required during training. This limits the applicability of SepCLR in the unsupervised setting, which is also the one most commonly examined by contrastive learning works.
true
30N3bNAiw3
GEMagnvX1j
I believe that there are some issues with the proposed method that I would be grateful if the authors could elaborate on. The authors make some decisions when designing the loss that go against what is commonly done in related contrastive learning papers. In particular, the loss they propose has the formulation of $L_{unif}$ as found in Wang & Isola [A], but the most commonly used formulation is that of InfoNCE, which differs in that the resulting loss is a sum of Log-Sum-Exp functions instead of a single Log-Sum-Exp. Similarly, in the alignment term they use a formulation closer to $L_{out}$ from Supervised Contrastive Learning [B], but the same paper notes that another formulation that simply sums the inner products, named $L_{in}$, is better experimentally (the authors examine this in the appendix, but do not explain why they chose $L_{out}$). I would be grateful if the authors could elaborate on these design decisions.
false
30N3bNAiw3
GEMagnvX1j
Related to the above, it seems that the alignment terms in the common space and in the salient space are different (and similar to $L_{out}$ and $L_{in}$ respectively). I would be glad if the authors could explain why this is the case.
true
30N3bNAiw3
GEMagnvX1j
In Equation (7), the first term in the sums essentially forces the representations of the salient encoder to be far from the constant vector $s’$. It’s not immediately clear to me why this term is there - it doesn’t seem to arise from optimizing $\hat{H}(S)$, and the informationless hypothesis only comes into play in Equation (8). I think the authors need to explain this part a bit more.
true
30N3bNAiw3
GEMagnvX1j
Finally, the zero mutual information constraint is somewhat misleading - I understand the point the authors make that minimizing $I(c;s)$ is not the best thing to do, but at the same time, the proposed method does not directly force $I(c; s) = 0$. There is no guarantee that maximizing $H(c,s)$ does not affect the maximization of $H(c) + H(s)$, nor that the final solution will have $H(c,s) = H(c) + H(s)$. I believe that the authors should be clearer about this point.
true
30N3bNAiw3
GEMagnvX1j
Tables 1 and 2 contain several variants of SepCLR, but it is not clear what each of them signifies. The authors should better explain the variants of SepCLR in these tables.
false
30N3bNAiw3
GEMagnvX1j
Finally, I believe that it would be good to include the baseline of simply training the model using the entirety of the dataset via e.g. SimCLR. While I’m fairly sure that this will not perform as well, it’s still something good to include to get a sense of why the two different encoders are necessary.
false
gTWaUlxxWi
ICkPYmpIoh
Overall my major question is: what is the novelty of the proposed method compared with other ensemble methods using a network aggregator?
false
gTWaUlxxWi
ICkPYmpIoh
My major concern is that the novelty of the proposed method is very limited. A shallow network as an aggregator has been widely used in other ensemble frameworks, and more advanced methods such as attention-based aggregators have also been proposed. Thus, simply applying such a method to the FL setting is not enough of a contribution (I also feel this has been explored by previous papers).
false
gTWaUlxxWi
ICkPYmpIoh
It is also unclear what is the architecture of the proposed shallow network: does it only use the logits of all local models as input?
true
gTWaUlxxWi
ICkPYmpIoh
Why doesn't it utilize the information/embedding of input image?
true
gTWaUlxxWi
ICkPYmpIoh
Again I think more research into the architecture of the aggregator is needed.
false
gTWaUlxxWi
ICkPYmpIoh
The model used in the experiments is very limited (ResNet-8). I feel larger models need to be included and tested on larger datasets with more classes to really show the method's performance in heterogeneous settings.
false
WGLu9Mv8mn
hMXyM5JOIk
The motivation is not clear to me. While the method has been tested for action tasks, notably skeletal action recognition, it doesn't seem tailored for this particular domain. Why was the method applied to action recognition and not to a standard few-shot class-incremental learning task?
true
WGLu9Mv8mn
hMXyM5JOIk
The compared methods should be the few-shot class-incremental learning works, instead of the prompt tuning works. Drawing comparisons with prompt tuning methods, which aren't tailored for few-shot scenarios, might not provide a balanced perspective.
true
WGLu9Mv8mn
hMXyM5JOIk
What is the meaning of *, **, and ^ in Table 3? How can the method achieve around 3% performance for both old and new classes?
true
WGLu9Mv8mn
hMXyM5JOIk
To the best of my knowledge, few-shot class-incremental learning and continual few-shot learning are indeed distinct tasks. It might be worthwhile to revisit and clarify this in the study.
true
t4pWejxMgr
5zlAU9cUmy
- Why is neuroevolution a reasonable approach for this problem?
true
t4pWejxMgr
5zlAU9cUmy
And how is it related to dealing with low-data regime in particular?
true
t4pWejxMgr
5zlAU9cUmy
The method seems like a random mix of ideas in ML without much rationale.
false
t4pWejxMgr
5zlAU9cUmy
While the results are promising, more extensive evaluation on standard settings (e.g., ImageNet transfer) would be valuable.
false
t4pWejxMgr
5zlAU9cUmy
The authors should address the computational costs associated with QDTL, including training time and resource requirements, and compare to that of baselines.
false
fV54cBCGEV
v2Vy3sd8EI
- It seems that the experiments in Figure 1 use iid data. Under this assumption, it is no surprise to me that local training (p<1) is beneficial and that a smaller alpha leads to a smaller objective gap, as $x_i^*$ is close to the global optimum $x^*$. I would need to see the case of $\alpha=0$ to see what the benefit of communicating between clients is in the first place.
true
fV54cBCGEV
v2Vy3sd8EI
Instead, I would like to see experiments with different levels of non-iid-ness; and how the choice of alpha influences convergence to the optimum.
false
fV54cBCGEV
v2Vy3sd8EI
For the logreg experiments, I don't see the differences between Scafflix and standard FedAvg applied to the FLIX objective.
true
fV54cBCGEV
v2Vy3sd8EI
The claim "Scafflix is much faster than GD, thanks to its local training mechanism" seems to be equivalent to saying "p<1 for FedAvg is faster...", which again is no surprise given the iid-ness of the problem. I seem to be missing the point here.
true
fV54cBCGEV
v2Vy3sd8EI
The client-specific learning rates, which are supposedly the key theoretical advantage of Scafflix, are not specified, and I assume they are identical across clients (which would make sense given the iid assumption with equal amounts of data).
true
fV54cBCGEV
v2Vy3sd8EI
In case I did miss a key point of the Scafflix component for the logreg experiments, and there is indeed a difference, could you please include results of FedAvg applied to the FLIX objective as a baseline, i.e. including one acceleration component as a baseline?
false
fV54cBCGEV
v2Vy3sd8EI
I suggest concentrating on a single $\alpha$ setting as the trend across all $\alpha$ is identical.
false
fV54cBCGEV
v2Vy3sd8EI
Similarly, the sentences "In accordance with the methodology outlined in FedJax (Ro et al., 2021), we distribute these samples randomly across 3,400 devices." and " The Shakespeare dataset, used for next character prediction tasks, contains a total of 16,068 samples, which we distribute randomly across 1,129 devices." seem to suggest that you are "randomly distributing" data, leading to iid-splits across clients. Scanning the code (I am not familiar with Jax, or FedJax in particular), it seems that the dataset is the "original", meaning non-iid by writer-id (Femnist) or character (Shakespeare). Please clarify the 'random distributing'.
true
fV54cBCGEV
v2Vy3sd8EI
For the "Baselines" section of 4.2, I am not sure I understand the selection of the learning rates. Do you independently optimize for the local learning rate per-client to achieve highest validation score, or do you fix the learning rate across clients to be the same for finding the $x_i^*$?
true
fV54cBCGEV
v2Vy3sd8EI
More critically, how do you select the $\gamma_i$ for the actual Scafflix algorithm? Since you fix $\alpha_i$ to be identical across clients, I expect the $\gamma_i$ to be different across clients - otherwise it would appear that you are running FedAvg with the FLIX objective. What am I missing here?
true
fV54cBCGEV
v2Vy3sd8EI
Anecdotally and from my own experience, these models overfit easily due to the small amount of training-data per client.
true
fV54cBCGEV
v2Vy3sd8EI
In Figure 1, I interpret the objective gap computed with $f(x^k)$ as using the server-side model evaluated on the entire global training-dataset; could you confirm that?
true
fV54cBCGEV
v2Vy3sd8EI
For the NN experiments in Figures 2,3 and the appendix, do you equally evaluate the server-side model, here on the concatenation of all clients' test-datasets?
true
fV54cBCGEV
v2Vy3sd8EI
While this is certainly interesting and relevant as an indication for test-client performance, I believe a paper about model personalization in FL should compare to the average client-specific models' performances. Specifically, what is the quality of $x_i^*$ (i.e. no communication, no across-clients knowledge sharing) as well as the performance of $\tilde{x}_i^*$ (i.e. the performance of the client-specific personalized model following the FLIX objective as you detailed in the introduction)?
false
fV54cBCGEV
v2Vy3sd8EI
Scafflix as described in Algorithm 1, as well as your theoretical analysis, does not consider client subsampling. For the theoretical analysis, this should be mentioned as a drawback IMO.
true
fV54cBCGEV
v2Vy3sd8EI
For your empirical analysis, please comment on how introducing client subsampling (are you equating this to "batch-size" in 4.4.2?) provides insights about your algorithm Scafflix.
true
fV54cBCGEV
v2Vy3sd8EI
For Figure 2, does FLIX, which "uses the SGD method", do multiple local updates (i.e. p<1)? If yes, then the difference to Scafflix would be due to different local learning rates - is that correct?
true
fV54cBCGEV
v2Vy3sd8EI
Assuming I understood correctly and different local learning rates are a key component of Scafflix, what is the distribution of local learning rates that you find empirically? What is the empirical difference compared to using the same local learning rate (which I assume corresponds to the FLIX baseline)?
true
fV54cBCGEV
v2Vy3sd8EI
Specifically, I am missing a more detailed discussion around the elements that make Scafflix different from prior work, i.e., what specifically is the "tuning" of i-Scaffnew for FLIX, and how do you perform the "individualization" of Scaffnew through $\gamma_i$ in the experiments?
true
HZdJgJ8ldH
cPlK8gXlsM
I have some other questions as follows. 1. Can the web search-based method be extended to a continual learning setting?
true
HZdJgJ8ldH
cPlK8gXlsM
Would the method fail if using a non-contrastive pre-trained model?
true
HZdJgJ8ldH
cPlK8gXlsM
The refinement process does not work well on Food and ImageNet datasets. Is there any explanation for this?
true
HZdJgJ8ldH
cPlK8gXlsM
In Figure 4, when using $D_{uncertain}^{cls}$, $D_{uncertain}^{cap}$, $D_{uncertain}^{desc}$ together, why exclude the results with refinement?
true
HZdJgJ8ldH
cPlK8gXlsM
Experiments with LoRA are interesting. However, only the $D_{uncertain}^{cls}$ case is included, and experiments on ImageNet are missing.
false
W478nWXfwO
EY69iHaiiv
I think such a level of modification to the environment doesn't qualify as developing a benchmark. It's more accurate to say that some environments were constructed to verify the paper's conclusions.
false