Hiroki Naganuma

Papers

On the Implicit Geometry of Cross-Entropy Parameterizations for Label-Imbalanced Data

The Role of Codeword-to-Class Assignments in Error Correcting Codes: An Empirical Study

Do Bayesian Neural Networks Need To Be Fully Stochastic?

Toward Fairness in Text Generation via Mutual Information Minimization based on Importance Sampling

Domain Adaptation under Missingness Shift

Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise

[Adapting to Latent Subgroup Shifts via Concepts and Proxies](https://virtual.aistats.org/virtual/2023/poster/5823)

On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks

Federated Learning under Distributed Concept Drift

On double-descent in uncertainty quantification in overparametrized models

Don’t be fooled: label leakage in explanation methods and the importance of their quantitative evaluation

Explicit Regularization in Overparametrized Models via Noise Injection

Covariate-informed Representation Learning to Prevent Posterior Collapse of iVAE

The ELBO of Variational Autoencoders Converges to a Sum of Entropies

Linear Convergence of Gradient Descent For Overparametrized Finite Width Two-Layer Linear Networks With General Initialization

Mind the (optimality) Gap: A Gap-Aware Learning Rate Scheduler for Adversarial Nets

Riemannian Accelerated Gradient Methods via Extrapolation

Global-Local Regularization Via Distributional Robustness

Unifying local and global model explanations by functional decomposition of low dimensional structures

Acknowledgements

I would like to thank my PhD supervisor, Ioannis Mitliagkas, for supporting my participation in AISTATS 2023, and to express my deepest gratitude for his support!