
no code implementations • 1 Oct 2021 • Yue Liu, Ethan X. Fang, Junwei Lu

Our proposed method aims to infer general ranking properties of the BTL model.
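For context, the Bradley-Terry-Luce (BTL) model assigns each item a latent score and models the outcome of a pairwise comparison through those scores. A minimal sketch (the item names and scores below are hypothetical, not from the paper):

```python
import numpy as np

def btl_win_prob(theta_i, theta_j):
    """P(item i beats item j) under the BTL model with latent scores theta."""
    return np.exp(theta_i) / (np.exp(theta_i) + np.exp(theta_j))

# Hypothetical scores for three items; the induced ranking follows the
# ordering of the scores.
theta = {"A": 1.0, "B": 0.0, "C": -1.0}
p_ab = btl_win_prob(theta["A"], theta["B"])
ranking = sorted(theta, key=theta.get, reverse=True)
```

Ranking properties such as "is item A in the top k?" are then statements about the ordering of the latent scores, which is what makes their inference nontrivial when the scores must be estimated from noisy comparisons.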

no code implementations • 15 Aug 2021 • Yan Li, Caleb Ju, Ethan X. Fang, Tuo Zhao

We show that BPPA attains non-trivial margin, which closely depends on the condition number of the distance generating function inducing the Bregman divergence.
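The Bregman proximal point update referenced above can be written schematically as follows (generic notation; the step sizes $\eta_k$ and the distance-generating function $\phi$ are placeholders, not the paper's specific choices):

```latex
x_{k+1} = \operatorname*{arg\,min}_{x} \Big\{ f(x) + \tfrac{1}{\eta_k}\, D_\phi(x, x_k) \Big\},
\qquad
D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle .
```

Roughly speaking, when $\phi$ is strongly convex and smooth, its condition number controls how far $D_\phi$ can deviate from the squared Euclidean distance, which is where the margin dependence noted above can enter.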

no code implementations • 28 Dec 2020 • Han Zhong, Ethan X. Fang, Zhuoran Yang, Zhaoran Wang

In particular, we focus on a variance-constrained policy optimization problem where the goal is to find a policy that maximizes the expected value of the long-run average reward, subject to a constraint that the long-run variance of the average reward is upper bounded by a threshold.
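In symbols, the problem described above takes the schematic form (generic notation, not the paper's exact formulation):

```latex
\max_{\pi} \;\; \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}_\pi\!\left[\sum_{t=1}^{T} r_t\right]
\quad \text{subject to} \quad
\mathrm{Var}_\pi\!\left(\bar{r}\right) \le \sigma^2 ,
```

where $\bar{r}$ denotes the long-run average reward and $\sigma^2$ the variance threshold.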

no code implementations • 4 Sep 2020 • Yining Wang, Yi Chen, Ethan X. Fang, Zhaoran Wang, Runze Li

We consider the stochastic contextual bandit problem under the high dimensional linear model.
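In the high dimensional regime the regression step is typically a sparse (Lasso-type) estimate of the reward model. The sketch below uses a plain ISTA solver and synthetic data purely for illustration; it is not the paper's estimator or exploration scheme:

```python
import numpy as np

def lasso_ista(X, y, lam, steps=500):
    """Minimise (1/2n)||y - Xw||^2 + lam*||w||_1 via ISTA.
    Standard textbook solver, shown only to illustrate the sparse
    regression step; not the paper's method."""
    n, d = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(steps):
        z = w - lr * (X.T @ (X @ w - y)) / n
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
n, d = 100, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]                 # sparse "true" reward model
X = rng.standard_normal((n, d))
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = lasso_ista(X, y, lam=0.05)
contexts = rng.standard_normal((5, d))        # five candidate arms' contexts
best_arm = int(np.argmax(contexts @ w_hat))   # exploit the estimated model
```

A bandit algorithm would alternate such estimation with some form of exploration; the point here is only that sparsity makes the regression tractable when $d$ is large relative to the number of observed rewards.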

no code implementations • ICLR 2020 • Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao

Specifically, we show that for any fixed iteration $T$, when the adversarial perturbation during training has a properly bounded $\ell_2$-norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum $\ell_2$-norm margin classifier at the rate of $O(1/\sqrt{T})$, significantly faster than the rate $O(1/\log T)$ of training with clean data.

no code implementations • 7 Jun 2019 • Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao

Specifically, we show that when the adversarial perturbation during training has bounded $\ell_2$-norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum $\ell_2$-norm margin classifier at the rate of $\tilde{\mathcal{O}}(1/\sqrt{T})$, significantly faster than the rate $\mathcal{O}(1/\log T)$ of training with clean data.
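For a linear classifier the worst-case $\ell_2$-perturbation of radius $\varepsilon$ has a closed form, which makes the adversarial training loop easy to sketch. The code below is an illustrative toy (logistic loss, hand-picked data), not the paper's exact procedure:

```python
import numpy as np

def adv_train_step(w, X, y, eps, lr):
    """One gradient-descent step of l2-bounded adversarial training on the
    logistic loss for a linear classifier.  For a linear model the
    margin-minimising perturbation of radius eps is -eps * y * w / ||w||."""
    w_norm = np.linalg.norm(w) + 1e-12
    X_adv = X - eps * y[:, None] * w[None, :] / w_norm  # worst-case inputs
    margins = y * (X_adv @ w)
    # gradient of mean log(1 + exp(-margin)) w.r.t. w (Danskin-style step)
    grad = -(y[:, None] * X_adv / (1.0 + np.exp(margins))[:, None]).mean(axis=0)
    return w - lr * grad

# Tiny separable toy data; eps is smaller than the data margin.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.zeros(2)
for _ in range(500):
    w = adv_train_step(w, X, y, eps=0.5, lr=0.1)
```

On this symmetric toy problem the iterates align with the max-margin direction $(1, 1)/\sqrt{2}$; the result quoted above quantifies how fast that directional convergence happens in general.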

no code implementations • 18 Dec 2017 • Zhuoran Yang, Lin F. Yang, Ethan X. Fang, Tuo Zhao, Zhaoran Wang, Matey Neykov

Existing nonconvex statistical optimization theory and methods crucially rely on the correct specification of the underlying "true" statistical models.

no code implementations • 24 Sep 2016 • Ethan X. Fang, Han Liu, Kim-Chuan Toh, Wen-Xin Zhou

This paper studies the matrix completion problem under arbitrary sampling schemes.
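A common baseline for matrix completion is iterative SVD soft-thresholding (soft-impute style). The sketch below is that generic heuristic on synthetic rank-1 data; the paper's estimator for arbitrary sampling schemes is different and is not reproduced here:

```python
import numpy as np

def soft_impute(M_obs, mask, lam, iters=300):
    """Matrix completion via iterative SVD soft-thresholding (generic
    nuclear-norm-style heuristic, shown only for illustration)."""
    Z = np.zeros_like(M_obs)
    for _ in range(iters):
        filled = np.where(mask, M_obs, Z)            # keep observed entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt      # shrink singular values
    return Z

rng = np.random.default_rng(0)
u = np.linspace(1.0, 2.0, 10)
M = np.outer(u, u)                                   # rank-1 ground truth
mask = rng.random(M.shape) < 0.7                     # ~70% entries observed
M_hat = soft_impute(np.where(mask, M, 0.0), mask, lam=0.1)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Guarantees for such procedures usually assume uniform sampling of the observed entries; handling arbitrary sampling schemes, as in the abstract above, is precisely what requires a different analysis.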

no code implementations • NeurIPS 2016 • Mengdi Wang, Ji Liu, Ethan X. Fang

The ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with nonsmooth regularization penalty.
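Schematically, the problem class and a proximal-gradient step look as follows (generic notation; $\widehat{\nabla} F$ stands for some stochastic estimate of the composition gradient, not the paper's exact construction):

```latex
\min_{x} \; f\big(\mathbb{E}[g(x;\xi)]\big) + R(x),
\qquad
x_{k+1} = \operatorname{prox}_{\alpha_k R}\!\big(x_k - \alpha_k\, \widehat{\nabla} F(x_k)\big),
```

where $R$ is the (possibly nonsmooth) regularization penalty, handled through its proximal operator rather than through its gradient.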

no code implementations • 16 Dec 2014 • Ethan X. Fang, Yang Ning, Han Liu

This paper proposes a decorrelation-based approach to test hypotheses and construct confidence intervals for the low dimensional component of high dimensional proportional hazards models.
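The decorrelation idea can be sketched for a low dimensional parameter of interest $\theta$ and a high dimensional nuisance $\gamma$ (schematic form of the standard decorrelated score construction, not the paper's exact statistic):

```latex
\widehat{S}(\theta) \;=\; \nabla_\theta\, \ell(\theta, \widehat{\gamma}) \;-\; \widehat{w}^{\top} \nabla_\gamma\, \ell(\theta, \widehat{\gamma}),
```

where $\ell$ is the partial log-likelihood and $\widehat{w}$ is estimated (e.g., via a sparse regression) so that the score for $\theta$ is approximately uncorrelated with the nuisance score, restoring a tractable limiting distribution in high dimensions.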

no code implementations • 14 Nov 2014 • Mengdi Wang, Ethan X. Fang, Han Liu

For smooth convex problems, the SCGD can be accelerated to converge at a rate of $O(k^{-2/7})$ in the general case and $O(k^{-4/5})$ in the strongly convex case.
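The SCGD update maintains a running estimate of the inner expectation alongside the iterate. The sketch below runs it on a toy instance ($f(y) = y^2$, $g(x;\xi) = x + \xi$, so the composition is $F(x) = x^2$ with minimizer $0$); the problem instance and step sizes are illustrative choices, not the paper's:

```python
import numpy as np

def scgd(x0, alpha, beta, steps, rng):
    """Basic two-time-scale SCGD for min_x f(E[g(x; xi)]) on the toy
    instance f(y) = y**2, g(x; xi) = x + xi with xi ~ N(0, 1).
    Illustrative sketch of the update, not the paper's code."""
    x, y = x0, 0.0
    for _ in range(steps):
        xi = rng.standard_normal()
        y = (1 - beta) * y + beta * (x + xi)  # fast scale: track E[g(x; xi)]
        grad = 1.0 * (2.0 * y)                # chain rule: g'(x) * f'(y)
        x = x - alpha * grad                  # slow scale: descend on x
    return x

rng = np.random.default_rng(0)
x_star = scgd(x0=5.0, alpha=0.02, beta=0.1, steps=5000, rng=rng)
```

The auxiliary variable `y` is what distinguishes SCGD from plain SGD: a single sample of $g(x;\xi)$ plugged into $\nabla f$ would give a biased gradient of the composition, and the averaging step mitigates that bias, at the cost of the slower rates quoted above.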
