
A Simple Yet Effective Strategy to Robustify the Meta Learning Paradigm


Posted: 2023-06-19

Speaker: Wang Qi, National University of Defense Technology

Time & Venue: June 21, 11:00 a.m., Room N213

Abstract

Meta learning is a promising paradigm for enabling skill transfer across tasks. Most previous methods employ the empirical risk minimization principle in optimization. However, the resulting poor worst-case fast adaptation on a subset of tasks can be catastrophic in risk-sensitive scenarios. To robustify fast adaptation, this paper optimizes meta learning pipelines from a distributionally robust perspective and meta-trains models with the measure of expected tail risk. We adopt a two-stage strategy as a heuristic to solve the robust meta learning problem, controlling the worst fast-adaptation cases at a certain probabilistic level. Experimental results show that this simple method can improve the robustness of meta learning to the task distribution, alleviate the heavy-tail effect in risk, and reduce the conditional expectation of the worst fast-adaptation risk.


Reference link:

https://github.com/hhq123gogogo/work_in_progress/blob/main/2023A%20Simple%20Yet%20Effective%20Strategy%20to%20Robustify%20the%20Meta%20Learning%20Paradigm.pdf 
