{"667159":{"#nid":"667159","#data":{"type":"event","title":"CSIP Seminar: Beyond UCB: The Curious Case of Non-linear Ridge Bandits","body":[{"value":"\u003Ch3\u003E\u003Cstrong\u003ECenter for Signals and Information Processing (CSIP)\u0026nbsp;Seminar\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDate:\u003C\/strong\u003E\u0026nbsp;Tuesday, April 11,\u0026nbsp;2023\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETime:\u003C\/strong\u003E\u0026nbsp;3:00 p.m. - 4:00 p.m. EDT\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELocation:\u0026nbsp;\u003C\/strong\u003ECentergy Building 5126.\u0026nbsp;The associated Zoom link is:\u0026nbsp;\u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/99851266161\u0022 target=\u0022_blank\u0022 title=\u0022https:\/\/gatech.zoom.us\/j\/99851266161\u0022\u003Ehttps:\/\/gatech.zoom.us\/j\/99851266161\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESpeaker:\u0026nbsp;\u003C\/strong\u003ENived Rajaraman\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESpeaker\u0027s Title:\u003C\/strong\u003E\u0026nbsp;Fourth-year Ph.D. student in the EECS Department at Berkeley\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESeminar Title:\u0026nbsp;\u003C\/strong\u003EBeyond UCB: The Curious Case of Non-linear Ridge Bandits\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAbstract:\u0026nbsp;\u003C\/strong\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThere is a large volume of work on bandits and reinforcement learning when the reward\/value function satisfies some form of linearity. But what happens if the reward is non-linear? 
Two curious phenomena arise for non-linear bandits: first, in addition to the \u0022learning phase\u0022 with a standard\u0026nbsp;\u221aT\u0026nbsp;regret, there is an \u0022initialization phase\u0022 with a fixed sample cost 
determined by the nature of the reward function; second, achieving the smallest sample cost in the initialization phase requires new learning algorithms beyond traditional ones such as UCB.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EFor a special family of non-linear bandits taking the form of a \u201cridge\u201d function\u0026nbsp;f(\u27e8\u03b8, x\u27e9)\u0026nbsp;for a non-linear monotone function\u0026nbsp;f, we derive upper and lower bounds on the optimal fixed cost of learning, and in addition, on the entire \u201clearning trajectory\u201d via differential equations. 
In particular, we propose a two-stage exploration algorithm that first finds a good initialization and subsequently exploits local linearity in the learning phase. We prove that this algorithm is statistically optimal. In contrast, several classical and celebrated algorithms, such as UCB and algorithms relying on online\/offline regression oracles, are proven to be suboptimal.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThis talk is based on recent joint work with Yanjun Han, Jiantao Jiao, and Kannan Ramchandran:\u0026nbsp;\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2302.06025\u0022\u003Ehttps:\/\/arxiv.org\/abs\/2302.06025\u003C\/a\u003E.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESpeaker Bio:\u003C\/strong\u003E\u0026nbsp;Nived Rajaraman is currently a fourth-year Ph.D. student in the EECS Department at Berkeley, advised by Jiantao Jiao and Kannan Ramchandran. He received his undergraduate degree from IIT Madras in 2019. His research interests lie in reinforcement learning, online learning and bandits, and statistical machine learning and its interplay with non-convex optimization.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENived Rajaraman,\u0026nbsp;a fourth-year Ph.D. 
student in the EECS Department at Berkeley,\u0026nbsp;will present the April 11 CSIP Seminar, \u0022Beyond UCB: The Curious Case of Non-linear Ridge Bandits.\u0022\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Featuring Nived Rajaraman, Ph.D. candidate in the EECS Department at Berkeley"}],"uid":"36172","created_gmt":"2023-04-07 10:41:37","changed_gmt":"2023-04-07 10:52:01","author":"dwatson71","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2023-04-11T15:00:00-04:00","event_time_end":"2023-04-11T16:00:00-04:00","event_time_end_last":"2023-04-11T16:00:00-04:00","gmt_time_start":"2023-04-11 19:00:00","gmt_time_end":"2023-04-11 20:00:00","gmt_time_end_last":"2023-04-11 20:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Centergy Building 5126","extras":[],"groups":[{"id":"1255","name":"School of Electrical and Computer Engineering"}],"categories":[],"keywords":[{"id":"192224","name":"CSIP Seminar"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1795","name":"Seminar\/Lecture\/Colloquium"}],"invited_audience":[{"id":"78761","name":"Faculty\/Staff"},{"id":"78771","name":"Public"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EKiran Kokilepersaud\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:kpk6@gatech.edu\u0022\u003Ekpk6@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}