BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230831T095745Z
LOCATION:Dischma
DTSTART;TZID=Europe/Stockholm:20230626T143000
DTEND;TZID=Europe/Stockholm:20230626T150000
UID:submissions.pasc-conference.org_PASC23_sess162_msa116@linklings.com
SUMMARY:Deep-Reinforcement-Learning-Based Drag Reduction in Turbulent Chan
 nel Flows
DESCRIPTION:Minisymposium\n\nRicardo Vinuesa and Luca Guastoni (KTH Royal 
 Institute of Technology), Jean Rabault (Norwegian Meteorological Institute
 ), and Hossein Azizpour (KTH Royal Institute of Technology)\n\nWe introduc
 e a reinforcement-learning (RL) environment to design and benchmark contro
 l strategies aimed at reducing drag in turbulent fluid flows enclosed in a
  channel. The environment provides a framework for computationally effici
 ent, parallelized, high-fidelity fluid simulations, ready to interface with
  established RL agent programming interfaces. This allows for testing
  existing deep reinforcement learning (DRL) algorithms against a complex, 
 turbulent physical system. The control is applied in the form of blowing a
 nd suction at the wall, while the observable state is defined as the veloc
 ity fluctuations at a given distance from the wall. Given the complex nonl
 inear nature of turbulent flows, the control strategies proposed so far in
  the literature are physically grounded, but too simple. DRL, by contrast,
  enables leveraging high-dimensional data to design advanced control strat
 egies. In an effort to establish a benchmark for testing data-driven contr
 ol strategies, we compare opposition control, a state-of-the-art turbulenc
 e-control strategy from the literature, and a commonly used DRL algorithm,
  deep deterministic policy gradient. Our results show that DRL leads to 43
 % and 30% drag reduction in a minimal and a larger channel (at a friction 
 Reynolds number of 180), respectively, outperforming the classical opposit
 ion control by around 20 and 10 percentage points, respectively.\n\nDomain
 : Computer Science, Machine Learning, and Applied Mathematics\n\nSession
  Chairs: Timothy C Germann (Los Alamos National Laboratory) and Ramesh Bal
 akrishnan (Argonne National Laboratory)
END:VEVENT
END:VCALENDAR
