BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230831T095745Z
LOCATION:Dischma
DTSTART;TZID=Europe/Stockholm:20230626T150000
DTEND;TZID=Europe/Stockholm:20230626T153000
UID:submissions.pasc-conference.org_PASC23_sess162_msa223@linklings.com
SUMMARY:Graph Neural Networks for Interpretable Data-Based Modeling of Flu
 id Flows
DESCRIPTION:Minisymposium\n\nShivam Barwey (Argonne National Laboratory), 
 Varun Shankar and Venkatasubramanian Viswanathan (Carnegie Mellon Universi
 ty), and Venkatram Vishwanath and Romit Maulik (Argonne National Laborator
 y)\n\nReduced-order modeling strategies based on neural networks can accel
 erate traditional computational fluid dynamics simulations for rapid desig
 n optimization and prediction of a wide range of fluid flows. To realize t
 his vision of improved modeling, key limitations -- namely, incompatibilit
 y with unstructured data representations and latent space interpretability
  -- prohibiting their extension into practical flow configurations must be
  tackled. This work addresses these limitations with a novel graph neural 
 network (GNN) architecture. In the context of fluid flow compression, it i
 s shown how the method produces a latent graph that (a) can be visualized 
 in physical space directly, (b) identifies coherent structures in the doma
 in, and (c) is described by an adjacency matrix that adapts in time with t
 he evolution of the flow. Model outputs are assessed on an unsteady and un
 structured turbulent fluid flow dataset for both autoencoding and forecast
 ing applications, and additional emphasis is placed on demonstrating the s
 calability of underlying graph operations on modern GPU-based compute node
 s. Through the ability to unify autoencoder-based reduction and physical i
 nterpretability into a single framework, this work presents a pathway for 
 improved data-driven modeling in complex geometry configurations.\n\nDomai
 n: Computer Science, Machine Learning, and Applied Mathematics\n\nSessio
 n Chairs: Timothy C Germann (Los Alamos National Laboratory) and Ramesh Ba
 lakrishnan (Argonne National Laboratory)
END:VEVENT
END:VCALENDAR
