{"id":3662,"date":"2023-03-15T19:16:06","date_gmt":"2023-03-15T11:16:06","guid":{"rendered":"http:\/\/www.cgs-network.org\/cgi23\/?page_id=3662"},"modified":"2023-05-08T21:49:15","modified_gmt":"2023-05-08T13:49:15","slug":"cgi-nfr2023","status":"publish","type":"page","link":"http:\/\/www.cgs-network.org\/cgi23\/cgi-nfr2023\/","title":{"rendered":"CGI-Neural Fluid Rendering (CGI-NFR2023) Challenge"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">Challenge Introduction<\/h3>\n\n\n\n<p>The progress made in neural rendering has led to a multitude of applications in computer graphics and 3D vision. However, the existing methods, such as the Neural Radiance Field (NeRF), have primarily focused on modeling static scenes or dynamic scenes featuring rigid bodies, rather than grasping the physical environment as a whole. This presents a challenging question on how to develop a neural renderer from sequences of multi-view images. To address this issue, we introduce a dataset consisting of multi-view images of various fluid scenes with distinct fluid properties. 
The competition invites researchers to devise models that account for the physical properties of fluids when generating novel views, and to extend these models to simulate fluid dynamics and forecast future frames.<\/p>\n\n\n\n<p><strong>Task 1: Novel view synthesis<\/strong><\/p>\n\n\n\n<p>This task focuses on rendering novel views of given scenes. Participants are expected to render images from 5 novel views, whose camera poses are provided, for each of 10 scenes over the 50 training time steps (#0 &#8211; #49).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/X7kkSrK1_o-678x1024.png\" alt=\"\" class=\"wp-image-3754\" width=\"339\" height=\"512\" srcset=\"http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/X7kkSrK1_o-678x1024.png 678w, http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/X7kkSrK1_o-199x300.png 199w, http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/X7kkSrK1_o-768x1160.png 768w, http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/X7kkSrK1_o.png 1235w\" sizes=\"auto, (max-width: 339px) 100vw, 339px\" \/><\/figure><\/div>\n\n\n\n<p><strong>Task 2: Future roll-outs<\/strong><\/p>\n\n\n\n<p>This task focuses on predicting future roll-outs. Participants are expected to render the #59 time step from the first view of every scene in the dataset.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/V4jWjlmS_o-1024x308.png\" alt=\"\" class=\"wp-image-3755\" width=\"512\" height=\"154\" srcset=\"http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/V4jWjlmS_o-1024x308.png 1024w, http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/V4jWjlmS_o-300x90.png 300w, 
http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/V4jWjlmS_o-768x231.png 768w, http:\/\/www.cgs-network.org\/cgi23\/wp-content\/uploads\/2023\/03\/V4jWjlmS_o.png 1621w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/figure><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Challenge Organization<\/h3>\n\n\n\n<p><strong>Challenge Co-Chairs<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Yunbo Wang, Shanghai Jiao Tong University, China<\/li><\/ul>\n\n\n\n<p><strong>Organizing Team<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Xiangming Zhu, Shanghai Jiao Tong University, China<\/li><li>Haijian Chen, Shanghai Jiao Tong University, China<\/li><li>Haochen Yuan, Shanghai Jiao Tong University, China<\/li><li>Hong-Xing Yu, Stanford University, USA<\/li><li>Yunbo Wang, Shanghai Jiao Tong University, China<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Challenge Webpage<\/h3>\n\n\n\n<p><em><strong><a href=\"https:\/\/codalab.lisn.upsaclay.fr\/competitions\/11567\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"CGI-NFR2023 (opens in a new window)\">CGI-NFR2023<\/a><\/strong><\/em><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Challenge Contact<\/h3>\n\n\n\n<p>If you have any questions regarding the challenge, please feel free to post them on the forum. 
Additionally, if you require further information, please contact higerchen&nbsp;[at]&nbsp;sjtu.edu.cn.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Challenge Introduction The pro &#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-3662","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/pages\/3662","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/comments?post=3662"}],"version-history":[{"count":21,"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/pages\/3662\/revisions"}],"predecessor-version":[{"id":3834,"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/pages\/3662\/revisions\/3834"}],"wp:attachment":[{"href":"http:\/\/www.cgs-network.org\/cgi23\/wp-json\/wp\/v2\/media?parent=3662"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}