Keynote Speaker 1

Mohammed Bennamoun, Winthrop Professor
The University of Western Australia, Department of Computer Science and Software Engineering

Title: 3D Vision for Intelligent Robots
Abstract: In structured settings like industrial environments, robotic technology has exhibited remarkable efficiency. However, its deployment in dynamic and less predictable environments, such as domestic settings, remains a challenge. Robots often surpass human abilities in areas like agility, power, and precision. Yet they still encounter difficulties in tasks such as object and person identification, linguistic interpretation, manual dexterity, and social interaction and understanding.
The quest for computer vision systems mirroring human visual abilities has been arduous. Two primary obstacles have been: (i) the absence of 3D sensors that can parallel the human eye’s capability to concurrently record visual attributes (e.g., colour and texture) and the dynamic surface shapes of objects, and (ii) the lack of real-time data processing algorithms. However, with the recent emergence of cost-effective 3D sensors, there has been a surge in the creation of functional 3D systems. These span from 3D biometric systems, e.g., for face recognition, to home robotic systems that assist the elderly with mild cognitive impairment.
The objective of the talk is to describe a few 3D computer vision projects and tools used towards the development of a platform for assistive robotics in messy living environments. Various systems, their applications, and their motivations will be described, including 3D object recognition, 3D face/ear biometrics, grasping of unknown objects, and systems to estimate the 3D pose of a person.

Mohammed Bennamoun is a Winthrop Professor in the Department of Computer Science and Software Engineering at the University of Western Australia (UWA) and a researcher in computer vision, machine/deep learning, robotics, and signal/speech processing. He has published 4 books (available on Amazon), 1 edited book, 1 encyclopedia article, 14 book chapters, 200+ journal papers, 270+ conference publications, and 16 invited and keynote papers. His h-index is 72 and his citation count is 25,200+ (Google Scholar). He has been awarded 70+ competitive research grants from the Australian Research Council and numerous other government, UWA, and industry sources. He has successfully supervised 30+ PhD students to completion. He won the Best Supervisor of the Year Award at Queensland University of Technology (1998), received awards for research supervision at UWA (2008 and 2016), and received the Vice-Chancellor’s Award for mentorship (2016). He has delivered tutorials at major conferences, including IEEE CVPR 2016, Interspeech 2014, IEEE ICASSP, and ECCV, and was invited to give a tutorial at the International Summer School on Deep Learning (DeepLearn 2017).

Keynote Speaker 2

Alyn Rockwood, Chief Scientist
Boulder Graphics

Talk Title: Splossoms: Spherical Blossoms, a Spherical Analog for Polynomial Curves.
Abstract: The blossom of a polynomial is a multi-affine function on Euclidean space with the same number of variables as the degree of the polynomial. It provides many insights into the polynomial and simplifies methods not otherwise apparent. One example is the de Casteljau algorithm for computing and subdividing a Bézier curve. This report describes a blossom for a parametric de Casteljau-like curve on the sphere, leading to similar insights and simplification of algorithms on the sphere. Two earlier such methods are the well-known SLERP and SQUAD interpolations of points on the sphere. These methods are re-formulated with our new concept, the splossom, which plays the role of a blossom in spherical space. Some of its implications are briefly sketched to illustrate its potential.
The splossom itself is neatly described in terms of spinors in Geometric Algebra. This development follows the Geometric Algebra approach and points to considerable further research within its broad vista.
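As a rough illustration (not taken from the talk), the de Casteljau construction carries over to the sphere by replacing each affine interpolation step with SLERP, which is the kind of de Casteljau-like spherical curve the abstract refers to. A minimal NumPy sketch, with all function names my own, under the assumption that control points are unit vectors:

```python
import numpy as np

def slerp(p0, p1, t):
    """Spherical linear interpolation between unit vectors p0 and p1."""
    theta = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))  # angle between points
    if np.isclose(theta, 0.0):
        return p0.copy()  # coincident points: nothing to interpolate
    return (np.sin((1 - t) * theta) * p0 + np.sin(t * theta) * p1) / np.sin(theta)

def spherical_de_casteljau(points, t):
    """Evaluate a de Casteljau-like curve on the sphere by repeatedly
    slerping adjacent control points until one point remains."""
    pts = [np.asarray(p, dtype=float) for p in points]
    while len(pts) > 1:
        pts = [slerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]
```

Because every intermediate point is produced by SLERP, the result stays on the unit sphere by construction, mirroring how the planar de Casteljau algorithm stays inside the control polygon’s affine hull.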

Alyn Rockwood is Chief Scientist at Boulder Graphics, developing 3D computer graphics. Until recently, he was Professor of Applied Mathematics and Associate Director of the Geometric Modeling and Scientific Visualization Research Center at King Abdullah University of Science & Technology (KAUST) in Saudi Arabia. Dr. Rockwood has been involved with computer graphics and related research for more than 35 years. At the pioneering graphics company Evans and Sutherland, he led a team that first achieved certification for a pilot training simulator, which allowed pilots to train completely for new aircraft on a simulator. At Silicon Graphics, Inc., he developed the method for rendering curved surfaces in real time that is integral to OpenGL today. He was SIGGRAPH Papers Chair in 1999, Conference Chair in 2003, and SIGGRAPH Asia Papers Chair in 2013. Before moving to KAUST, Dr. Rockwood held academic positions at both Arizona State University and Colorado School of Mines. He has received several teaching awards, the COFES 2007 Innovation in Technology Award, the CAD Society “Heroes of Engineering” Award, and the SIGGRAPH Outstanding Service Award (2017). He received his PhD in applied mathematics from Cambridge University, UK.

Keynote Speaker 3

Daisuke Iwai, Associate Professor
Osaka University, Graduate School of Engineering Science

Talk Title: Appearance Editing of Real-World Objects using Projection Mapping
Abstract: Projection mapping, also known as spatial augmented reality (AR), overlays computer graphics onto physical surfaces and provides users with an AR experience without requiring them to wear or hold any bothersome display hardware. It has been applied in various fields, not only in entertainment but also in medicine, design, education, and makeup. The ultimate goal of projection mapping is to alter the appearance of real-world objects to achieve desired colors and textures, similar to what we can achieve in computer graphics. However, achieving flexible control of surface appearance is not always straightforward due to technical limitations of the projector hardware. For example, projected content becomes blurred on non-planar surfaces due to the shallow depth-of-field of a projector and gets shadowed when a user’s body occludes the projected light. To overcome these technical limitations, we have applied a computational display approach in which we jointly optimize the projector hardware, algorithms, and the target surface while considering human perceptual properties. In this talk, I will introduce some of these technologies that not only enable flexible control of real-world appearances but also provide a novel visual experience that goes beyond what is possible with conventional optics. Lastly, I will present our recent neural projection mapping framework that allows users to edit the appearance of real-world objects using natural language scripts.

Daisuke Iwai is an Associate Professor at the Graduate School of Engineering Science, Osaka University in Japan. After receiving his PhD degree from Osaka University in 2007, he started his career at Osaka University. He was also a visiting scientist at Bauhaus-University Weimar, Germany, from 2007 to 2008, and a visiting Associate Professor at ETH, Switzerland, in 2011. His research interests include augmented reality, projection mapping, and human-computer interaction. He is currently serving as an Associate Editor of IEEE Transactions on Visualization and Computer Graphics (TVCG), and previously served as a Program Chair of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2021, 2022) and the IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (2022). His publications received Best Paper Awards at IEEE VR (2015), the IEEE Symposium on 3D User Interfaces (3DUI) (2015), and IEEE ISMAR (2021). He is a recipient of the JSPS Prize (2023).

Keynote Speaker 4

Prof. Hongbo Fu
City University of Hong Kong, School of Creative Media

Talk title: Towards More Accessible Tools for Content Creation
Abstract: Traditional game and film industries heavily rely on professional artists to make 2D and 3D visual content. In contrast, future industries such as the metaverse and 3D printing highly demand digital content from personal users. With modern software, ordinary users can easily produce text documents, create simple drawings, make simple 3D models consisting of primitives, take images/videos, and possibly edit them with pre-defined filters. However, creating photorealistic images from scratch, fine-grained image retouching (e.g., for body reshaping), detailed 3D modeling, vivid 3D animation, etc., often require extensive training with professional software and are time-consuming, even for skillful artists. Generative AI, e.g., ChatGPT and Midjourney, has recently taken a big step forward and allows the easy generation of unique and high-quality images from text prompts. However, various problems, such as controllability and generation beyond images, still need to be solved. Besides AI, recent advances in Augmented/Virtual Reality (AR/VR) software and hardware bring unique challenges and opportunities for content creation. In this talk, I will introduce my attempts to lower the barrier to content creation, making such tools more accessible to novice users. I will mainly focus on sketch-based portrait generation and content creation with AR/VR.

Hongbo Fu is a Professor at the School of Creative Media, City University of Hong Kong. Before joining CityU, he had postdoctoral research training at the Imager Lab, University of British Columbia, Canada, and the Department of Computer Graphics, Max-Planck-Institut Informatik, Germany. He received a Ph.D. degree in computer science from the Hong Kong University of Science and Technology in 2007 and a BS degree in information sciences from Peking University, China, in 2002. His primary research interests fall in computer graphics, human-computer interaction, and computer vision. His research has led to over 100 scientific publications, including 60+ papers in the best graphics/vision journals (ACM TOG, IEEE TVCG, IEEE PAMI) and 20+ papers in the best vision/HCI conferences (CVPR, ICCV, ECCV, CHI, UIST). His recent works have received a Silver Medal from Special Edition 2022 Inventions Geneva Evaluation Days (IGED), the Best Demo awards at the Emerging Technologies program, SIGGRAPH Asia in 2013 and 2014, and the Best Paper awards from CAD/Graphics 2015 and UIST 2019.
He was the Organization Co-Chair of Pacific Graphics 2012, the Program Chair/Co-chair of CAD/Graphics 2013 & 2015, SIGGRAPH Asia 2013 (Emerging Technologies) & 2014 (Workshops), Pacific Graphics 2018, Computational Visual Media 2019, and the Conference Chair of SIGGRAPH Asia 2016 and Expressive 2018. He was on the SIGGRAPH Asia Conference Advisory Group and is currently Vice-Chairman of the Asia Graphics Association. He has served as an Associate Editor of The Visual Computer, Computers & Graphics, and Computer Graphics Forum.

Special Session – 3D Medical Image Processing, Quality Enhancement and Analysis

Keynote Speaker 5

Yudong Zhang
University of Leicester, School of Computing and Mathematical Sciences

Talk title: Recent Advances in Medical Image Processing and Analysis
Abstract: The medical image processing and analysis field has witnessed remarkable advancements in recent years, largely attributed to the incredible potential of artificial intelligence and deep learning theories and techniques. This talk aims to provide an overview of our group’s advancements in artificial intelligence in medical image processing and analysis. The talk will begin with an introduction to deep learning and its vital variants, such as convolutional neural networks, advanced pooling networks, graph convolutional networks, attention neural networks, weakly supervised networks, vision transformers, etc. We will explore how these neural networks can be tailored and applied to various medical imaging modalities, including magnetic resonance imaging, computed tomography, and histopathology slides. Furthermore, we will discuss the challenges faced in medical image processing and analysis, such as limited labeled data, class imbalance, and interpretability, and delve into the theories and techniques employed to mitigate these issues.

Prof. Yudong Zhang is a Chair Professor at the School of Computing and Mathematical Sciences, University of Leicester, UK. His research interests include deep learning and medical image analysis. He is a Fellow of IET, a Fellow of EAI, and a Fellow of BCS, a Senior Member of IEEE and ACM, and an ACM Distinguished Speaker. He was named a Clarivate Highly Cited Researcher in 2019, 2021, and 2022. He has (co)authored over 400 peer-reviewed articles, including more than 60 ESI Highly Cited Papers and 6 ESI Hot Papers. His citations have reached 27,567 on Google Scholar (h-index 91). He is an editor of Neural Networks, IEEE TITS, IEEE TCSVT, IEEE JBHI, etc. He has conducted many successful industrial projects and secured academic grants from the NIH, Royal Society, British Council, GCRF, EPSRC, MRC, BBSRC, Hope, and NSFC. He has served as a (Co-)Chair for more than 60 international conferences (including more than 20 IEEE or ACM conferences). His research outputs have been reported by more than 70 news outlets, including Reuters, BBC, The Telegraph, the Mirror, Physics World, and UK Today News.

Special Session – 3D Medical Image Processing, Quality Enhancement and Analysis

Keynote Speaker 6

Lichi Zhang
Shanghai Jiao Tong University, School of Biomedical Engineering

Title: Intelligent Medical Image Analysis and Computer-aided Diagnosis
Abstract: Medical image analysis and computer-aided diagnosis are in high demand nowadays, as they can assist doctors in alleviating the diagnostic burden and resolving subjectivity issues in the interpretation of medical images. There have recently been significant advancements in these fields through the integration of deep learning techniques, which have developed rapidly over the last decade. However, several challenges still need to be addressed in actual clinical scenarios, including the high variability and complex anatomical structures in medical images, the lack of interpretability of deep learning models, and limitations in data collection for model training. This talk will introduce our recent research in medical image analysis and computer-aided diagnosis in three parts: brain MR image processing, computer-aided diagnosis of knee osteoarthritis (OA), and TCT histopathology image processing for high-throughput screening. I will also present the methods relevant to these topics, such as image segmentation, object detection, and image reconstruction, and show how they can overcome the aforementioned challenges.

Bio: Lichi Zhang is an Associate Professor at the School of Biomedical Engineering, Shanghai Jiao Tong University. He received a Ph.D. degree in computer science from the University of York, UK, and a BS degree in network engineering from Beijing University of Posts and Telecommunications, China. From 2014 to 2017, he was a postdoctoral researcher at the University of North Carolina at Chapel Hill, US, and Shanghai Jiao Tong University, China. He was selected for the Shanghai Pujiang Talent Program, and has hosted and participated in grants from the National Natural Science Foundation of China and the National Key Research and Development Program of China, among others. His research interests include medical image analysis, computer-aided diagnosis, and computer vision. He has published more than 90 academic papers in Medical Image Analysis, IEEE TMI, Pattern Recognition, NPJ Digital Medicine, MICCAI, and other journals and conferences renowned in the fields of medical image analysis and computer vision. He also serves as a Junior Editor of the journal Aging and Disease and a Guest Associate Editor of Frontiers in Neuroscience.