Title: Generative AI – Opportunities, Challenges, and Open Questions

Generative AI has received significant attention due to the tremendous success of ChatGPT. Large foundation models have been trained, enabling demos and potential applications such as text-to-image and text-to-video cross-domain generation. Substantial resources have been invested in building massive computation and storage infrastructure. Furthermore, data collection and cleaning are essential to high system performance. In the face of these rapid developments, this panel will discuss opportunities, challenges, and open questions associated with generative AI. Some exemplary topics are given below:

  • Today’s generative AI is tilted more toward “engineering” than “science.” Will this be a concern in the long run?
  • What are the major shortcomings of the current large foundation models?
  • How vital are “data collection and cleaning” tasks in generative AI? How do large companies carry out such tasks? Will we run out of data? If so, how soon?
  • Will “copyright,” “plagiarism,” and “hallucination” be issues? How can we address them? How can we trust the answers?
  • What roles can small AI companies and academia with limited resources play?
  • What are the future R&D directions of generative AI? What will be the next big breakthroughs?


Moderators:

C.-C. Jay Kuo, University of Southern California

Zicheng Liu, AMD

Panelists:

Rogerio Feris, IBM

Lijuan Wang, Microsoft

Jiebo Luo, University of Rochester

Junsong Yuan, State University of New York at Buffalo