Offline Event × Shenzhen | Star-Studded Lineup! Registration Opens for the 11th International Doctoral Forum
The International Doctoral Forum is an academic exchange event jointly launched by Tsinghua University and The Chinese University of Hong Kong in 2006, and now has a 12-year history. Northwestern Polytechnical University joined as a co-organizer in 2014. The forum is hosted in rotation by Tsinghua University, The Chinese University of Hong Kong, and Northwestern Polytechnical University, and aims to promote exchange and collaboration among faculty and students in related fields at universities in Beijing, Hong Kong, Shenzhen, and Xi'an.
The forum covers topics including Multimedia, Natural Language Processing, Web Mining, Networking and Big Data, and Artificial Intelligence, and invites scholars and experts from academia and industry in these fields to give invited talks and offer guidance. The forum also holds round-table discussions, academic idea showcases, and other special activities to promote exchange and collaboration among participating faculty and students.
The doctoral forum provides students with a platform for learning and exchange. Faculty members in the relevant fields from Tsinghua University, The Chinese University of Hong Kong, and Northwestern Polytechnical University serve as the advisory committee, while the students themselves plan and organize every aspect of the forum, including topic selection, paper submission and review, invitation of keynote speakers, scheduling, best-paper selection, and local arrangements, which provides effective all-round training. Building on this platform, many participants of previous forums have grown into outstanding young scholars with considerable influence in their fields.
Since its first edition in 2006, the forum is now in its eleventh year. Previous editions were held in Beijing (2006, 2008, 2010, 2015), Hong Kong (2007, 2009, 2016), Shenzhen and Hong Kong jointly (2011), and Xi'an (2014, 2017). Each edition has enjoyed active support and participation from faculty and students of the partner universities and produced good results: taking the forum as an opportunity, faculty and students in related fields have engaged in in-depth exchange and discussion and reached many collaborations. The forum's influence keeps growing, drawing support and participation from institutions in mainland China, Hong Kong, Taiwan, Macau, and overseas.
This year's International Doctoral Forum will be held in Shenzhen on December 6-7, organized by the Graduate School at Shenzhen, Tsinghua University, with two days of academic presentations and exchange. The forum themes are Multimedia, Intelligent Speech Interaction, Web Mining, and Networking and Big Data. Day 1 includes the Opening Ceremony, four Invited Talks, and visits to high-tech companies in the Science Park (UBTech and Tencent Binhai Mansion); Day 2 features three Invited Talks, one Special Session (Dialogue with AI Companies), ten Oral Sessions, and the Banquet and Best Paper Award Ceremony.
The invited speakers are Prof. Chin-Hui Lee (Georgia Tech), Prof. Guoliang Xing and Prof. Bolei Zhou (The Chinese University of Hong Kong), Prof. Lei Xie (Northwestern Polytechnical University), and Prof. Zhiyuan Liu, Prof. Jia Jia, and Prof. Chun Yuan (Tsinghua University). In the special session, AI companies will present their latest work and exchange ideas with the audience; the speakers are Song Yang (TAL AI Lab), Wenming Chen (Shenzhen eMeet), and Pengfei Liu (Shenzhen SpeechX).
Around 60-70 students and about 10 faculty members are expected to attend the forum.
Everyone is welcome to register for the opening ceremony, the invited talks, and the special session.
Opening Ceremony: December 6, 9:00-9:45
Invited Talks, Session 1: December 6, 9:45-12:15 (Invited Talks 1-4)
Invited Talks, Session 2: December 7, 8:30-10:00 (Invited Talks 5-7)
Special Session (Dialogue with AI Companies): December 7, 10:30-12:00 (Special Session Talks 1-3)
Time: December 6, 2018, 9:45-10:45
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Chin-Hui Lee, Georgia Institute of Technology
Chin-Hui Lee is a professor in the School of Electrical and Computer Engineering, Georgia Institute of Technology. Before joining academia in 2001, he had accumulated 20 years of industrial experience, ending at Bell Laboratories, Murray Hill, as a Distinguished Member of Technical Staff and Director of the Dialogue Systems Research Department. Dr. Lee is a Fellow of the IEEE and a Fellow of ISCA. He has published over 500 papers and holds 30 patents, with more than 42,000 citations and an h-index of 80 on Google Scholar. He has received numerous awards, including the Bell Labs President's Gold Award in 1998. He won the IEEE Signal Processing Society's 2006 Technical Achievement Award for "Exceptional Contributions to the Field of Automatic Speech Recognition". In 2012 he gave an ICASSP plenary talk on the future of automatic speech recognition. In the same year he was awarded the ISCA Medal for Scientific Achievement for "pioneering and seminal contributions to the principles and practice of automatic speech and speaker recognition".
Talk Title: Knowledge-rich Speech Processing: Beyond Current Deep Learning
Deep neural networks (DNNs) are becoming ubiquitous in designing speech processing algorithms. However, the robustness issues that have hindered widespread deployment of speech technologies for decades have still not been fully resolved. In this talk, we first discuss the capabilities and limitations of deep learning technologies. Next, we illustrate three knowledge-rich techniques, namely: (1) automatic speech attribute transcription (ASAT), integrating acoustic-phonetic knowledge into speech processing and computer-assisted pronunciation training (CAPT); (2) Bayesian DNNs leveraging speaker information for adaptation and system combination; and (3) DNN-based speech pre-processing, demonstrating that better acoustics leads to more accurate speech recognition. Finally, we argue that domain knowledge in speech, language and acoustics is heavily needed beyond current black-box deep learning in order to formulate sustainable white-box solutions to further advance speech technologies.
Time: December 6, 2018, 10:45-11:15
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Zhiyuan Liu, Tsinghua University
Zhiyuan Liu is an associate professor at the Department of Computer Science and Technology, Tsinghua University. He received his Ph.D. degree in Computer Science from Tsinghua in 2011. His research interests include representation learning, knowledge graphs and social computing. He has published more than 60 papers in top-tier AI and NLP conferences and journals including ACL, IJCAI and AAAI, with more than 3,500 citations according to Google Scholar.
Talk Title: Knowledge-Guided Natural Language Processing
Recent years have witnessed the advances of deep learning techniques in various areas of NLP. However, as a typical data-driven approach, deep learning suffers from the issue of poor interpretability. A potential solution is to incorporate large-scale symbol-based knowledge graphs into deep learning. In this talk, I will present recent works on knowledge-guided deep learning methods for NLP.
Time: December 6, 2018, 11:15-11:45
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Guoliang Xing, The Chinese University of Hong Kong
Guoliang Xing is currently a Professor in the Department of Information Engineering, The Chinese University of Hong Kong. Previously, he was a faculty member at Michigan State University, U.S. His research interests include Embedded AI, Edge/Fog Computing, Cyber-Physical Systems, the Internet of Things (IoT), security, and wireless networking. He received B.S. and M.S. degrees from Xi'an Jiaotong University, China, in 1998 and 2001, and a D.Sc. degree from Washington University in St. Louis in 2006. He received an NSF CAREER Award in 2010. He has received two Best Paper Awards and five Best Paper Nominations at first-tier conferences including ICNP and IPSN. Several mobile health technologies developed in his lab won Best App Awards at the MobiCom conference and were successfully transferred to industry. He received the Withrow Distinguished Faculty Award from Michigan State University in 2014. He served as General Chair for IPSN 2016 and TPC Co-Chair for IPSN 2017.
Talk Title: Edge AI for Data-Intensive Internet of Things
The Internet of Things (IoT) represents a broad class of systems that interact with the physical world by tightly integrating sensing, communication, and compute with physical objects. Many IoT applications are data-intensive and mission-critical in nature, generating significant amounts of data that must be processed within stringent time constraints. It is estimated that an autonomous vehicle can produce 0.75 GB of data each second. The existing Cloud computing paradigm is inadequate for such applications due to significant or unpredictable delays and concerns about data privacy.
In this talk, I will present our recent work on Edge AI, which aims to address the challenges of data-intensive IoT by intelligently distributing compute, storage, control and networking along the continuum from Cloud to Things. First, I will present ORBIT, a system for programming Edge systems and partitioning compute tasks among network tiers to minimize the system power consumption while meeting application deadlines. ORBIT has been employed in several systems for seismic sensing, vision-based tracking, and multi-camera 3D reconstruction. Second, I will briefly describe several systems we developed for mobile health, smart cities, volcano and aquatic monitoring, which integrate domain-specific physical models with AI algorithms. We have conducted several large-scale field deployments for these systems, including installing a seismic sensor network at two live volcanoes in Ecuador and Chile.
Time: December 6, 2018, 11:45-12:15
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Jia Jia, Tsinghua University
Jia Jia is a tenured associate professor in the Department of Computer Science and Technology, Tsinghua University. Her main research interests are affective computing and human-computer speech interaction. She has been awarded the ACM Multimedia Grand Challenge Prize (2012), Scientific Progress Prizes from the Ministry of Education as first person-in-charge (2016), an IJCAI Early Career Spotlight (2018), the ACM Multimedia Best Demo Award (2018) and ACM SIGMM Emerging Leaders (2018). She has authored about 70 papers in leading conferences and journals including T-KDE, T-MM, T-MC, T-ASLP, T-AC, ACM Multimedia, AAAI, IJCAI and WWW. She also has wide research collaborations with Tencent, SOGOU, Huawei, Siemens, MSRA, Bosch, etc.
Talk Title: Mental Health Computing via Harvesting Social Media Data
Psychological stress and depression are threatening people's health, and it is non-trivial to detect them in a timely manner for proactive care. With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social media data for stress and depression detection. In this talk, we will systematically introduce our work on stress and depression detection using large-scale benchmark datasets from real-world social media platforms, covering 1) stress-related and depression-related textual, visual and social attributes from various aspects, 2) novel hybrid models for binary stress detection, stress event and subject detection, and cross-domain depression detection, and 3) several intriguing phenomena indicating the special online behaviors of stressed and depressed people. We will also demonstrate our mental health care applications at the end of this talk.
Time: December 7, 2018, 8:30-9:00
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Lei Xie, Northwestern Polytechnical University
Lei Xie is currently a Professor in the School of Computer Science, Northwestern Polytechnical University, Xi'an, China. From 2001 to 2002, he was with the Department of Electronics and Information Processing, Vrije Universiteit Brussel (VUB), Brussels, Belgium, as a Visiting Scientist. From 2004 to 2006, he worked in the Center for Media Technology (RCMT), City University of Hong Kong. From 2006 to 2007, he worked in the Human-Computer Communications Laboratory (HCCL), The Chinese University of Hong Kong. His current research interests include audio, speech and language processing, multimedia and human-computer interaction. He is currently an associate editor of IEEE/ACM Transactions on Audio, Speech and Language Processing. He has published more than 140 papers in major journals and conference proceedings, such as IEEE TASLP, IEEE TMM, Signal Processing, Pattern Recognition, ACM Multimedia, ACL, INTERSPEECH and ICASSP.
Talk Title: Meeting the New Challenges in Speech Processing: Some NPU-ASLP Approaches
Speech has become a popular human-machine interface thanks to the rapid development of deep learning, big data and super-computing, with many applications in smartphones, TVs, robots and smart speakers. However, for wider deployment of speech interfaces, there are still many challenges to face, such as noise interference, inter- and intra-speaker variation, speaking styles and low-resource scenarios. In this talk, I will introduce several approaches recently developed by the Audio, Speech and Language Processing Group at Northwestern Polytechnical University (NPU-ASLP) to meet these challenges in speech recognition, speech enhancement and speech synthesis.
Time: December 7, 2018, 9:00-9:30
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Bolei Zhou, The Chinese University of Hong Kong
Bolei Zhou is an Assistant Professor with the Information Engineering Department at the Chinese University of Hong Kong. He received his PhD in computer science at Massachusetts Institute of Technology (MIT). His research is in computer vision and machine learning, focusing on visual scene understanding and interpretable deep learning. He received the Facebook Fellowship, Microsoft Research Fellowship, MIT Greater China Fellowship, and his research was featured in media outlets such as TechCrunch, Quartz, and MIT News.
Talk Title: Deep Visual Scene Understanding
Deep learning has made great progress in computer vision, achieving human-level object recognition. However, visual scene understanding, which aims at interpreting objects and their spatial relations in complex scene context, remains challenging. In this talk I will first introduce recent progress in deep learning for visual scene understanding. From the 10-million-image dataset Places to the pixel-level annotated dataset ADE20K, I will show the power of data and its synergy with interpretable deep neural networks for better scene recognition and parsing. Then I will talk about the trend in visual recognition from supervised learning toward more active learning scenarios. Applications including city-scale perception and spatial navigation will be discussed.
Time: December 7, 2018, 9:30-10:00
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Prof. Chun Yuan, Tsinghua University
Chun Yuan is currently an Associate Professor in the Division of Information Science and Technology, Graduate School at Shenzhen, Tsinghua University. He received his M.S. and Ph.D. degrees from the Department of Computer Science and Technology, Tsinghua University, Beijing, China, in 1999 and 2002, respectively. He worked at INRIA Rocquencourt, France, as a postdoctoral research fellow from 2003 to 2004, and at Microsoft Research Asia, Beijing, China, as an intern in 2002. His research interests include computer vision, machine learning and multimedia technologies. He is now the executive vice director of the Tsinghua-CUHK Joint Research Center for Media Sciences, Technologies and Systems.
Talk Title: Event-Level Video Captioning Based on Attentional RNN
Video understanding is a popular and challenging topic that draws jointly on natural language processing (NLP) and computer vision, and a growing number of commercial applications of online multimedia content require better automatic understanding of video events. Compared with image captioning, video captioning faces more obstacles. First, video is a more complex data form than images for extracting and using features: temporal change carries rich information, and existing methods each have shortcomings in mining it. Second, caption generation requires extracting dynamic information from videos; while some methods handle short, monotonous actions well, dealing with longer and more complex actions is the next goal. Third, new tasks such as captioning multiple events call for new algorithms that work at the event level; when generating sentences, correctly producing words such as "continue" or "another" is one sign that context information is being exploited well.
Time: December 7, 2018, 10:30-11:00
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Dr. Pengfei Liu, CTO of SpeechX
Dr. Pengfei Liu received his B.E. and M.E. degrees from East China Normal University and his Ph.D. degree from The Chinese University of Hong Kong. His research areas are natural language processing and deep learning, particularly sentiment analysis and dialog systems. He developed the SEEMGO system, which ranked 5th in the aspect-based sentiment analysis task at SemEval-2014, and received the Technology Progress Award in the JD Dialog Challenge in 2018. Dr. Liu previously worked at SAP Labs China in Shanghai, The Chinese University of Hong Kong, and the Wisers AI Lab in Hong Kong, where he led a team conducting research on deep learning-based sentiment analysis. He is currently the CTO of SpeechX.
Talk Title: Developing a Personalized Emotional Conversational Agent for Learning Spoken English
Spoken English is critical but challenging for non-native learners in China due to a lack of practice, and improving spoken English is in high demand among learners of all ages. This talk presents our ongoing project at SpeechX on developing a personalized emotional conversational agent that provides a virtual partner for language learners to practice their spoken English. The agent is personalized to each learner's English level and interests, and gives appropriate responses according to the learner's emotions. Developing the agent involves many research challenges, such as consistency and personalization in dialog systems, multimodal emotion recognition, and expressive speech synthesis. In this talk, we will briefly introduce our work on these challenges, present a preliminary proof-of-concept prototype and discuss future research directions.
Time: December 7, 2018, 11:00-11:30
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Wenming Chen, CEO of Shenzhen eMeet
Wenming Chen is the founder of Shenzhen eMeet Technology Co., Ltd. and holds an EMBA from China Europe International Business School (CEIBS). He has 18 years of experience in audio/video, intelligent speech, smart home and IoT. He worked at TCL for more than ten years, serving successively as general manager of R&D, general manager of products, general manager of the electro-acoustics business unit, and general manager of the innovation business unit. He founded Shenzhen eMeet Technology Co., Ltd. in August 2016.
Talk Title: Applications and Challenges of Intelligent Speech for Professional Business
Founded in 2016, Shenzhen eMeet Technology focuses on product innovation and intelligent services for mobile office work. Its AI-powered meeting service system, built on front-end microphone-array speech algorithms, natural language processing and network communication technologies, serves the global mobile office and smart meeting market. Starting from the application scenarios and market potential of eMeet's products and services, the talk will share the company's journey toward becoming a champion in a single intelligent speech application, and then discuss the applications and challenges of intelligent speech for professional business from two angles: the technical bottlenecks of front-end speech processing, and the challenges and opportunities in back-end language processing.
Time: December 7, 2018, 11:30-12:00
Venue: Multi-function Hall, 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University
Song Yang, Head of Speech Technology at TAL AI Lab
Song Yang has served as a senior speech engineer at AISpeech (思必馳), head of R&D at Chivox (蘇州馳聲), and head of speech technology at TAL AI Lab. His research focuses on speech recognition and speech assessment. He has long worked on machine scoring of spoken English for China's high school and college entrance examinations and on automated quality assessment of online classes, and holds several patents in this area. In 2014 he received the Wu Wenjun AI Science and Technology Progress Award from the Chinese Association for Artificial Intelligence.
Talk Title: Exploring AI Applications in Education
With the mission of "advancing education through technology", TAL Education Group works to identify where AI technology and educational scenarios meet. To address uneven teaching resources and the shortage of high-quality teachers, it is developing "AI teachers" for a variety of scenarios; to address unevenness in students' abilities, it promotes personalized teaching. It also brings different AI assessment technologies into each stage of education. For offline classes it offers smart classroom solutions, giving the classroom eyes (cameras), ears (microphones), a brain (the cloud) and other organs (clickers, iPads), and using audio and video to quantify the teaching process and evaluate classroom quality. For online classes it recognizes and analyzes class content, evaluates teacher-student interaction, and extracts features to match teachers with students, improving teaching efficiency. With AI as its engine, TAL continues to explore new models for the future of education.
Venue address: Multi-function Hall (International Conference Center), 1/F, CII Building, Graduate School at Shenzhen, Tsinghua University, 2279 Lishui Road, Nanshan District, Shenzhen
You can now also find us on Zhihu.
Go to the Zhihu homepage and search for "PaperWeekly".
Click "Follow" to subscribe to our column.
About PaperWeekly
PaperWeekly is an academic platform for recommending, interpreting, discussing and reporting on cutting-edge AI research papers. If you study or work in AI, click "交流群" (discussion group) in the backend of our official account, and our assistant will add you to the PaperWeekly discussion group.
▽ Click "Read the original article" to register now