On June 22, 2018, the "CCF Visits Universities" event came to the University of Shanghai for Science and Technology. The event invited Associate Professor Liu Haikun of the School of Computer Science, Huazhong University of Science and Technology (representing Professor Liao Xiaofei, associate dean of the school), and Dr. Wang Yu, tenured associate professor in the Department of Electronic Engineering at Tsinghua University, who gave academic talks titled "Building Heterogeneous Memory Computing Systems: Progress and Problems" and "Towards Efficient Deep Learning Processing on FPGA/Edge," respectively. The meeting was chaired by Songwen Pei, a Shanghai Pujiang Scholar and associate professor in the Department of Computer Science and Engineering.
In his talk, Professor Liu introduced the key technologies of in-memory computing and the technical challenges facing emerging memory computing systems, spanning system architecture, operating systems, programming models, data management, and hardware devices. He also presented Huazhong University of Science and Technology's latest research results in this field, including a hybrid memory system simulator and emulator, a reconfigurable hybrid memory architecture, cache replacement strategies, and cache management for stacked DRAM, along with future directions for development.
Professor Wang shared his understanding of intelligent chips and accelerated computing, and introduced his work on hardware acceleration for deep learning. He discussed the main difficulties and their solutions, including improving the parallelism of computing platforms, increasing the efficiency of memory reads and writes, and improving computational performance on sparse matrices. He also demonstrated his team's achievements in implementing voice and video stream compression with FPGAs.
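To illustrate the sparse-matrix point above: pruned neural-network weights are mostly zero, so storing only the nonzeros and skipping them during a multiply saves both memory traffic and arithmetic, which is the general idea behind sparse accelerators. The following is a minimal illustrative sketch of the standard CSR (compressed sparse row) format and matrix-vector multiply, not a description of the speaker's actual design:

```python
def dense_to_csr(matrix):
    """Convert a dense row-major matrix to CSR arrays (values, col_idx, row_ptr).

    Only nonzero entries are stored; row_ptr[r]..row_ptr[r+1] delimits row r.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr


def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x, touching only the stored nonzeros of A."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```

For a matrix that is, say, 90% zeros, the inner loop performs roughly one tenth of the multiply-accumulates of a dense product, at the cost of the indirect `col_idx` lookup; hardware accelerators hide that indirection with dedicated index-fetch logic.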
In the interactive session, teachers and students of the School of Optical-Electrical and Computer Engineering at the University of Shanghai for Science and Technology engaged actively with Liu Haikun and Wang Yu. The speakers praised the students' thoughtful, high-quality questions and answered them patiently, and the students benefited greatly. The event concluded successfully to the warm applause of the participants.
Students' impressions:
(Yanfei Ji) Professor Wang Yu explained the energy-consumption problem of deep learning step by step, from accelerating neural networks with FPGA-based accelerators to building deep learning chips and platforms. He first explained what he wanted to do, why he wanted to do it, and how to do it to the highest standard, and he paid great attention to independent thinking: knowing one's research direction clearly is a key issue. After listening to the two teachers' explanations, I hope that China will have its own simulators in the chip field and its own chips, pushing the "China Core" onto the world stage!
(Jihong Yuan) This morning I was fortunate to attend the two lectures that the CCF Visits Universities event brought us, and I benefited a great deal. Professor Liu Haikun explained heterogeneous memory architectures in detail, analyzed them thoroughly, and presented some of his research results, such as HME and MALRU. This touched me deeply and gave me a deeper, more cutting-edge understanding of heterogeneous memory systems. Professor Wang Yu analyzed deep learning accelerators on the FPGA platform and described the most advanced deep learning chips and platforms. The lectures not only broadened our technical horizons but also taught me how to develop academic thinking logically in research. I am very grateful to CCF for coming to our university; this lecture gave me much experience and enlightenment for my future academic path.
(Tianma Shen) Professor Wang Yu introduced the three five-year plans of his own research career and his work on hardware acceleration for deep learning. On the issue of energy consumption, Professor Wang emphasized the energy consumed by 0-to-1 bit flips and by data movement, which is also the starting point of his acceleration algorithms. In the experimental part, Professor Wang showed his great success in hardware acceleration: in performance and resource occupancy, his design performs comparably to the NVIDIA Titan V, a latest-generation high-performance GPU.
(Houzan Luo) Teacher Wang's presentation was truly eye-opening for me. I only recently started paying attention to AI chips, because I am running a target-detection algorithm that is very slow even on my computer's GPU, while our ultimate goal is to run it on mobile phones. Teacher Wang and his team have already achieved this, and I admire him very much. The development of artificial intelligence chips is just beginning, and we urgently need chips with strong computing power and low power consumption. Teacher Wang specializes in this field and tackles the pain points of deep learning. I hope he will achieve even greater accomplishments in the future and make greater contributions to the Chinese chip industry.