News

  • 07/2020 Ph.D. student Ruizhe Cai has passed the dissertation defense and joins Facebook Inc.
  • 07/2020 The collaborative paper on the adversarial T-shirt, led by Kaidi, has been accepted to ECCV as a Spotlight paper. Congrats!
  • 07/2020 The paper on automatic pattern generation and mobile DNN acceleration, led by Xiaolong, has been accepted to ECCV. Congrats!
  • 07/2020 The CoCoPIE acceleration framework paper “CoCoPIE: Enabling Real-Time AI on Off-the-Shelf Mobile Devices via Compression-Compilation Co-Design” has been conditionally accepted by Communications of the ACM (CACM). Congrats!
  • 06/2020 Yanzhi attends the DARPA/NSF joint RTML PI meeting and presents his work on DNN model compression and mobile acceleration.
  • 06/2020 Ph.D. student Ao Ren has passed the dissertation defense and becomes Prof. Ren.
  • 06/2020 Tianyun’s paper “StructADMM: Achieving Ultra-High Efficiency in Structured Pruning for DNNs” has been conditionally accepted by IEEE TNNLS (Impact Factor 12.18).
  • 06/2020 The real-time on-mobile 3D activity detection has been reported in Medium.
  • 06/2020 Yanzhi has received the U.S. Army Research Office Young Investigator Award.
  • 06/2020 Yanzhi has written the editorial introduction for IEEE Trans. on Computers, “Introduction to the Special Issue on Machine Learning Architectures and Accelerators”.
  • 06/2020 Yanzhi’s group has an invited paper on privacy-aware DNN weight pruning in GLSVLSI 2020.
  • 06/2020 The CoCoPIE acceleration framework enables, for the first time, on-mobile real-time acceleration of 3D activity detection networks (e.g., C3D, R(2+1)D, S3D) using off-the-shelf mobile devices. It achieves just 9ms per frame without accuracy loss, a 30X speedup over current frameworks. Please see our demos.
  • 06/2020 Yanzhi presents his work on DNN model compression and mobile acceleration at the HealthDL Workshop co-located with MobiSys 2020.
  • 06/2020 Yanzhi has received an NSF Award on design automation of superconducting electronics. Thanks NSF!
  • 06/2020 The CoCoPIE system and demonstration received an award in the IEEE ISLPED Design Contest 2020.
  • 06/2020 The AQFP-based DNN acceleration framework has been reported in MIT TR China, also in Sohu (搜狐), NetEase (网易), Zhihu (知乎), Tencent (腾讯快报), MyZaker, TWGreatDaily, TianYanCha, KKNews, CANNews, etc.
  • 05/2020 “CoCoPIE: A software solution for putting real artificial intelligence in smaller spaces” reported in W&M News, also in TechXplore.
  • 05/2020 Yanzhi presents his work on DNN model compression and mobile acceleration to Vivo Inc.
  • 05/2020 The CoCoPIE acceleration framework has been reported in Xinzhiyuan (新智元), also cited in Tencent (腾讯快报), Sohu (搜狐). Another report is in Jiqizhixin (机器之心), also cited in Sina (新浪财经), thepaper.cn (澎湃).
  • 05/2020 The adversarial T-shirt work has been reported in “This ugly T-shirt makes you invisible to facial recognition tech” by Wired (UK), and also in Dazed, MIT News, LatestTechNews (UK), TechPowerNews, NEU News, and HeadTopics (UK).
  • 05/2020 The CoCoPIE Bilibili Channel is open here. Please check it out and share your feedback.
  • 04/2020 The CoCoPIE system and demonstration paper “Towards Real-Time DNN Inference on Mobile Platforms with Model Pruning and Compiler Optimization” has been accepted in IJCAI 2020 (proceedings paper in the demonstration track). It introduces the CoCoPIE mobile acceleration of three key applications: automatic style transfer, super-resolution, and auto-coloring.
  • 04/2020 The CoCoPIE team and framework have been reported by Medium, and also in WebSystemer, MC.AI.
  • 04/2020 The CoCoPIE YouTube Channel is open here. Please check it out and share your feedback!
  • 04/2020 The key conceptual paper of CoCoPIE, “CoCoPIE: Making Mobile AI Sweet as PIE — Compression-Compilation Co-Design Goes a Long Way”, is on arXiv. It introduces our key idea of compression-compilation co-design of DNNs, achieving real-time execution of the most representative DNNs on off-the-shelf mobile devices and outperforming existing frameworks by up to 180X. The pure software-based CoCoPIE framework even outperforms representative ASIC and FPGA DNN solutions in terms of energy efficiency and performance.
  • 04/2020 Yanzhi receives DARPA award on deep neural networks and acceleration for wireless networking and signal processing applications. Thanks DARPA!
  • 04/2020 The postdoc/visiting scholar Chen Pan will join the Dept. of CSE at Texas A&M Corpus Christi as a tenure-track assistant professor.
  • 04/2020 Yanzhi will serve as PC Member/Reviewer for NeurIPS 2020.
  • 04/2020 Yanzhi will serve as PC Member for ICCAD 2020.
  • 04/2020 First paper acceptance in ICS 2020. Congrats to Runbin, Peiyan, and Tong! Our paper “CSB-RNN: A faster-than-realtime RNN acceleration framework with compressed structured blocks” presents a novel FPGA-based acceleration framework for compressed RNNs with structured blocks, achieving beyond-real-time RNN acceleration and outperforming the state of the art by multiple times.
  • 04/2020 Two Ph.D. students, Yanyue Xie and Qing Jin, will join Yanzhi’s research group in Fall 2020.
  • 04/2020 Our work “Non-Structured DNN Weight Pruning Considered Harmful” is conditionally accepted by IEEE TNNLS (Impact Factor 12.18). It draws a strong conclusion that non-structured DNN weight pruning is not preferred on any platform, and we suggest discontinuing work on sparsity-aware DNN acceleration with non-structured weight pruning.
  • 04/2020 Our collaborative work “A Survey of Stochastic Computing Neural Networks for Machine Learning Applications” is conditionally accepted by IEEE TNNLS (Impact Factor 12.18). Thanks to Prof. Jie Han’s group at the University of Alberta for leading the effort!
  • 04/2020 The collaborative paper “AntiDOte: Attention-based dynamic optimization for neural network runtime efficiency” received a Best Paper nomination at DATE 2020.
  • 04/2020 Yanzhi will serve as Guest Editor for TCAS-II Special Issue.
  • 04/2020 Receives equipment support from the Extreme Science and Engineering Discovery Environment (XSEDE).
  • 03/2020 Yanzhi will serve as organizer (together with Prof. Zhenman Fang at Simon Fraser University) of ROAD4NN: Research Open Automatic Design for Neural Networks workshop with DAC 2020.
  • 03/2020 Receives funding from NSF CMMI: Physics-Reinforced Deep Learning for Structural Metamodeling. Thanks NSF! Thanks to Prof. Hao Sun for leading the effort!
  • 03/2020 Ph.D. student Ao Ren has accepted the offer as an Assistant Professor in Department of Electrical and Computer Engineering at Clemson University, starting Fall 2020.
  • 02/2020 Yanzhi receives funding award from MathWorks. Thanks MathWorks!
  • 02/2020 One collaborative work on graph processing has been accepted in PLDI 2020. Thanks to Xuehai’s group (USC) for leading the effort!
  • 02/2020 Yanzhi presents his work on DNN model compression and mobile acceleration at Semiconductor Research Corporation project meeting at Santa Clara, CA.
  • 02/2020 Yanzhi presents his work on DNN model compression and acceleration at SRC Inc., Syracuse NY.
  • 02/2020 Yanzhi presents his work on DNN model compression and acceleration at MathWorks.
  • 02/2020 Yanzhi presents his work on DNN model compression and mobile acceleration at the ECE Forum of Northeastern University.
  • 02/2020 Five papers accepted in DAC with co-authors from our group (but Yanzhi is only on two…). These papers include the collaborative papers “PCNN: Pattern-based fine-grained regular pruning towards optimizing CNN accelerators” and “PIM-Prune: Fine-grained DCNN pruning for crossbar-based process-in-memory architecture”, “3D CNN acceleration on FPGA using hardware-aware pruning” led by Mengshu, “RTMobile: Beyond real-time mobile acceleration for RNNs for speech recognition” led by Peiyan, and “FTDL: A tailored FPGA-overlay for deep learning with high scalability” led by Runbin.
  • 01/2020 Three presentations from Yanzhi’s group at the BARC workshop: pattern-based pruning and compression-compilation co-design, privacy-aware weight pruning of DNNs, and block-based column-row pruning with FPGA acceleration.
  • 01/2020 Boston Area Computer Architecture (BARC) Workshop has been successfully held at Egan Research Center, Northeastern University. Website: https://bostonarch.github.io/2020/.
  • 01/2020 “Deep neural networks are coming to your phone. Here’s how that could change your life” reported in News@Northeastern, TechXplore, and FlipBoard.
  • 01/2020 Our work on AutoCompress (Automatic DNN structured pruning for ultra-high compression rates, paper here) gets reported in Jiqizhixin (机器之心), Xinzhiyuan (新智元), Qbitai (量子位), DiDi news (滴滴出行), also cited in Toutiao (今日头条) Zhihu (知乎), Sina (新浪), thepaper.cn (澎湃), Tencent (腾讯快报), Tech.ifeng (凤凰网科技频道), Sina (新浪科技), Sohu (搜狐), CSDN Blog, cocook, Linkresearcher (领研), shangyexinzhi (商业新知), gooyi, Baybox
  • 01/2020 Our work “3D capsule networks for object classification with weight pruning” has been accepted in IEEE Access.
  • 01/2020 Collaborative paper on stochastic computing-based neural network acceleration in near-threshold computing accepted in ISCAS 2020.
  • 01/2020 Yanzhi serves as guest editor of CCF Trans. on High Performance Computing – Special Issue on Disruptive Computing Technologies.
  • 01/2020 Yanzhi’s work on speeding up AI covered in USC Viterbi Communications.
  • 01/2020 Our collaborative work “PatDNN: Achieving real-time DNN execution on mobile devices with pattern-based weight pruning” has been accepted by ASPLOS 2020. PatDNN achieves by far the fastest DNN execution using mobile devices, potentially real-time execution of all DNNs!
  • 01/2020 Our work on PCONV (model compression and compiler co-optimization for DNNs, paper here) gets reported in Xinzhiyuan (新智元), also cited in Liaoba (中国联通), sina (新浪财经), sina tech (新浪科技), NetEase (网易), zhuanzhi.ai (专知), gmx (共鸣新闻), xueqiu (雪球投资), and in the efficient DNN list here.
  • 01/2020 Yanzhi will chair BARC (Boston Area Computer Architecture Workshop) 2020, held at Northeastern University on Jan. 31.
  • 01/2020 Yanzhi serves as track chair for GLS-VLSI 2020.
  • 01/2020 Yanzhi will serve as the committee member in DAC’s latest breaking results, 2020.
  • 12/2019 Successfully held a workshop on DNN model compression, compiler optimization, and FPGA acceleration with over 50 attendees. Presenters/visitors are from Northeastern University, Boston University, MIT-IBM research, College of William and Mary, U. Iowa, MathWorks, Chinese Academy of Sciences ICT, UIUC, U. Notre Dame, USC, U. Connecticut, Hong Kong U., Syracuse Univ., U. Pitt, etc.
  • 12/2019 Our work on the adversarial T-shirt to evade neural network detection has been featured in VentureBeat, The Register, NEU News, Boston Globe, Import AI, Quartz, ODSC, and VICE, and has been cited/quoted by over 120 media outlets.
  • 12/2019 Yanzhi attended ColdFlux Meeting at Yokohama National University, and gave two presentations on AQFP-based deep learning acceleration and AQFP placement and routing.
  • 12/2019 Congratulations to Pu, whose paper on adversarial robustness has been accepted in ICLR 2020.
  • 12/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at National Tsing Hua University, Taiwan.
  • 12/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Shenzhen Institutes of Advanced Technology.
  • 12/2019 Yanzhi visited and gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Tencent Inc.
  • 12/2019 Yanzhi attended Embedded AI Summit and gave an invited presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” as well as poster presentation on PCONV.
  • 12/2019 Invited lecture on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Course Embedded Machine Learning at Dept. of ECE at Rice University.
  • 12/2019 Invited lecture on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Course Introduction to Computer Engineering at Northeastern University.
  • 12/2019 Yanzhi will become PC member at IJCAI 2020.
  • 11/2019 Our work on protecting neural networks with hierarchical random switching (IJCAI 2019) has been featured in TechTalks, Medium, and the IBM Research Blog, and has been cited by 20 other media outlets.
  • 11/2019 Yanzhi has received the Massachusetts Acorn Innovation Award. Thanks MA Technology Transfer Center!
  • 11/2019 Six papers accepted by AAAI 2020 from Yanzhi’s and Xue’s groups! These include “PCONV: the missing but desirable sparsity in DNN weight pruning for real-time execution on mobile device”, which achieves by far the fastest mobile DNN execution (real-time for almost all DNNs); “AutoCompress: an automatic DNN structured pruning framework for ultra-high compression rates”, which achieves by far the highest (structured) DNN compression rates; “DARB: a density-adaptive regular-block pruning for deep neural networks”; and “Embedding compression with isotropic iterative quantization”.
  • 11/2019 One paper accepted by FPGA 2020, “FTDL: An FPGA-tailored architecture for deep learning applications”.
  • 11/2019 Yanzhi attended NSF CPS PI Meeting at Arlington, VA.
  • 11/2019 Recent work on pattern-based pruning (PCONV) and real-time mobile acceleration of DNNs has been presented by Xiaolong at the International Workshop on Highly Efficient Neural Processing (HENP) in NYC, the Cyber-Physical Systems Security Workshop in RI, and the Workshop on MLIR for HPC in Atlanta, GA, and by Zhengang at the HALO workshop co-located with ICCAD.
  • 11/2019 Recent work on AutoCompress has been presented by Zhengang at HALO workshop with ICCAD.
  • 11/2019 Yanzhi will become track chair of GLS-VLSI 2020.
  • 11/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Peking University, Center for Energy-efficient Computing and Applications (CECA).
  • 11/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Beijing Institute of Technology, Computer Science Department.
  • 11/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Beihang University, Computer Science Department.
  • 11/2019 Yanzhi gave an invited presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Asilomar Conference, California.
  • 11/2019 Two collaborative papers accepted by DATE 2020: “When sorting network meets parallel bistream: A fault-tolerant parallel ternary neural network (TNN) accelerator based on stochastic computing”, and “AntiDOte: Attention-based dynamic optimization for neural network runtime efficiency”.
  • 10/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at University of California, Santa Barbara.
  • 10/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at AI Research Seminar in CS Department, Boston University.
  • 10/2019 Yanzhi serves as Data Science Program search committee member.
  • 10/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at HENP workshop at ESWEEK, New York City.
  • 10/2019 Presentation on “Deep compressed pneumonia detection for low-power embedded devices” at Hardware-Aware Learning workshop at MICCAI 2019.
  • 10/2019 Yanzhi visited Shanghai Jiaotong University (groups of Prof. Li Jiang and Prof. Weikang Qian).
  • 10/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at CS Department Seminar at Shenzhen University.
  • 10/2019 Yanzhi visits Moffet Inc. and gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices”.
  • 10/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at CS Department Seminar at Chinese University of Hong Kong.
  • 10/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at CS Department Seminar at City University of Hong Kong.
  • 10/2019 Yanzhi visited USC (Prof. Massoud Pedram) and talked about the recent work “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices”.
  • 10/2019 Yanzhi presents his work on DNN model compression and acceleration at the ECE Forum of Northeastern University.
  • 09/2019 Yanzhi gave a presentation on “Model Compression vs. Robustness of DNNs — Can We Have Both?” at Foundations of Safe Learning workshop organized by MIT-IBM Watson AI Lab.
  • 09/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at ECE Graduate Seminar at University of Rhode Island.
  • 09/2019 Yanzhi visited Brown University (Prof. Iris Bahar) and talked about the recent work “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices”.
  • 09/2019 Receives funding from NSF RTML: RTML: Large: Efficient and adaptive real-time learning for next generation wireless systems. Thanks NSF!
  • 09/2019 Our collaborative work on AQFP energy-efficiency analysis and deep learning acceleration has been reported in ScienceDaily and AAAS, and has been cited by over 20 media outlets.
  • 09/2019 Yanzhi will serve as TPC member in DAC 2020.
  • 09/2019 Yanzhi will serve as TPC member in ISQED 2020.
  • 09/2019 Yanzhi will serve as reviewer for CVPR 2020.
  • 09/2019 Yanzhi serves as reviewer for HPCA 2020.
  • 09/2019 Yanzhi gave a presentation on “From 7,000X model compression to 100X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Air Force Research Lab, Rome, NY.
  • 09/2019 Yanzhi visited Syracuse University and presented the recent work on mobile acceleration of DNNs.
  • 09/2019 Ruizhe’s paper on AQFP superconducting circuit buffer and splitter insertion has been accepted by ICCD 2019.
  • 09/2019 Yanzhi gave a presentation on “From 5,000X model compression to 50X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at CASPA (Chinese American Semiconductor Professional Association).
  • 09/2019 Yanzhi visited San Jose Campus of Northeastern University and FutureWei.
  • 09/2019 Kaidi’s paper accepted by NeurIPS, 2019.
  • 09/2019 Student Yifan Gong receives Dean’s Fellowship at NEU.
  • 08/2019 The collaborative work with IBM has been reported by TowardsDataScience.
  • 08/2019 Our collaborative work “GraphQ: Scalable PIM-based graph processing” has been accepted in MICRO 2019. (acceptance rate 18.6%)
  • 08/2019 Two papers accepted by ASP-DAC 2020: one on a structured DNN pruning/quantization and mapping framework for memristor crossbar arrays, the other on 3D-printed object detection and dataset generation.
  • 08/2019 Receives $133K supplement funding from NSF CPS Medium: Enabling multimodal sensing, real-time onboard detection and adaptive control for fully autonomous unmanned aerial systems. Thanks NSF!
  • 08/2019 Receives funding from NSF SPX on FASTLEAP: FPGA based compact deep learning platform. Thanks NSF!
  • 08/2019 Receives funding from NSF CNS on Content-Based Viewport Prediction Framework for Live Virtual Reality Streaming. Thanks NSF!
  • 08/2019 Tianyun’s paper “Generation of Low Distortion Adversarial Attacks via Convex Programming” has been accepted by ICDM 2019 (acceptance rate 18.5%).
  • 08/2019 Tianyun’s paper “Generation of Low Distortion Adversarial Attacks via Convex Programming” has received Best Paper Nomination (finally Top 3) in KDD 2019 AdvML workshop.
  • 08/2019 Yanzhi will serve as Guest Editor of IEEE Trans. on Computers, Special Issue on “Emerging Technologies and Trends in Machine Learning Architectures”.
  • 08/2019 Our collaborative paper “Memory augmented deep recurrent neural network for video question answering” has been accepted by IEEE Trans. on Neural Networks and Learning Systems (TNNLS) (Impact Factor 12.18).
  • 08/2019 Yanzhi receives funding from SRC (Semiconductor Research Corporation) on autonomous UAVs using deep learning techniques and hardware acceleration. Thanks SRC!
  • 07/2019 Our work “Deep compressed pneumonia detection for low-power embedded devices,” has been accepted for presentation in Hardware-Aware Machine Learning workshop in MICCAI 2019.
  • 07/2019 Yanzhi gave a presentation on “From 5,000X model compression to 50X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at AliBaba.
  • 07/2019 Yanzhi gave a presentation on “From 5,000X model compression to 50X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Facebook.
  • 07/2019 Yanzhi gave a presentation on “From 5,000X model compression to 50X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Achronix Inc.
  • 07/2019 Our collaborative paper “Design of atomically-thin-body field-effect sensors and pattern recognition neural networks for ultra-sensitive and intelligent trace explosive detection” has been accepted by 2D Materials (Impact Factor 6.9).
  • 07/2019 Yanzhi gave a presentation on “From 5,000X model compression to 50X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at Computer Science Department of UCLA.
  • 07/2019 Yanzhi gave two presentations on “From 5,000X model compression to 50X acceleration: Achieving real-time execution of ALL DNNs on mobile devices” at a seminar/group meeting at the EE Department of USC.
  • 07/2019 Yanzhi attended the Program Review Meeting of IARPA at USC.
  • 07/2019 Our work “Adversarial robustness vs. model compression, or both?” has been accepted by ICCV 2019. It is the first work to address the tradeoff between adversarial robustness and DNN weight pruning. Our enhanced ADMM algorithm improves robustness under the same compression rate.
  • 07/2019 Yanzhi will serve as TPC member of SRC at MICRO 2019.
  • 07/2019 Yanzhi will serve as TPC member of ISQED 2019.
  • 07/2019 The second Ph.D. student, Ning Liu, will start as a superstar employee at DiDi AI Research (DiDi Inc.).
  • 07/2019 Our work “Non-Structured DNN Weight Pruning Considered Harmful” is on arXiv. It integrates our most recent progress on DNN weight pruning and weight quantization, and draws a strong conclusion that non-structured DNN weight pruning is not preferred on any platform. We suggest discontinuing work on sparsity-aware DNN acceleration with non-structured weight pruning.
  • 07/2019 Yanzhi receives FPGA testing, toolset, and framework support from Achronix Inc. Thanks Achronix!
  • 07/2019 The Google Equipment Award (student Fuming Guo) has been expanded to a total of 2K TPU-V3 until Jan. 2020. Thanks Google!
  • 07/2019 Yanzhi presents an invited paper on IDE development, logic synthesis, and buffer/splitter insertion for AQFP superconducting technology at ISVLSI 2019, Miami.
  • 07/2019 Our collaborative paper with YNU “AQFP: Towards building extremely energy-efficient circuits and systems” has been accepted by Nature Scientific Reports.
  • 06/2019 Yanzhi serves as session chair of DAC 2019.
  • 06/2019 Yanzhi presents the EDA tool for superconducting electronics in DAC 2019 Birds-of-a-Feather meeting “Open-Source Academic EDA Software”.
  • 06/2019 Yanzhi organizes a panel on the superconducting EDA in SLIP workshop collocated with DAC 2019.
  • 06/2019 Student Geng Yuan and Yanzhi help to organize the System Design Contest at DAC 2019. Geng Yuan receives 2019 System Design Contest Special Service Recognition Award.
  • 06/2019 Receives a Google Equipment Award (student Fuming Guo), allowing for usage of 110 TPU-V2 and a cluster of 512 TPU-V3. Thanks Google!
  • 06/2019 Yanzhi serves as an organizer of the HALO workshop co-located with ICCAD 2019.
  • 06/2019 Yanzhi attended the Third ACSIC Symposium on Frontiers in Computing (SOFC) and organized a panel.
  • 06/2019 Yanzhi presented two posters at SOFC: “26ms inference time for ResNet-50” and “Non-structured DNN weight pruning considered harmful”.
  • 06/2019 Ph.D. student Caiwen Ding has passed the Ph.D. defense, and formally becomes Dr. Ding or Prof. Ding.
  • 06/2019 Yanzhi visited the Center for Energy-Efficient Computing and Applications at Peking University and gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?”.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Peking University (School of Software & Microelectronics).
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Qingwei Intelligent Co.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Tsinghua University (Institute for Interdisciplinary Information Sciences).
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Tsinghua University.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at the Institute of Computing Technology (ICT), Chinese Academy of Sciences.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at ShanghaiTech University.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Shanghai Jiaotong University.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at East China Normal University.
  • 06/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Yokohama National University, Japan.
  • 05/2019 Ph.D. student Caiwen Ding has presented around 20 times on block-circulant matrix based deep learning acceleration at different universities. He is delighted to join the Dept. of CSE at University of Connecticut as a Tenure-Track Assistant Professor.
  • 05/2019 Two collaborative works have been accepted in IJCAI 2019 (acceptance rate 17.8%), including a hierarchical random switching scheme for better defense over DNN adversarial attacks, and interpreting and evaluating neural network robustness.
  • 05/2019 Our collaborative work “26ms inference time for ResNet-50: Towards real-time execution of all DNNs on smartphone” appeared at an ICML 2019 workshop. We achieve the fastest DNN execution on mobile devices, multiple times faster than the state of the art, with the help of compilers.
  • 05/2019 Our work “Toward extremely low bit and lossless accuracy in DNNs with progressive ADMM” appeared at an ICML 2019 workshop. For the first time, we demonstrate that fully-binarized DNNs (with all layer weights binarized) can be lossless in accuracy on the MNIST and CIFAR-10 datasets. We also demonstrate the first fully-binarized ResNet on the ImageNet dataset.
  • 05/2019 Our work “ResNet can be pruned 60X: Introducing network purification and unused path removal (P-RM) after weight pruning” is on arXiv. It extends our ADMM-based structured pruning for DNNs, achieving the best weight pruning rates without accuracy loss. For ResNet on the CIFAR-10 dataset, it achieves an unprecedented structured weight pruning rate of 59.8X, a 35.4X improvement over the competing method!
  • 05/2019 An invited paper, “IDE development, logic synthesis and buffer/splitter insertion framework for AQFP superconducting circuits”, at ISVLSI 2019.
  • 05/2019 Our collaborative paper on superconducting electronics for deep learning acceleration is accepted by ISEC 2019, the premier conference on superconducting electronics.
  • 05/2019 Yanzhi serves as session chair of GLSVLSI 2019.
  • 05/2019 Yanzhi serves as TPC member of ICCD 2019.
  • 05/2019 Yanzhi attended the IARPA SuperTools TEM-4 Meeting at Synopsys Headquarters.
  • 05/2019 Receives funding from SRC on real-time DNN deployment on UAVs. Thanks SRC!
  • 05/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at Northwestern University.
  • 05/2019 Yanzhi gave an invited talk on “ADMM-based weight pruning for real-time deep learning acceleration on mobile devices” at GLS-VLSI.
  • 04/2019 Yanzhi serves as TPC member of ICCAD 2019.
  • 04/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at the University of Miami.
  • 04/2019 Yanzhi gave a presentation on “5,000X model compression in DNNs; But, is it truly desirable?” at the University of Central Florida.
  • 04/2019 Yanzhi gave a presentation on “Towards 1,000X model compression in deep neural networks” at Wayne State University.
  • 03/2019 Our paper “A stochastic computing based deep learning framework using adiabatic quantum-flux-parametron superconducting technology” has been accepted in ISCA 2019 (acceptance rate 17.0%). This work is in collaboration with Olivia Chen and Prof. Yoshikawa at Yokohama National University, and testing was performed at YNU. It is the first demonstration of deep learning acceleration using superconducting technology. Functionality and energy efficiency have been verified through tapeout testing. At this time, it achieves the highest energy efficiency: 4 to 5 orders of magnitude higher than CMOS-based implementations.
  • 03/2019 Three collaborative works have been accepted in CVPR 2019 (acceptance rate 25.2%), including (i) a DNN-oriented JPEG compression technique against adversarial examples, (ii) multi-channel attention selection GAN for cross-view image translation (oral presentation, acceptance rate 6%), and (iii) machine vision guided 3D medical image compression.
  • 03/2019 Our work “Progressive DNN compression: A key to achieve ultra-high weight pruning and quantization rates using ADMM” is on arXiv. It extends our ADMM-based weight pruning method to a progressive framework, achieving the best weight pruning rates without accuracy loss: 348X on LeNet-5, 44X on AlexNet, 34X on VGGNet, and 9.2X on ResNet-50 (the latter three on the ImageNet dataset).
  • 03/2019 Our work “StructADMM: A systematic, high-efficiency framework of structured weight pruning for DNNs” is on arXiv. It targets structured weight pruning for DNNs and currently achieves the highest performance on representative DNNs, 3X to 28X better than competing methods.
  • 03/2019 Our work “Second rethinking of network pruning in the adversarial setting” is on arXiv. It shows that the robustness of DNNs may be degraded after network pruning, and develops effective methods to mitigate this adversarial effect.
  • 03/2019 Our collaborative work “Experience-driven congestion control: When multi-path TCP meets deep reinforcement learning” has been accepted in IEEE JSAC.
  • 03/2019 Yanzhi serves as publicity chair of NanoArch 2019.
  • 03/2019 Receives funding from NSF IRES on U.S.-Japan International Research Experiences. Thanks NSF!
  • 03/2019 Receives funding from DiDi Inc. Thanks!
  • 03/2019 Yanzhi gave a presentation on “Towards 1,000X model compression in deep neural networks” at North Carolina State University.
  • 03/2019 Yanzhi gave a presentation on “Towards 1,000X model compression in deep neural networks” at Virginia Tech.
  • 03/2019 Yanzhi gave a presentation on “Towards 1,000X model compression in deep neural networks” at the College of William and Mary.
  • 03/2019 Yanzhi gave a presentation on “Towards 1,000X model compression in deep neural networks” at WarnerMedia Inc.
  • 02/2019 Two collaborative works on robustness and security of DNNs have been accepted in DAC 2019 (acceptance rate 22%). The first work, “Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks”, describes a new attack on deep learning acceleration hardware. The second work, “A Fault-Tolerant Neural Network Architecture”, is on fault-tolerant DNNs.
  • 02/2019 One collaborative work “Structured adversarial attack: towards general implementation and better interpretability” has been accepted in ICLR 2019. It investigates the interpretability of adversarial attacks on DNNs.
  • 02/2019 Three collaborative works have been accepted in GLSVLSI 2019, including a hardware simulator of DNN acceleration, a majority logic synthesis framework of AQFP superconducting circuits, and an invited paper on ADMM-based weight pruning of DNNs for mobile device accelerations.
  • 02/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at the Northeastern University Seminar Series.
  • 02/2019 Yanzhi gave an invited talk, “A systematic framework of model compression and adversarial security of deep learning systems,” at the International Workshop on Built-in Security: Architecture, Chip, and System, co-held with HPCA 2019.
  • 02/2019 Yanzhi gave a keynote talk, “A systematic framework of model compression of deep learning systems for autonomous devices,” at the International Workshop on Architectures and Systems for Autonomous Devices (ASAD), co-held with HPCA 2019.
  • 02/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at University of Texas Austin.
  • 01/2019 The collaborative work “A General Framework to Map Neural Networks onto Neuromorphic Processor” is accepted as an invited paper in ISQED 2019.
  • 01/2019 Yanzhi serves as the TPC member of IJCAI 2019.
  • 01/2019 Yanzhi serves as publicity chair of ISVLSI 2019.
  • 01/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Rice University.
  • 01/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Spectral MD Inc.
  • 01/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at University of Texas Dallas.
  • 01/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Texas A&M University.
  • 01/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Texas State University.
  • 01/2019 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” in a web seminar given to Inspirit IoT Inc.
  • 01/2019 Student Shaokai Ye gave two presentations on ADMM-based DNN model compression during his internship at SenseTime Inc.
  • 12/2018 Our paper “ADMM-NN: An algorithm-hardware co-design framework of DNNs using ADMM” has been accepted by ASPLOS 2019 (acceptance rate 17.4%). It presents a systematic, unified framework of DNN weight pruning and quantization using the powerful optimization tool ADMM. At this time, it achieves the highest weight storage reduction, up to 1,910X, for DNNs, almost two orders of magnitude higher than competing methods.
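The ADMM-based pruning framework alternates between training the weights and projecting them onto a sparsity constraint; the projection simply keeps the largest-magnitude entries. A minimal illustrative sketch of that projection step (function names and the toy setup are ours, not the paper's code):

```python
import numpy as np

def project_sparse(w, k):
    """Euclidean projection onto the set {at most k nonzero entries}:
    keep the k largest-magnitude weights and zero out the rest."""
    z = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]   # indices of the k largest |w_i|
    z[keep] = w[keep]
    return z

# In the ADMM loop, this projection is the Z-update:
#   W-step: SGD on loss(W) + (rho/2) * ||W - Z + U||^2
#   Z-step: Z = project_sparse(W + U, k)
#   U-step: U = U + W - Z
w = np.array([0.1, -2.0, 0.05, 1.5, -0.3])
print(project_sparse(w, 2))   # keeps -2.0 and 1.5, zeros the rest
```

The non-convex L0 constraint is handled entirely by this projection, which is what makes the ADMM decomposition attractive for pruning.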
  • 12/2018 Our paper “Deep reinforcement learning for dynamic treatment regimes on medical registry data” has been accepted by Nature Scientific Reports.
  • 12/2018 Yanzhi was a 2018 MIT Technology Review TR35 China finalist.
  • 12/2018 Zhe Li (primary advisor Prof. Qinru Qiu) graduated and joined Google AI Perception.
  • 12/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Shanghai Jiaotong University.
  • 12/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Capital Normal University.
  • 12/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Beijing Institute of Technology.
  • 12/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Beihang University (BUAA).
  • 12/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Peking University (School of Software & Microelectronics).
  • 12/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Tsinghua University.
  • 11/2018 Our paper “REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs” has been accepted by FPGA 2019 (acceptance rate 25%). It presents a novel quantization scheme to quantize each weight to a sum of two power-of-2 numbers. At this time, it represents the highest performance and energy efficiency in FPGA implementation of (YOLO) object detection tasks.
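The REQ-YOLO scheme quantizes each weight to a sum of two power-of-2 numbers, so a multiplication reduces to two shifts and an add. The idea can be sketched with a simple greedy projection (the exponent range and function names below are our illustrative choices, not the paper's):

```python
import numpy as np

def nearest_pow2(x, lo=-8, hi=0):
    """Nearest signed power of two 2^e with e in [lo, hi], or 0."""
    cands = np.array([0.0] + [s * 2.0**e for s in (1, -1) for e in range(lo, hi + 1)])
    return cands[np.argmin(np.abs(cands - x))]

def quantize_sum_two_pow2(w):
    """Greedy projection: pick the nearest power-of-2 term, then a second
    power-of-2 term for the residual."""
    t1 = nearest_pow2(w)
    t2 = nearest_pow2(w - t1)
    return t1 + t2

print(quantize_sum_two_pow2(0.3))   # 0.3125 = 0.25 + 0.0625
```

With both terms restricted to powers of 2, the hardware multiplier in each MAC unit can be replaced by shift-and-add logic, which is where the FPGA efficiency gain comes from.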
  • 11/2018 Yanzhi’s first authored paper “Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks” has been accepted by AAAI 2019 (acceptance rate 16.2%).
  • 11/2018 The collaborative work “CircConv: A Structured Convolution with Low Complexity” has been accepted by AAAI 2019 (acceptance rate 16.2%). This work improves the block-circulant-based deep learning acceleration framework at the algorithm level.
  • 11/2018 The collaborative work “A 65nm 0.39-to-140.3TOPS/W 1-to-12b Unified Neural Network Using Block-Circulant-Enabled Transpose-Domain Acceleration with 8.1X Higher TOPS/mm2 and 6T HBST-TRAM-based 2D Data-Reuse Architecture” has been accepted in ISSCC 2019. It is the first solid-state tapeout of block-circulant based DNN acceleration framework.
  • 11/2018 Our paper “E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs” has been accepted in HPCA 2019 (acceptance rate 19.7%). This work improves block-circulant-based deep learning acceleration of RNNs through both hardware implementation and ADMM-based algorithmic improvements. At this time, it represents the highest performance and energy efficiency in FPGA implementation of RNNs.
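The block-circulant representation behind CircConv, the ISSCC chip, and E-RNN compresses a weight matrix into circulant blocks; each block's matrix-vector product then costs O(n log n) via the FFT instead of O(n^2). A minimal sketch for a single circulant block, assuming the block is defined by its first column (a standard convention, not necessarily the papers'):

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C x, where C is the circulant matrix whose first column is c.
    By the convolution theorem, C x is the circular convolution of c and x,
    computed in O(n log n) with the FFT instead of O(n^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Sanity check against the explicit circulant matrix C[i, j] = c[(i - j) % n].
c = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([0.5, -1.0, 2.0, 0.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ x, circulant_matvec(c, x))
```

Storage also drops from n^2 to n per block, which is the source of the compression ratios these works report.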
  • 11/2018 Yanzhi serves as track chair of GLSVLSI 2019.
  • 11/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at Boston University.
  • 11/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at the New England Computer Vision (NECV) Workshop.
  • 11/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” in a seminar given to IBM Research Cambridge.
  • 11/2018 Yanzhi gave a presentation about “Towards 1,000X model compression in deep neural networks” at the Hardware and Algorithms for Learning On-a-Chip (HALO) Workshop, co-held with ICCAD 2018.
  • 09/2018 Two collaborative works on robustness and security of DNNs have been accepted as invited papers in ASP-DAC 2019. The two works are “ADMM attack: an enhanced adversarial attack for deep neural networks with undetectable distortions”, and “A system-level perspective to understand the vulnerability of deep learning systems”.
  • 09/2018 The collaborative work “Reinforced Adversarial Attacks on Deep Neural Networks using ADMM” has been accepted by GlobalSIP 2018. At this time, it is the strongest white-box adversarial attack generation technique using the advanced optimization technique ADMM.
  • 09/2018 Yanzhi serves as TPC member of DAC 2019.
  • 09/2018 Yanzhi serves as ERC member of ASPLOS 2019.
  • 09/2018 The NSF ASIC I/UCRC Center has been awarded. Thanks NSF!
  • 09/2018 One invited paper on deep learning cybersecurity has been accepted in ASP-DAC 2018.
  • 09/2018 One invited paper on artificial intelligence on hardware platforms has been accepted in ISQED 2018.
  • 09/2018 Yanzhi gave an invited presentation about energy-efficient deep learning systems at University of Southern California.
  • 09/2018 Yanzhi attended the IARPA Annual PI Meeting at USC.
  • 08/2018 Yanzhi will serve as Program Committee member at International Conference on Reconfigurable Computing and FPGAs (ReConFig 2018).
  • 08/2018 Yanzhi will serve as Program Committee member in ISQED 2019.
  • 08/2018 Yanzhi will give an invited presentation on ADMM-based DNN model compression and efficient implementation in the HALO workshop co-located with ICCAD 2018.
  • 07/2018 Yanzhi will serve as committee member/reviewer for ICLR 2019.
  • 07/2018 The Phase I I/UCRC Center: Center for Alternative Sustainable and Intelligent Computing (ASIC) has been recommended by NSF. Thanks NSF!
  • 07/2018 A collaborative paper on fingerprinting intelligent 3D printers accepted in CCS 2018.
  • 07/2018 A collaborative paper on convergence set based enumerative FSM accepted in MICRO 2018.
  • 07/2018 The paper “A memristor-based optimization framework for AI applications” has been recognized as a Popular Paper in IEEE Circuits and Systems Magazine.
  • 07/2018 Yanzhi will serve as Program Committee member/Reviewer of GlobalSIP 2019 and BioCAS 2019.
  • 07/2018 One paper on ADMM-based weight pruning for DNNs accepted in ECCV 2018, with 20x+ weight pruning ratio for ImageNet applications.
  • 07/2018 One paper on ADMM-based adversarial attacks for DNNs accepted in ACM Multimedia 2018.
  • 06/2018 Yanzhi will work as an unpaid research assistant professor starting from Aug. 2018 at Syracuse University.
  • 06/2018 New journal paper on Spiking LSTM has been accepted in IEEE JETCAS, 2018.
  • 06/2018 Two papers on Defensive Dropout for DNNs and Superconducting Josephson Junctions (special paper) accepted in ICCAD 2018.
  • 06/2018 Yanzhi will serve as Program Committee member of AAAI 2019.
  • 06/2018 Yanzhi will attend and serve as session chair at Design Automation Conference (DAC) and SLIP at San Francisco.
  • 06/2018 The journal paper HEIF, a high-efficiency inference framework for deep learning, has been accepted in IEEE Trans. on CAD.
  • 06/2018 Yanzhi attended the IARPA site meeting at USC, LA.
  • 06/2018 Yanzhi attended the Second ACSIC Symposium on Frontiers in Computing (SOFC) and presented a poster on ADMM-based model compression for deep learning, Dallas.
  • 06/2018 Yanzhi attended the 1st Forum on Frontiers of Science & Engineering: Everything towards AI at Seattle.
  • 05/2018 Collaborative work on stochastic computing-based deep learning accepted in IEEE Trans. Computers.
  • 05/2018 Yanzhi will serve as TPC member in DATE, 2019.
  • 05/2018 Yanzhi will serve as TPC member in INFOCOM, 2019.
  • 04/2018 Yanzhi will serve as TPC member in ICCD, 2018.
  • 04/2018 Yanzhi will present a poster at ACSIC Symposium on Frontiers in Computing (SOFC-2018).
  • 04/2018 Collaborative papers accepted in ISVLSI, 2018.
  • 04/2018 One collaborative paper on recommendation systems using deep learning accepted in ICPR, 2018.
  • 04/2018 Yanzhi has participated in a CUSE Grant on “Quantum Information, Emerging Technologies and Fundamental Physics”.
  • 04/2018 Yanzhi attends the IARPA SuperTools TEM meeting at Synopsys.
  • 04/2018 Yanzhi will serve as TPC member in ACM/IEEE ASPDAC 2019.
  • 04/2018 Yanzhi will serve as an NSF panelist.
  • 04/2018 Yanzhi will serve in the panel on “Low-Power and Trusted Machine Learning” in GLS-VLSI 2018.
  • 04/2018 Yanzhi will serve as TPC member in INFOCOM 2019.
  • 04/2018 Yanzhi will serve as TPC member in ACM/IEEE International Conference on Computer Aided Design (ICCAD), 2018.
  • 03/2018 Two papers on systematic weight pruning and block-circulant recurrent neural networks accepted by ICLR workshop 2018.
  • 03/2018 Yanzhi will give a presentation on ADMM-based deep learning at Syracuse University.
  • 03/2018 Yanzhi gives a presentation on energy-efficient deep learning systems at Peking University.
  • 03/2018 Yanzhi gives a presentation on energy-efficient deep learning systems at Tsinghua University.
  • 03/2018 One invited paper on hardware deep learning systems has been accepted in GLS-VLSI 2018.
  • 03/2018 Yanzhi gives a presentation on energy-efficient deep learning systems at Yokohama National University, Japan.
  • 03/2018 One collaborative paper on stochastic computing-based deep learning systems has been accepted in IEEE Trans. on Computers, 2018.
  • 02/2018 Collaborative project with Dr. Qinru Qiu has been funded by Intel.
  • 02/2018 One paper on UAV Trajectory Control using Deep Reinforcement Learning is accepted by 2018 DAC Work-in-Progress Poster sessions.
  • 02/2018 One paper on JPEG-based defense mechanism against adversarial example attacks in deep learning systems has been accepted by DAC 2018.
  • 02/2018 Invited papers on deep reinforcement learning applications and memristor crossbar applications have been accepted by Elsevier Journal on Nano Communications, 2018.
  • 02/2018 Yanzhi gives a presentation on energy-efficient deep learning systems at Rice University.
  • 01/2018 Yanzhi will serve as a TPC member at ACM/IEEE CODES+ISSS conference, 2018.
  • 01/2018 Yanzhi will serve as publicity chair at IEEE SLIP 2018, which is co-located with DAC 2018 at San Francisco.
  • 01/2018 One paper on model-free control for distributed stream data processing using deep reinforcement learning is accepted by VLDB 2018.
  • 01/2018 Yanzhi gives an invited presentation on energy-efficient deep learning systems at Northeastern University.
  • 01/2018 One paper on hybrid energy storage for cloud computing systems is accepted by PLOS ONE.
  • 11/2017 One paper is accepted by INFOCOM 2018.
  • 11/2017 One paper receives Best Paper Nomination at ISQED. Congratulations to Xiaolong, Geng, Yipeng, and Ao!
  • 11/2017 One paper is accepted by FPGA 2018.
  • 11/2017 One paper is accepted by ASPLOS 2018.
  • 11/2017 One paper is accepted by AAAI 2018.
  • 11/2017 Three papers are accepted by DATE 2018.
  • 11/2017 One paper on memristor for AI applications is accepted by IEEE Circuits and Systems Magazine.
  • 11/2017 One paper on hybrid energy storage for cloud computing systems is conditionally accepted by PLOS ONE.
  • 11/2017 Yanzhi will serve as Track Chair of EDA at GLSVLSI, 2018.
  • 11/2017 Yanzhi gives an invited presentation on energy-efficient deep learning systems at Cornell University.
  • 11/2017 Yanzhi will give an invited presentation on energy-efficient deep learning systems at Air Force Lab.
  • 10/2017 Yanzhi serves as external reviewer of ASPLOS, 2018.
  • 10/2017 Yanzhi attends the IARPA Gold Flux project kickoff meeting in San Jose, CA.
  • 10/2017 Yanzhi visits University of California, San Diego.
  • 10/2017 Yanzhi gives an invited presentation on energy-efficient deep learning systems at University of Pittsburgh.
  • 10/2017 One paper is accepted by IEEE Design & Test Magazine.
  • 09/2017 Yanzhi visits University of California, Los Angeles.
  • 09/2017 Yanzhi gives an invited presentation on energy-efficient deep learning systems at University of Southern California.
  • 09/2017 Yanzhi gives an invited presentation on energy-efficient deep learning systems in New York City.
  • 09/2017 Two collaborative papers are accepted by ASP-DAC 2018.
  • 09/2017 One collaborative paper (with FIU and UCF) is nominated for the Best Paper Award at ASP-DAC 2018.
  • 08/2017 Yanzhi gives invited presentations on energy-efficient deep learning systems at Wuhan University and Huazhong University of Science and Technology.
  • 08/2017 Yanzhi serves as a panelist at the annual seminar at Peking University Center of Energy-Efficient Computing and Applications.
  • 08/2017 NSF CPS Medium Proposal has been funded, which focuses on intelligent UAVs.
  • 08/2017 The I/UCRC Planning Workshop has been held at University of Notre Dame. Thanks Dr. Yiyu Shi for organizing!
  • 08/2017 Yanzhi remotely presented research on Bayesian neural networks at the DARPA USC meeting.
  • 08/2017 One paper is accepted by ICCD 2017.
  • 08/2017 One paper is accepted by IEEE Trans. on Circuits and Systems II.
  • 08/2017 Yanzhi serves as TPC Member for DATE 2018.
  • 08/2017 Yanzhi serves as TPC Member for ISQED 2018.
  • 07/2017 Yanzhi serves as TPC Member for INFOCOM 2018.
  • 07/2017 One paper is accepted by ACM/IEEE International Symposium on Microarchitecture (MICRO).
  • 07/2017 One paper is accepted by IEEE International Conference on Healthcare Informatics.
  • 06/2017 NSF Medium Proposal (together with Arizona State University) has been funded, which focuses on deep learning techniques in wireless networking.
  • 06/2017 Yanzhi organizes a panel on deep learning and neuromorphic computing at IEEE SLIP, co-located with DAC 2017.
  • 06/2017 Receives equipment and license donation from Altera (Intel) and Xilinx.
  • 06/2017 Three papers (including one invited) are accepted by ACM/IEEE ICCAD.
  • 06/2017 One paper is accepted by IEEE Design and Test of Computers.
  • 05/2017 One paper is accepted as oral presentation by International Conference on Machine Learning (ICML).
  • 05/2017 One paper is accepted in International Symposium on Low Power Electronic Design.
  • 05/2017 One paper is accepted by IEEE Trans. on Sustainable Computing.
  • 05/2017 One paper is accepted by ACM Trans. on Cyber-Physical Systems.
  • 05/2017 Yanzhi serves as TPC Member for ASP-DAC 2018.
  • 05/2017 Yanzhi gives an invited presentation on energy-efficient deep learning systems at SUNY Buffalo.
  • 05/2017 Receives funding from IARPA on superconducting-based microprocessors, with University of Southern California as the leading institution.
  • 05/2017 Receives the Best Paper Award and Best Student Presentation Award at IEEE ICASSP, ranking top 3 among 2,000+ submissions. Congratulations to Sijia and Ao!
  • 05/2017 Three students (Ruizhe, Xiaolong, Hongjia) receive the A. Richard Newton Award at Design Automation Conference. Congratulations!
  • 05/2017 Yanzhi serves as TPC Member for ICCD 2017.