Author Chen, Yu-Ting, author
Title Customizable computing / Yu-Ting Chen, Jason Cong, Michael Gill, Glenn Reinman, and Bingjun Xiao
Publication San Rafael, California (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, 2015
ISBN 9781627057684 e-book
9781627057677 print
Standard Number 10.2200/S00650ED1V01Y201505CAC033 doi
Description 1 online resource (xi, 106 pages) : illustrations
text rdacontent
electronic isbdmedia
online resource rdacarrier
Series Synthesis lectures on computer architecture, 1935-3243 ; # 33
Synthesis digital library of engineering and computer science
Synthesis lectures on computer architecture ; # 33. 1935-3243
Notes Part of: Synthesis digital library of engineering and computer science
Includes bibliographical references (pages 89-103)
1. Introduction --
2. Road map -- 2.1 Customizable system-on-chip design -- 2.1.1 Compute resources -- 2.1.2 On-chip memory hierarchy -- 2.1.3 Network-on-chip -- 2.2 Software layer --
3. Customization of cores -- 3.1 Introduction -- 3.2 Dynamic core scaling and defeaturing -- 3.3 Core fusion -- 3.4 Customized instruction set extensions -- 3.4.1 Vector instructions -- 3.4.2 Custom compute engines -- 3.4.3 Reconfigurable instruction sets -- 3.4.4 Compiler support for custom instructions --
4. Loosely coupled compute engines -- 4.1 Introduction -- 4.2 Loosely coupled accelerators -- 4.2.1 Wire-speed processor -- 4.2.2 Comparing hardware and software LCA management -- 4.2.3 Utilizing LCAs -- 4.3 Accelerators using field programmable gate arrays -- 4.4 Coarse-grain reconfigurable arrays -- 4.4.1 Static mapping -- 4.4.2 Run-time mapping -- 4.4.3 CHARM -- 4.4.4 Using composable accelerators --
5. On-chip memory customization -- 5.1 Introduction -- 5.1.1 Caches and buffers (scratchpads) -- 5.1.2 On-chip memory system customizations -- 5.2 CPU cache customizations -- 5.2.1 Coarse-grain customization strategies -- 5.2.2 Fine-grain customization strategies -- 5.3 Buffers for accelerator-rich architectures -- 5.3.1 Shared buffer system design for accelerators -- 5.3.2 Customization of buffers inside an accelerator -- 5.4 Providing buffers in caches for CPUs and accelerators -- 5.4.1 Providing software-managed scratchpads for CPUs -- 5.4.2 Providing buffers for accelerators -- 5.5 Caches with disparate memory technologies -- 5.5.1 Coarse-grain customization strategies -- 5.5.2 Fine-grain customization strategies --
6. Interconnect customization -- 6.1 Introduction -- 6.2 Topology customization -- 6.2.1 Application-specific topology synthesis -- 6.2.2 Reconfigurable shortcut insertion -- 6.2.3 Partial crossbar synthesis and reconfiguration -- 6.3 Routing customization -- 6.3.1 Application-aware deadlock-free routing -- 6.3.2 Data flow synthesis -- 6.4 Customization enabled by new device/circuit technologies -- 6.4.1 Optical interconnects -- 6.4.2 Radio-frequency interconnects -- 6.4.3 RRAM-based interconnects --
7. Concluding remarks -- Bibliography -- Authors' biographies
Abstract freely available; full-text restricted to subscribers or individual document purchasers
Compendex
INSPEC
Google scholar
Google book search
Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction to the recent developments in energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory customization, and interconnect optimization. In addition to discussing the general techniques and classifying the different approaches used in each area, we also highlight and illustrate some of the most successful design examples in each category and discuss their impact on performance and energy efficiency. We hope that this work captures the state-of-the-art research and development on customizable architectures and serves as a useful reference for further research, design, and implementation toward large-scale deployment in future computing systems.
Also available in print
Mode of access: World Wide Web
System requirements: Adobe Acrobat Reader
Title from PDF title page (viewed on July 25, 2015)
Link Print version: 9781627057677
Subject Computer architecture
accelerator architectures
memory architecture
multiprocessor interconnection
parallel architectures
reconfigurable architectures
memory
green computing
Alt Author Cong, Jason, author
Gill, Michael, author
Reinman, Glenn, author
Xiao, Bingjun, author