ACM Transactions on Architecture and Code Optimization (TACO), Volume 12 Issue 4, January 2016

Reuse Distance-Based Probabilistic Cache Replacement
Subhasis Das, Tor M. Aamodt, William J. Dally
Article No.: 33
DOI: 10.1145/2818374

This article proposes Probabilistic Replacement Policy (PRP), a novel replacement policy that evicts the line with minimum estimated hit probability under optimal replacement instead of the line with maximum expected reuse distance. The...
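
As a rough illustration of evicting by estimated hit probability rather than expected reuse distance, the C++ sketch below (not the article's PRP; the per-line reuse-distance histograms, the probability model, and the lookahead window are all assumptions) picks as victim the line whose chance of being reused within a short window is smallest.

    // Illustrative sketch only: evict the cache way with the lowest estimated
    // probability of being reused within a short lookahead window.
    #include <cstdio>
    #include <vector>

    struct Line {
        unsigned long tag;
        std::vector<int> reuse_hist; // hypothetical histogram of reuse distances
        int age;                     // accesses since this line was last touched
    };

    // Assumed model: probability that a line already idle for 'age' accesses
    // gets reused within the next 'window' accesses.
    double hit_probability(const Line& l, int window) {
        int total = 0, within = 0;
        for (int d = 0; d < (int)l.reuse_hist.size(); ++d) {
            if (d <= l.age) continue;                 // those reuses are already past
            total += l.reuse_hist[d];
            if (d <= l.age + window) within += l.reuse_hist[d];
        }
        return total ? (double)within / total : 0.0;
    }

    int pick_victim(const std::vector<Line>& set, int window) {
        int victim = 0;
        double best = 2.0;                            // above any probability
        for (int i = 0; i < (int)set.size(); ++i) {
            double p = hit_probability(set[i], window);
            if (p < best) { best = p; victim = i; }
        }
        return victim;
    }

    int main() {
        std::vector<Line> set = {
            {0x10, {0, 0, 5, 9, 1, 0, 0, 0}, 1},      // mostly short reuse distances
            {0x20, {0, 0, 0, 0, 0, 0, 2, 8}, 1},      // mostly long reuse distances
        };
        printf("evict way %d\n", pick_victim(set, 4)); // expect way 1
        return 0;
    }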

MINIME-GPU: Multicore Benchmark Synthesizer for GPUs
Etem Deniz, Alper Sen
Article No.: 34
DOI: 10.1145/2818693

We introduce MINIME-GPU, a novel automated benchmark synthesis framework for graphics processing units (GPUs) that serves to speed up architectural simulation of modern GPU architectures. Our framework captures important characteristics of...

Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology
Li Tan, Zizhong Chen, Shuaiwen Leon Song
Article No.: 35
DOI: 10.1145/2822893

The ever-growing performance of supercomputers brings demanding requirements for energy efficiency and resilience, due to the rapidly expanding size and duration of use of large-scale computing systems. Many application/architecture-dependent...

Tumbler: An Effective Load-Balancing Technique for Multi-CPU Multicore Systems
Kishore Kumar Pusukuri, Rajiv Gupta, Laxmi N. Bhuyan
Article No.: 36
DOI: 10.1145/2827698

Schedulers used by modern OSs (e.g., Oracle Solaris 11™ and GNU/Linux) balance load by balancing the number of threads in run queues of different cores. While this approach is effective for a single CPU multicore system, we show that it can...
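
For reference, thread-count balancing of the kind described above reduces to the following C++ toy sketch (an illustration, not Tumbler or any real OS scheduler: the queue lengths and the imbalance threshold are made up), which migrates one thread from the longest run queue to the shortest.

    // Minimal sketch of thread-count load balancing: move one thread from the
    // longest run queue to the shortest one when the imbalance is large enough.
    #include <cstdio>
    #include <vector>
    #include <algorithm>

    int main() {
        // number of runnable threads queued on each core (assumed example data)
        std::vector<int> run_queue_len = {5, 1, 3, 2};

        auto busiest = std::max_element(run_queue_len.begin(), run_queue_len.end());
        auto idlest  = std::min_element(run_queue_len.begin(), run_queue_len.end());

        if (*busiest - *idlest > 1) {  // imbalance threshold, assumed
            --*busiest;                // migrate one thread
            ++*idlest;
        }
        for (size_t c = 0; c < run_queue_len.size(); ++c)
            printf("core %zu: %d threads\n", c, run_queue_len[c]);
        return 0;
    }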

Four Metrics to Evaluate Heterogeneous Multicores
Erik Tomusk, Christophe Dubach, Michael O’Boyle
Article No.: 37
DOI: 10.1145/2829950

Semiconductor device scaling has made single-ISA heterogeneous processors a reality. Heterogeneous processors contain a number of different CPU cores that all implement the same Instruction Set Architecture (ISA). This enables greater flexibility...

SPCM: The Striped Phase Change Memory
Morteza Hoseinzadeh, Mohammad Arjomand, Hamid Sarbazi-Azad
Article No.: 38
DOI: 10.1145/2829951

Phase Change Memory (PCM) devices are among the promising technologies to take the place of DRAM, with the aim of overcoming the obstacles of shrinking feature size and ever-growing leakage power. In exchange for...

Two-Level Hybrid Sampled Simulation of Multithreaded Applications
Chuntao Jiang, Zhibin Yu, Lieven Eeckhout, Hai Jin, Xiaofei Liao, Chengzhong Xu
Article No.: 39
DOI: 10.1145/2818353

Sampled microarchitectural simulation of single-threaded applications has been mature technology for over a decade now. Sampling multithreaded applications, on the other hand, is much more complicated. Not until very recently have researchers proposed...

Integrated Mapping and Synthesis Techniques for Network-on-Chip Topologies with Express Channels
Sandeep D'souza, Soumya J, Santanu Chattopadhyay
Article No.: 40
DOI: 10.1145/2831233

The addition of express channels to a traditional mesh network-on-chip (NoC) has emerged as a viable solution to solve the problem of high latency. In this article, we address the problem of integrated mapping and synthesis for express...

PARSECSs: Evaluating the Impact of Task Parallelism in the PARSEC Benchmark Suite
Dimitrios Chasapis, Marc Casas, Miquel Moretó, Raul Vidal, Eduard Ayguadé, Jesús Labarta, Mateo Valero
Article No.: 41
DOI: 10.1145/2829952

In this work, we show how parallel applications can be implemented efficiently using task parallelism. We also evaluate the benefits of such a parallel paradigm with respect to other approaches. We use the PARSEC benchmark suite as our test bed,...

A Framework for Application-Guided Task Management on Heterogeneous Embedded Systems
Francisco Gaspar, Luis Taniça, Pedro Tomás, Aleksandar Ilic, Leonel Sousa
Article No.: 42
DOI: 10.1145/2835177

In this article, we propose a general framework for fine-grain application-aware task management in heterogeneous embedded platforms, which allows integration of different mechanisms for efficient resource utilization, frequency scaling, and...

Managing Mismatches in Voltage Stacking with CoreUnfolding
Ehsan K. Ardestani, Rafael Trapani Possignolo, Jose Luis Briz, Jose Renau
Article No.: 43
DOI: 10.1145/2835178

Between 5% and 25% of power can be wasted before it is delivered to the computational resources on a die, due to inefficiencies of voltage regulators and resistive loss. The power delivery could benefit if, at the same power, the...

FaultSim: A Fast, Configurable Memory-Reliability Simulator for Conventional and 3D-Stacked Systems
Prashant J. Nair, David A. Roberts, Moinuddin K. Qureshi
Article No.: 44
DOI: 10.1145/2831234

As memory systems scale, maintaining their Reliability, Availability, and Serviceability (RAS) is becoming more complex. To make matters worse, recent studies of DRAM failures in data centers and supercomputer environments have highlighted that...

Adaptive Correction of Sampling Bias in Dynamic Call Graphs
Byeongcheol Lee
Article No.: 45
DOI: 10.1145/2840806

This article introduces a practical low-overhead adaptive technique of correcting sampling bias in profiling dynamic call graphs. Timer-based sampling keeps the overhead low but sampling bias lowers the accuracy when either observable call events...
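
A minimal C++ sketch of timer-based call-graph sampling on a synthetic trace (the trace, period, and virtual-time model are assumptions, and the bias-correction step itself is not shown) illustrates the bias: long-running calls collect samples at every tick, while short, frequent calls between ticks are under-counted.

    // Sketch of timer-based call-graph sampling on a synthetic call trace.
    // Only calls that are "live" at a sampling tick are counted, so short,
    // frequent calls between ticks are under-represented (sampling bias).
    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Call { std::string caller, callee; int start, end; }; // virtual time

    int main() {
        // hypothetical trace: A->B is long but rare, A->C is short but frequent
        std::vector<Call> trace = {
            {"A", "B", 0, 90},
            {"A", "C", 91, 92}, {"A", "C", 95, 96}, {"A", "C", 98, 99},
        };
        const int period = 10;  // sampling period in virtual time units
        std::map<std::pair<std::string, std::string>, int> edge_samples;

        for (int t = 0; t < 100; t += period)          // timer ticks
            for (const Call& c : trace)
                if (c.start <= t && t <= c.end)        // call live at this tick
                    ++edge_samples[{c.caller, c.callee}];

        for (const auto& e : edge_samples)
            printf("%s -> %s : %d samples\n",
                   e.first.first.c_str(), e.first.second.c_str(), e.second);
        // A->B gets ~10 samples; A->C, though called 3 times, may get none.
        return 0;
    }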

Fence Placement for Legacy Data-Race-Free Programs via Synchronization Read Detection
Andrew J. McPherson, Vijay Nagarajan, Susmit Sarkar, Marcelo Cintra
Article No.: 46
DOI: 10.1145/2835179

Shared-memory programmers traditionally assumed Sequential Consistency (SC), but modern systems have relaxed memory consistency. Here, the trend in languages is toward Data-Race-Free (DRF) models, where, assuming annotated synchronizations and the...
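
To make the fence-placement problem concrete, the C++ sketch below shows a simple flag/data handshake (an assumed example, not the article's detection algorithm): the program is data-race-free given the synchronization on flag, and a fence placed after the synchronization read keeps the plain read of data from being reordered before it on relaxed hardware.

    // Sketch of a flag/data handshake under a relaxed memory model. A DRF
    // program assumes SC; on relaxed hardware a fence (or acquire/release)
    // around the synchronization accesses to 'flag' restores that guarantee.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    int data = 0;                               // ordinary (non-atomic) data
    std::atomic<int> flag{0};                   // synchronization variable

    void producer() {
        data = 42;                                            // plain write
        std::atomic_thread_fence(std::memory_order_release);  // fence before sync write
        flag.store(1, std::memory_order_relaxed);             // synchronization write
    }

    void consumer() {
        while (flag.load(std::memory_order_relaxed) == 0) {}  // synchronization read
        std::atomic_thread_fence(std::memory_order_acquire);  // fence placed after it
        printf("data = %d\n", data);                          // guaranteed to see 42
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join(); t2.join();
        return 0;
    }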

Optimizing Control Transfer and Memory Virtualization in Full System Emulators
Ding-Yong Hong, Chun-Chen Hsu, Cheng-Yi Chou, Wei-Chung Hsu, Pangfeng Liu, Jan-Jan Wu
Article No.: 47
DOI: 10.1145/2837027

Full system emulators provide virtual platforms for several important applications, such as kernel and system software development, co-verification with cycle-accurate CPU simulators, or application development for hardware still in development....

The Polyhedral Model of Nonlinear Loops
Aravind Sukumaran-Rajam, Philippe Clauss
Article No.: 48
DOI: 10.1145/2838734

Runtime code optimization and speculative execution are becoming increasingly prominent to leverage performance in the current multi- and many-core era. However, a wider and more efficient use of such techniques is mainly hampered by the...

Citadel: Efficiently Protecting Stacked Memory from TSV and Large Granularity Failures
Prashant J. Nair, David A. Roberts, Moinuddin K. Qureshi
Article No.: 49
DOI: 10.1145/2840807

Stacked memory modules are likely to be tightly integrated with the processor. It is vital that these memory modules operate reliably, as memory failure can require the replacement of the entire socket. To make matters worse, stacked memory...

Automatic Vectorization of Interleaved Data Revisited
Andrew Anderson, Avinash Malik, David Gregg
Article No.: 50
DOI: 10.1145/2838735

Automatically exploiting short vector instruction sets (SSE, AVX, NEON) is a critically important task for optimizing compilers. Vector instructions typically work best on data that is contiguous in memory, and operating on non-contiguous data...
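
A small C++ example of the interleaved access pattern in question (the data layout and loop are illustrative assumptions): the loop reads every second element of an interleaved array, so a vectorizer must introduce deinterleaving shuffles or gathers instead of plain contiguous vector loads.

    // Interleaved (strided) access pattern that challenges auto-vectorization:
    // real and imaginary parts alternate in memory, but the loop touches only
    // every second element, so contiguous vector loads alone are not enough.
    #include <cstdio>

    int main() {
        // interleaved complex numbers: re0, im0, re1, im1, ...
        float z[8] = {1.f, 10.f, 2.f, 20.f, 3.f, 30.f, 4.f, 40.f};

        float sum_re = 0.f;
        for (int i = 0; i < 8; i += 2)   // stride-2 access over the real parts
            sum_re += z[i];

        printf("sum of real parts = %f\n", sum_re);  // 10.0
        return 0;
    }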

A Filtering Mechanism to Reduce Network Bandwidth Utilization of Transaction Execution
Lihang Zhao, Lizhong Chen, Woojin Choi, Jeffrey Draper
Article No.: 51
DOI: 10.1145/2837028

Hardware Transactional Memory (HTM) relies heavily on the on-chip network for intertransaction communication. However, the network bandwidth utilization of transactions has been largely neglected in HTM designs. In this work, we propose a cost...

Enabling PGAS Productivity with Hardware Support for Shared Address Mapping: A UPC Case Study
Olivier Serres, Abdullah Kayi, Ahmad Anbar, Tarek El-Ghazawi
Article No.: 52
DOI: 10.1145/2842686

Due to its rich memory model, the partitioned global address space (PGAS) parallel programming model strikes a balance between locality-awareness and the ease of use of the global address space model. Although locality-awareness can lead to high...

On How to Accelerate Iterative Stencil Loops: A Scalable Streaming-Based Approach
Riccardo Cattaneo, Giuseppe Natale, Carlo Sicignano, Donatella Sciuto, Marco Domenico Santambrogio
Article No.: 53
DOI: 10.1145/2842615

In high-performance systems, stencil computations play a crucial role as they appear in a variety of different fields of application, ranging from partial differential equation solving, to computer simulation of particles’...
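
For context, the loop nest such accelerators target looks like the iterative stencil below, a minimal Jacobi-style 5-point stencil in C++ (grid size, boundary handling, and iteration count are arbitrary assumptions, unrelated to the article's streaming design).

    // Minimal iterative 5-point Jacobi stencil: each sweep replaces every
    // interior point with the average of its four neighbours, double-buffered.
    #include <cstdio>
    #include <utility>
    #include <vector>

    int main() {
        const int N = 8, iters = 10;
        std::vector<double> a(N * N, 0.0), b(N * N, 0.0);
        a[(N / 2) * N + N / 2] = 100.0;                  // initial point source

        for (int t = 0; t < iters; ++t) {
            for (int i = 1; i < N - 1; ++i)
                for (int j = 1; j < N - 1; ++j)
                    b[i * N + j] = 0.25 * (a[(i - 1) * N + j] + a[(i + 1) * N + j] +
                                           a[i * N + j - 1] + a[i * N + j + 1]);
            std::swap(a, b);                             // double buffering
        }
        printf("center value after %d iterations: %f\n",
               iters, a[(N / 2) * N + N / 2]);
        return 0;
    }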

Falcon: A Graph Manipulation Language for Heterogeneous Systems
Unnikrishnan Cheramangalath, Rupesh Nasre, Y. N. Srikant
Article No.: 54
DOI: 10.1145/2842618

Graph algorithms have been shown to possess enough parallelism to keep several computing resources busy—even hundreds of cores on a GPU. Unfortunately, tuning their implementation for efficient execution on a particular hardware...

FluidCheck: A Redundant Threading-Based Approach for Reliable Execution in Manycore Processors
Rajshekar Kalayappan, Smruti R. Sarangi
Article No.: 55
DOI: 10.1145/2842620

Soft errors have become a serious cause of concern with shrinking feature sizes. The ability to accommodate complex, Simultaneous Multithreading (SMT) cores on a single chip presents a unique opportunity to achieve reliable execution, safe from...

Rethinking Memory Permissions for Protection Against Cross-Layer Attacks
Jesse Elwell, Ryan Riley, Nael Abu-Ghazaleh, Dmitry Ponomarev, Iliano Cervesato
Article No.: 56
DOI: 10.1145/2842621

The inclusive permissions structure (e.g., the Intel ring model) of modern commodity CPUs provides privileged system software layers with arbitrary permissions to access and modify client processes, allowing them to manage these clients and the...

Resistive GP-SIMD Processing-In-Memory
Amir Morad, Leonid Yavits, Shahar Kvatinsky, Ran Ginosar
Article No.: 57
DOI: 10.1145/2845084

GP-SIMD, a novel hybrid general-purpose SIMD architecture, addresses the challenge of data synchronization by in-memory computing, through combining data storage and massive parallel processing. In this article, we explore a resistive...

Iteration Interleaving-Based SIMD Lane Partition
Yaohua Wang, Dong Wang, Shuming Chen, Zonglin Liu, Shenggang Chen, Xiaowen Chen, Xu Zhou
Article No.: 58
DOI: 10.1145/2847253

The efficacy of single instruction, multiple data (SIMD) architectures is limited when handling divergent control flows. This circumstance results in SIMD fragments using only a subset of the available lanes. We propose an iteration...
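
The lane-underutilization problem can be seen in a tiny C++ emulation of a masked SIMD loop (vector width, data, and condition are assumptions; the article's iteration-interleaving partition itself is not shown): only the lanes whose predicate holds do useful work in each vector step.

    // Emulated SIMD loop with a data-dependent branch: in each vector step of
    // width W, only the lanes whose mask is set do useful work, so average
    // lane utilization drops under divergent control flow.
    #include <cstdio>

    int main() {
        const int W = 4, N = 16;
        int x[N];
        for (int i = 0; i < N; ++i) x[i] = i;

        int active = 0;
        for (int i = 0; i < N; i += W) {            // one "vector instruction" per step
            for (int lane = 0; lane < W; ++lane) {  // emulate the W lanes
                bool mask = (x[i + lane] % 3 == 0); // divergent condition
                if (mask) {                         // only masked lanes do work
                    x[i + lane] *= 2;
                    ++active;
                }
            }
        }
        printf("average lane utilization: %.0f%%\n", 100.0 * active / N);
        return 0;
    }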

Integer Linear Programming-Based Scheduling for Transport Triggered Architectures
Tomi Äijö, Pekka Jääskeläinen, Tapio Elomaa, Heikki Kultala, Jarmo Takala
Article No.: 59
DOI: 10.1145/2845082

Static multi-issue machines, such as traditional Very Long Instruction Word (VLIW) architectures, move complexity from the hardware to the compiler. This is motivated by the ability to support high degrees of instruction-level parallelism...

Sensible Energy Accounting with Abstract Metering for Multicore Systems
Qixiao Liu, Miquel Moreto, Jaume Abella, Francisco J. Cazorla, Daniel A. Jimenez, Mateo Valero
Article No.: 60
DOI: 10.1145/2842616

Chip multicore processors (CMPs) are the preferred processing platform across different domains such as data centers, real-time systems, and mobile devices. In all those domains, energy is arguably the most expensive resource in a computing...

Symmetry-Agnostic Coordinated Management of the Memory Hierarchy in Multicore Systems
Miao Zhou, Yu Du, Bruce Childers, Daniel Mosse, Rami Melhem
Article No.: 61
DOI: 10.1145/2847254

In a multicore system, many applications share the last-level cache (LLC) and memory bandwidth. These resources need to be carefully managed in a coordinated way to maximize performance. DRAM is still the technology of choice in most systems....

RFVP: Rollback-Free Value Prediction with Safe-to-Approximate Loads
Amir Yazdanbakhsh, Gennady Pekhimenko, Bradley Thwaites, Hadi Esmaeilzadeh, Onur Mutlu, Todd C. Mowry
Article No.: 62
DOI: 10.1145/2836168

This article aims to tackle two fundamental memory bottlenecks: limited off-chip bandwidth (bandwidth wall) and long access latency (memory wall). To achieve this goal, our approach exploits the inherent error resilience of a wide range of...

Simultaneous Multi-Layer Access: Improving 3D-Stacked Memory Bandwidth at Low Cost
Donghyuk Lee, Saugata Ghose, Gennady Pekhimenko, Samira Khan, Onur Mutlu
Article No.: 63
DOI: 10.1145/2832911

3D-stacked DRAM alleviates the limited memory bandwidth bottleneck that exists in modern systems by leveraging through-silicon vias (TSVs) to deliver higher external memory channel bandwidth. Today’s systems, however, cannot fully...

JavaScript Parallelizing Compiler for Exploiting Parallelism from Data-Parallel HTML5 Applications
Yeoul Na, Seon Wook Kim, Youngsun Han
Article No.: 64
DOI: 10.1145/2846098

With the advent of the HTML5 standard, JavaScript is increasingly processing computationally intensive, data-parallel workloads. Thus, the enhancement of JavaScript performance has been emphasized because the performance gap between JavaScript and...

DASH: Deadline-Aware High-Performance Memory Scheduler for Heterogeneous Systems with Hardware Accelerators
Hiroyuki Usui, Lavanya Subramanian, Kevin Kai-Wei Chang, Onur Mutlu
Article No.: 65
DOI: 10.1145/2847255

Modern SoCs integrate multiple CPU cores and hardware accelerators (HWAs) that share the same main memory system, causing interference among memory requests from different agents. The result of this interference, if it is not controlled well, is...

A Compile-Time Optimization Method for WCET Reduction in Real-Time Embedded Systems through Block Formation
Morteza Mohajjel Kafshdooz, Mohammadkazem Taram, Sepehr Assadi, Alireza Ejlali
Article No.: 66
DOI: 10.1145/2845083

Compile-time optimizations play an important role in the efficient design of real-time embedded systems. Usually, compile-time optimizations are designed to reduce average-case execution time (ACET). While ACET is a main concern in...