High Performance Computer Architecture - Georgia Tech



Key Information

  • Course
  • Online
  • When:
    Self-paced

The HPCA course covers performance measurement, pipelining, and techniques for extracting greater parallelism, such as branch prediction, out-of-order execution, and multiprocessing.


What Will You Learn in This Course?

Computer Architecture



Course Summary

This class is offered as CS6290 at Georgia Tech, where it is part of the Online Master of Science (OMS) program. Taking this course here will not earn credit toward the OMS degree.

The course begins with a lesson on performance measurement, which leads to a discussion on the necessity of performance improvement.
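The performance lessons cover Amdahl's Law, which bounds the overall speedup when only part of a program is improved. As a rough illustration of that idea (a sketch, not material from the course itself):

```python
def amdahl_speedup(fraction_enhanced, local_speedup):
    """Amdahl's Law: overall speedup = 1 / ((1 - f) + f / s),
    where f is the fraction of execution time that benefits
    and s is the speedup of that fraction."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / local_speedup)

# Speeding up 80% of a program by 4x yields only about 2.5x overall,
# because the untouched 20% comes to dominate the runtime.
print(amdahl_speedup(0.8, 4))
```

The takeaway the course builds on: the unimproved fraction limits what any single optimization can achieve.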

Pipelining, the first level of performance refinement, is reviewed. The weaknesses of pipelining are exposed and explored, and various solutions to these issues are studied. The student will learn hardware-, software-, and compiler-based solutions to these issues.
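Those pipelining weaknesses show up as stall cycles that push the effective CPI above the ideal of 1. A back-of-the-envelope sketch of that accounting (the numbers below are hypothetical, chosen only for illustration):

```python
def effective_cpi(base_cpi, stall_sources):
    """Effective CPI = base CPI + sum over stall sources of
    (stall events per instruction x penalty cycles per event)."""
    return base_cpi + sum(freq * penalty for freq, penalty in stall_sources)

# Hypothetical machine: ideal CPI of 1, a 1-cycle load-use stall on 20% of
# instructions, and a 3-cycle flush on mispredicted branches (15% of
# instructions are branches, 10% of those are mispredicted).
cpi = effective_cpi(1.0, [(0.20, 1), (0.15 * 0.10, 3)])
print(cpi)  # roughly 1.245
```

Each hardware, software, or compiler technique the course covers attacks one of these stall terms.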

Why Take This Course?

You will explore the fascinating field of computer architecture, studying the many methods developed to enhance computer performance. The trade-offs and compromises associated with each design, and their effects on processor development, form a captivating story that will make you a better computer scientist, regardless of your field of study.

Prerequisites and Requirements

You must be familiar with Assembly code, the C or C++ programming language, Unix or Linux, and the basics of pipelining.

See the Technology Requirements for using Udacity.

Syllabus Lesson 1: Introduction and Trends
  • Computer Architecture & Tech Trends
  • Moore's Law
  • Processor Speed, Cost, Power
  • Power Consumption
  • Fabrication Yield
Lesson 2: Performance Metrics and Evaluation
  • Measuring Performance
  • Benchmark Standards
  • Iron Law of Performance
  • Amdahl's Law
  • Lhadma's Law
Lesson 3: Pipelining Review
  • Pipeline CPI
  • Processor Pipeline Stalls
  • Data Dependencies
  • Pipelining Outro
Lesson 4: Branches
  • Branch Prediction
  • Direction Predictor
  • Hierarchical Predictors
  • PShare
Lesson 5: Predication
  • If Conversion
  • Conditional Move
  • MOVc Summary
Lesson 6: Instruction Level Parallelism (ILP)
  • ILP Intro
  • RAW Dependencies
  • WAW Dependencies
  • Duplicating Register Values
  • Instruction Level Parallelism (ILP)
Lesson 7: Instruction Scheduling
  • Improving IPC
  • Tomasulo's Algorithm
  • Load and Store Instructions
Lesson 8: ReOrder Buffer
  • Exceptions in Out Of Order Execution
  • Branch Misprediction
  • Hardware Organization with ROB
Lesson 9: Memory Ordering
  • Memory Access Ordering
  • When Does Memory Write Happen
  • Out of Order Load Store Execution
  • Store to Load Forwarding
  • LSQ, ROB, and RS
Lesson 10: Memory
  • How Memory Works
  • One Memory Bit SRAM
  • One Memory Bit DRAM
  • Fast Page Mode
  • Connecting DRAM To The Processor
Lesson 11: Multi-Processing
  • Flynn's Taxonomy of Parallel Machines
  • Multiprocessor Needs Parallel Programs!
  • Centralized Shared Memory
  • Distributed Shared Memory
  • Message Passing Vs Shared Memory
  • Shared Memory Hardware
  • SMT Hardware Changes
  • SMT and Cache Performance
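Among the syllabus topics above, the direction predictors of Lesson 4 are commonly introduced through the 2-bit saturating counter. A minimal sketch of that mechanism (an illustration under standard textbook conventions, not code from the course):

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0-1 predict not-taken,
    states 2-3 predict taken. Two wrong outcomes in a row are
    needed to flip a strongly held prediction."""

    def __init__(self, state=2):  # start weakly taken
        self.state = state

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        # Saturate at the ends rather than wrapping around.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop-like branch: taken except for one exit. The single not-taken
# outcome costs one misprediction but does not flip the prediction.
p = TwoBitPredictor()
hits = 0
for taken in [True, True, False, True, True]:
    if p.predict() == taken:
        hits += 1
    p.update(taken)
print(hits)  # 4 correct out of 5
```

The hysteresis is the point: a 1-bit predictor would mispredict twice per loop exit (once leaving, once re-entering), while the 2-bit counter mispredicts only once.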