Book Introduction

Computer Systems (English Edition) 2025 | PDF | Epub | mobi | Kindle e-book editions | Baidu Cloud download

Computer Systems (English Edition)
  • Author: Bryant, R. E., et al. (USA)
  • Publisher: Publishing House of Electronics Industry, Beijing
  • ISBN: 7505396242
  • Publication year: 2004
  • Listed page count: 978
  • File size: 119 MB
  • File page count: 40072383
  • Subject headings: Computer systems - Textbook - English

PDF Download


Click here for the online PDF e-book download of this title [recommended: cloud decompression, quick and convenient]; the book is downloaded directly in PDF format and works on both mobile and PC.
Torrent download [fast via BT]. Tip: please use the BT download client FDM. [Software download page] [Direct-link download: convenient but slow] [Read this book online] [Get the extraction password online]

Download Notes

Computer Systems (English Edition) PDF e-book download

The downloaded file is a RAR archive; use decompression software to extract the PDF.

We recommend downloading with the BT client Free Download Manager (FDM), which is free, ad-free, and cross-platform. All resources on this site are packaged as BT torrents, so a dedicated BT client such as BitComet, qBittorrent, or uTorrent is required. Xunlei is currently not recommended because this site's resources are not popular; once a resource becomes popular, Xunlei can also be used if installed.

(The file page count should be greater than the listed page count, except for multi-volume e-books such as two- or three-volume sets.)

Note: all archives on this site require an extraction password. Click here to download the archive extraction tool.

Table of Contents

1 A Tour of Computer Systems1

1.1 Information is Bits+Context2

1.2 Programs Are Translated by Other Programs into Different Forms4

1.3 It Pays to Understand How Compilation Systems Work6

1.4 Processors Read and Interpret Instructions Stored in Memory6

1.4.1 Hardware Organization of a System7

1.4.2 Running the hello Program9

1.5 Caches Matter11

1.6 Storage Devices Form a Hierarchy12

1.7 The Operating System Manages the Hardware13

1.7.1 Processes15

1.7.2 Threads16

1.7.3 Virtual Memory16

1.7.4 Files18

1.8 Systems Communicate With Other Systems Using Networks18

1.9 The Next Step20

1.10 Summary20

Bibliographic Notes21

Part Ⅰ Program Structure and Execution24

2 Representing and Manipulating Information24

2.1 Information Storage28

2.1.1 Hexadecimal Notation28

2.1.2 Words32

2.1.3 Data Sizes32

2.1.4 Addressing and Byte Ordering34

2.1.5 Representing Strings40

2.1.6 Representing Code41

2.1.7 Boolean Algebras and Rings42

2.1.8 Bit-Level Operations in C46

2.1.9 Logical Operations in C49

2.1.10 Shift Operations in C50

2.2 Integer Representations51

2.2.1 Integral Data Types51

2.2.2 Unsigned and Two's-Complement Encodings51

2.2.3 Conversions Between Signed and Unsigned56

2.2.4 Signed vs. Unsigned in C59

2.2.5 Expanding the Bit Representation of a Number61

2.2.6 Truncating Numbers63

2.2.7 Advice on Signed vs.Unsigned65

2.3 Integer Arithmetic65

2.3.1 Unsigned Addition66

2.3.2 Two's-Complement Addition69

2.3.3 Two's-Complement Negation72

2.3.4 Unsigned Multiplication74

2.3.5 Two's-Complement Multiplication75

2.3.6 Multiplying by Powers of Two76

2.3.7 Dividing by Powers of Two77

2.4 Floating Point80

2.4.1 Fractional Binary Numbers81

2.4.2 IEEE Floating-Point Representation83

2.4.3 Example Numbers85

2.4.4 Rounding89

2.4.5 Floating-Point Operations91

2.4.6 Floating Point in C92

2.5 Summary98

Bibliographic Notes99

Homework Problems99

Solutions to Practice Problems108

3 Machine-Level Representation of Programs122

3.1 A Historical Perspective125

3.2 Program Encodings128

3.2.1 Machine-Level Code129

3.2.2 Code Examples130

3.2.3 A Note on Formatting133

3.3 Data Formats135

3.4 Accessing Information136

3.4.1 Operand Specifiers137

3.4.2 Data Movement Instructions138

3.4.3 Data Movement Example141

3.5 Arithmetic and Logical Operations143

3.5.1 Load Effective Address143

3.5.2 Unary and Binary Operations144

3.5.3 Shift Operations145

3.5.4 Discussion146

3.5.5 Special Arithmetic Operations147

3.6 Control148

3.6.1 Condition Codes149

3.6.2 Accessing the Condition Codes150

3.6.3 Jump Instructions and their Encodings152

3.6.4 Translating Conditional Branches156

3.6.5 Loops158

3.6.6 Switch Statements166

3.7 Procedures170

3.7.1 Stack Frame Structure170

3.7.2 Transferring Control172

3.7.3 Register Usage Conventions173

3.7.4 Procedure Example174

3.7.5 Recursive Procedures178

3.8 Array Allocation and Access180

3.8.1 Basic Principles180

3.8.2 Pointer Arithmetic182

3.8.3 Arrays and Loops183

3.8.4 Nested Arrays183

3.8.5 Fixed Size Arrays186

3.8.6 Dynamically Allocated Arrays188

3.9 Heterogeneous Data Structures191

3.9.1 Structures191

3.9.2 Unions194

3.10 Alignment198

3.11 Putting it Together: Understanding Pointers201

3.12 Life in the Real World: Using the GDB Debugger204

3.13 Out-of-Bounds Memory References and Buffer Overflow206

3.14 Floating-Point Code211

3.14.1 Floating-Point Registers211

3.14.2 Stack Evaluation of Expressions212

3.14.3 Floating-Point Data Movement and Conversion Operations215

3.14.4 Floating-Point Arithmetic Instructions217

3.14.5 Using Floating Point in Procedures220

3.14.6 Testing and Comparing Floating-Point Values221

3.15 Embedding Assembly Code in C Programs223

3.15.1 Basic Inline Assembly224

3.15.2 Extended Form of asm226

3.16 Summary230

Bibliographic Notes231

Homework Problems231

Solutions to Practice Problems238

4 Processor Architecture254

4.1 The Y86 Instruction Set Architecture258

4.2 Logic Design and the Hardware Control Language HCL271

4.2.1 Logic Gates271

4.2.2 Combinational Circuits and HCL Boolean Expressions272

4.2.3 Word-Level Combinational Circuits and HCL Integer Expressions274

4.2.4 Set Membership278

4.2.5 Memory and Clocking279

4.3 Sequential Y86 Implementations280

4.3.1 Organizing Processing into Stages281

4.3.2 SEQ Hardware Structure291

4.3.3 SEQ Timing295

4.3.4 SEQ Stage Implementations298

4.3.5 SEQ+: Rearranging the Computation Stages305

4.4 General Principles of Pipelining309

4.4.1 Computational Pipelines309

4.4.2 A Detailed Look at Pipeline Operation311

4.4.3 Limitations of Pipelining313

4.4.4 Pipelining a System with Feedback315

4.5 Pipelined Y86 Implementations317

4.5.1 Inserting Pipeline Registers317

4.5.2 Rearranging and Relabeling Signals321

4.5.3 Next PC Prediction322

4.5.4 Pipeline Hazards323

4.5.5 Avoiding Data Hazards by Stalling328

4.5.6 Avoiding Data Hazards by Forwarding330

4.5.7 Load/Use Data Hazards335

4.5.8 PIPE Stage Implementations337

4.5.9 Pipeline Control Logic343

4.5.10 Performance Analysis352

4.5.11 Unfinished Business354

4.6 Summary359

4.6.1 Y86 Simulators360

Bibliographic Notes360

Homework Problems360

Solutions to Practice Problems365

5 Optimizing Program Performance376

5.1 Capabilities and Limitations of Optimizing Compilers379

5.2 Expressing Program Performance382

5.3 Program Example384

5.4 Eliminating Loop Inefficiencies387

5.5 Reducing Procedure Calls391

5.6 Eliminating Unneeded Memory References393

5.7 Understanding Modern Processors395

5.7.1 Overall Operation395

5.7.2 Functional Unit Performance399

5.7.3 A Closer Look at Processor Operation400

5.8 Reducing Loop Overhead408

5.9 Converting to Pointer Code412

5.10 Enhancing Parallelism415

5.10.1 Loop Splitting415

5.10.2 Register Spilling420

5.10.3 Limits to Parallelism421

5.11 Putting it Together: Summary of Results for Optimizing Combining Code423

5.11.1 Floating-Point Performance Anomaly423

5.11.2 Changing Platforms425

5.12 Branch Prediction and Misprediction Penalties425

5.13 Understanding Memory Performance429

5.13.1 Load Latency429

5.13.2 Store Latency431

5.14 Life in the Real World: Performance Improvement Techniques436

5.15 Identifying and Eliminating Performance Bottlenecks437

5.15.1 Program Profiling437

5.15.2 Using a Profiler to Guide Optimization439

5.15.3 Amdahl's Law443

5.16 Summary444

Bibliographic Notes445

Homework Problems445

Solutions to Practice Problems450

6 The Memory Hierarchy454

6.1 Storage Technologies457

6.1.1 Random-Access Memory457

6.1.2 Disk Storage464

6.1.3 Storage Technology Trends476

6.2 Locality478

6.2.1 Locality of References to Program Data478

6.2.2 Locality of Instruction Fetches480

6.2.3 Summary of Locality481

6.3 The Memory Hierarchy482

6.3.1 Caching in the Memory Hierarchy484

6.3.2 Summary of Memory Hierarchy Concepts486

6.4 Cache Memories487

6.4.1 Generic Cache Memory Organization488

6.4.2 Direct-Mapped Caches490

6.4.3 Set Associative Caches497

6.4.4 Fully Associative Caches499

6.4.5 Issues with Writes503

6.4.6 Instruction Caches and Unified Caches504

6.4.7 Performance Impact of Cache Parameters505

6.5 Writing Cache-Friendly Code507

6.6 Putting it Together: The Impact of Caches on Program Performance511

6.6.1 The Memory Mountain512

6.6.2 Rearranging Loops to Increase Spatial Locality517

6.6.3 Using Blocking to Increase Temporal Locality520

6.7 Putting It Together: Exploiting Locality in Your Programs523

6.8 Summary524

Bibliographic Notes524

Homework Problems525

Solutions to Practice Problems531

Part Ⅱ Running Programs on a System538

7 Linking538

7.1 Compiler Drivers541

7.2 Static Linking542

7.3 Object Files543

7.4 Relocatable Object Files544

7.5 Symbols and Symbol Tables545

7.6 Symbol Resolution548

7.6.1 How Linkers Resolve Multiply Defined Global Symbols549

7.6.2 Linking with Static Libraries553

7.6.3 How Linkers Use Static Libraries to Resolve References556

7.7 Relocation557

7.7.1 Relocation Entries558

7.7.2 Relocating Symbol References558

7.8 Executable Object Files561

7.9 Loading Executable Object Files564

7.10 Dynamic Linking with Shared Libraries566

7.11 Loading and Linking Shared Libraries from Applications568

7.12 Position-Independent Code (PIC)570

7.12.1 PIC Data References572

7.12.2 PIC Function Calls572

7.13 Tools for Manipulating Object Files574

7.14 Summary575

Bibliographic Notes575

Homework Problems576

Solutions to Practice Problems582

8 Exceptional Control Flow584

8.1 Exceptions587

8.1.1 Exception Handling588

8.1.2 Classes of Exceptions590

8.1.3 Exceptions in Intel Processors592

8.2 Processes594

8.2.1 Logical Control Flow594

8.2.2 Private Address Space595

8.2.3 User and Kernel Modes596

8.2.4 Context Switches597

8.3 System Calls and Error Handling599

8.4 Process Control600

8.4.1 Obtaining Process IDs600

8.4.2 Creating and Terminating Processes600

8.4.3 Reaping Child Processes605

8.4.4 Putting Processes to Sleep610

8.4.5 Loading and Running Programs611

8.4.6 Using fork and execve to Run Programs614

8.5 Signals617

8.5.1 Signal Terminology617

8.5.2 Sending Signals619

8.5.3 Receiving Signals623

8.5.4 Signal Handling Issues625

8.5.5 Portable Signal Handling631

8.5.6 Explicitly Blocking Signals633

8.6 Nonlocal Jumps635

8.7 Tools for Manipulating Processes638

8.8 Summary638

Bibliographic Notes639

Homework Problems639

Solutions to Practice Problems645

9 Measuring Program Execution Time650

9.1 The Flow of Time on a Computer System653

9.1.1 Process Scheduling and Timer Interrupts654

9.1.2 Time from an Application Program's Perspective655

9.2 Measuring Time by Interval Counting658

9.2.1 Operation658

9.2.2 Reading the Process Timers659

9.2.3 Accuracy of Process Timers660

9.3 Cycle Counters663

9.3.1 IA32 Cycle Counters663

9.4 Measuring Program Execution Time with Cycle Counters665

9.4.1 The Effects of Context Switching665

9.4.2 Caching and Other Effects667

9.4.3 The K-Best Measurement Scheme671

9.5 Time-of-Day Measurements680

9.6 Putting it Together: An Experimental Protocol683

9.7 Looking into the Future684

9.8 Life in the Real World: An Implementation of the K-Best Measurement Scheme684

9.9 Lessons Learned685

9.10 Summary686

Bibliographic Notes686

Homework Problems687

Solutions to Practice Problems688

10 Virtual Memory690

10.1 Physical and Virtual Addressing693

10.2 Address Spaces694

10.3 VM as a Tool for Caching695

10.3.1 DRAM Cache Organization696

10.3.2 Page Tables696

10.3.3 Page Hits698

10.3.4 Page Faults698

10.3.5 Allocating Pages700

10.3.6 Locality to the Rescue Again700

10.4 VM as a Tool for Memory Management701

10.4.1 Simplifying Linking701

10.4.2 Simplifying Sharing702

10.4.3 Simplifying Memory Allocation702

10.4.4 Simplifying Loading703

10.5 VM as a Tool for Memory Protection703

10.6 Address Translation704

10.6.1 Integrating Caches and VM707

10.6.2 Speeding up Address Translation with a TLB707

10.6.3 Multi-Level Page Tables709

10.6.4 Putting it Together: End-to-End Address Translation711

10.7 Case Study: The Pentium/Linux Memory System715

10.7.1 Pentium Address Translation716

10.7.2 Linux Virtual Memory System721

10.8 Memory Mapping724

10.8.1 Shared Objects Revisited725

10.8.2 The fork Function Revisited727

10.8.3 The execve Function Revisited727

10.8.4 User-Level Memory Mapping with the mmap Function728

10.9 Dynamic Memory Allocation730

10.9.1 The malloc and free Functions731

10.9.2 Why Dynamic Memory Allocation?733

10.9.3 Allocator Requirements and Goals735

10.9.4 Fragmentation736

10.9.5 Implementation Issues737

10.9.6 Implicit Free Lists737

10.9.7 Placing Allocated Blocks739

10.9.8 Splitting Free Blocks740

10.9.9 Getting Additional Heap Memory740

10.9.10 Coalescing Free Blocks741

10.9.11 Coalescing with Boundary Tags741

10.9.12 Putting it Together: Implementing a Simple Allocator744

10.9.13 Explicit Free Lists751

10.9.14 Segregated Free Lists752

10.10 Garbage Collection755

10.10.1 Garbage Collector Basics756

10.10.2 Mark&Sweep Garbage Collectors757

10.10.3 Conservative Mark&Sweep for C Programs758

10.11 Common Memory-Related Bugs in C Programs759

10.11.1 Dereferencing Bad Pointers759

10.11.2 Reading Uninitialized Memory760

10.11.3 Allowing Stack Buffer Overflows760

10.11.4 Assuming that Pointers and the Objects they Point to Are the Same Size761

10.11.5 Making Off-by-One Errors761

10.11.6 Referencing a Pointer Instead of the Object it Points to762

10.11.7 Misunderstanding Pointer Arithmetic762

10.11.8 Referencing Nonexistent Variables763

10.11.9 Referencing Data in Free Heap Blocks763

10.11.10 Introducing Memory Leaks764

10.12 Recapping Some Key Ideas About Virtual Memory764

10.13 Summary764

Bibliographic Notes765

Homework Problems766

Solutions to Practice Problems770

Part Ⅲ Interaction and Communication Between Programs776

11 System-Level I/O776

11.1 Unix I/O778

11.2 Opening and Closing Files779

11.3 Reading and Writing Files781

11.4 Robust Reading and Writing with the RIO Package783

11.4.1 RIO Unbuffered Input and Output Functions783

11.4.2 RIO Buffered Input Functions784

11.5 Reading File Metadata789

11.6 Sharing Files791

11.7 I/O Redirection793

11.8 Standard I/O795

11.9 Putting It Together: Which I/O Functions Should I Use?796

11.10 Summary797

Bibliographic Notes798

Homework Problems798

12 Network Programming800

12.1 The Client-Server Programming Model802

12.2 Networks803

12.3 The Global IP Internet807

12.3.1 IP Addresses809

12.3.2 Internet Domain Names811

12.3.3 Internet Connections815

12.4 The Sockets Interface816

12.4.1 Socket Address Structures817

12.4.2 The socket Function818

12.4.3 The connect Function818

12.4.4 The open_clientfd Function819

12.4.5 The bind Function819

12.4.6 The listen Function820

12.4.7 The open_listenfd Function821

12.4.8 The accept Function821

12.4.9 Example Echo Client and Server823

12.5 Web Servers826

12.5.1 Web Basics826

12.5.2 Web Content827

12.5.3 HTTP Transactions828

12.5.4 Serving Dynamic Content831

12.6 Putting it Together: The TINY Web Server834

12.7 Summary841

Bibliographic Notes842

Homework Problems842

Solutions to Practice Problems843

13 Concurrent Programming846

13.1 Concurrent Programming With Processes849

13.1.1 A Concurrent Server Based on Processes851

13.1.2 Pros and Cons of Processes851

13.2 Concurrent Programming With I/O Multiplexing853

13.2.1 A Concurrent Event-Driven Server Based on I/O Multiplexing856

13.2.2 Pros and Cons of I/O Multiplexing860

13.3 Concurrent Programming With Threads861

13.3.1 Thread Execution Model862

13.3.2 Posix Threads863

13.3.3 Creating Threads864

13.3.4 Terminating Threads864

13.3.5 Reaping Terminated Threads865

13.3.6 Detaching Threads865

13.3.7 Initializing Threads866

13.3.8 A Concurrent Server Based on Threads866

13.4 Shared Variables in Threaded Programs868

13.4.1 Threads Memory Model869

13.4.2 Mapping Variables to Memory870

13.4.3 Shared Variables870

13.5 Synchronizing Threads with Semaphores871

13.5.1 Progress Graphs874

13.5.2 Using Semaphores to Access Shared Variables877

13.5.3 Posix Semaphores878

13.5.4 Using Semaphores to Schedule Shared Resources879

13.6 Putting It Together: A Concurrent Server Based on Prethreading882

13.7 Other Concurrency Issues885

13.7.1 Thread Safety885

13.7.2 Reentrancy888

13.7.3 Using Existing Library Functions in Threaded Programs889

13.7.4 Races890

13.7.5 Deadlocks891

13.8 Summary894

Bibliographic Notes895

Homework Problems895

Solutions to Practice Problems899

A HCL Descriptions of Processor Control Logic905

B Error Handling925

Bibliography949

Index953
