Book Introduction

Artificial Intelligence: A Modern Approach, 2nd Edition | PDF | EPUB | MOBI | Kindle e-book editions | Baidu Cloud download

Artificial Intelligence: A Modern Approach, 2nd Edition
  • Authors: Russell, S.; Norvig, P.
  • Publisher: Tsinghua University Press, Beijing
  • ISBN: 7302128294
  • Publication year: 2006
  • Listed page count: 1110 pages
  • File size: 126 MB
  • File page count: 40186573 pages
  • Subject headings: Artificial intelligence - Higher education - Textbook - English

PDF Download


Click here to download this book's PDF e-book online [recommended: cloud extraction, fast and convenient]. Direct download of the PDF edition; works on both mobile and desktop.
Torrent download [fast via BitTorrent]. Tip: please use the BT download client FDM (see the software download page). Direct-link download [convenient but slow]  [Read this book online]  [Get the extraction password online]

Download Instructions

Artificial Intelligence: A Modern Approach, 2nd Edition - PDF e-book download

The downloaded file is a RAR archive. Use extraction software to unpack it and obtain the PDF.

We recommend downloading with the BitTorrent client Free Download Manager (FDM), which is free, ad-free, and cross-platform. All resources on this site are packaged as BT torrents, so a dedicated BT client such as BitComet, qBittorrent, or uTorrent is required. Xunlei is not recommended at the moment because this site's resources are not popular enough for it; once a resource becomes popular, Xunlei will also work for downloading.

(The file page count should be greater than the listed page count, except for multi-volume e-books.)

Note: all archives on this site require an extraction password. Click here to download an archive extraction tool.

Table of Contents

Ⅰ Artificial Intelligence1

1 Introduction1

1.1 What is AI?1

Acting humanly: The Turing Test approach2

Thinking humanly: The cognitive modeling approach3

Thinking rationally: The “laws of thought” approach4

Acting rationally: The rational agent approach4

1.2 The Foundations of Artificial Intelligence5

Philosophy (428 B.C.-present)5

Mathematics (c.800-present)7

Economics (1776-present)9

Neuroscience (1861-present)10

Psychology (1879-present)12

Computer engineering (1940-present)14

Control theory and Cybernetics (1948-present)15

Linguistics (1957-present)16

1.3 The History of Artificial Intelligence16

The gestation of artificial intelligence (1943-1955)16

The birth of artificial intelligence (1956)17

Early enthusiasm, great expectations (1952-1969)18

A dose of reality (1966-1973)21

Knowledge-based systems: The key to power? (1969-1979)22

AI becomes an industry (1980-present)24

The return of neural networks (1986-present)25

AI becomes a science (1987-present)25

The emergence of intelligent agents (1995-present)27

1.4 The State of the Art27

1.5 Summary28

Bibliographical and Historical Notes29

Exercises30

2 Intelligent Agents32

2.1 Agents and Environments32

2.2 Good Behavior: The Concept of Rationality34

Performance measures35

Rationality35

Omniscience, learning, and autonomy36

2.3 The Nature of Environments38

Specifying the task environment38

Properties of task environments40

2.4 The Structure of Agents44

Agent programs44

Simple reflex agents46

Model-based reflex agents48

Goal-based agents49

Utility-based agents51

Learning agents51

2.5 Summary54

Bibliographical and Historical Notes55

Exercises56

Ⅱ Problem-solving59

3 Solving Problems by Searching59

3.1 Problem-Solving Agents59

Well-defined problems and solutions62

Formulating problems62

3.2 Example Problems64

Toy problems64

Real-world problems67

3.3 Searching for Solutions69

Measuring problem-solving performance71

3.4 Uninformed Search Strategies73

Breadth-first search73

Depth-first search75

Depth-limited search77

Iterative deepening depth-first search78

Bidirectional search79

Comparing uninformed search strategies81

3.5 Avoiding Repeated States81

3.6 Searching with Partial Information83

Sensorless problems84

Contingency problems86

3.7 Summary87

Bibliographical and Historical Notes88

Exercises89

4 Informed Search and Exploration94

4.1 Informed (Heuristic) Search Strategies94

Greedy best-first search95

A* search: Minimizing the total estimated solution cost97

Memory-bounded heuristic search101

Learning to search better104

4.2 Heuristic Functions105

The effect of heuristic accuracy on performance106

Inventing admissible heuristic functions107

Learning heuristics from experience109

4.3 Local Search Algorithms and Optimization Problems110

Hill-climbing search111

Simulated annealing search115

Local beam search115

Genetic algorithms116

4.4 Local Search in Continuous Spaces119

4.5 Online Search Agents and Unknown Environments122

Online search problems123

Online search agents125

Online local search126

Learning in online search127

4.6 Summary129

Bibliographical and Historical Notes130

Exercises134

5 Constraint Satisfaction Problems137

5.1 Constraint Satisfaction Problems137

5.2 Backtracking Search for CSPs141

Variable and value ordering143

Propagating information through constraints144

Intelligent backtracking: looking backward148

5.3 Local Search for Constraint Satisfaction Problems150

5.4 The Structure of Problems151

5.5 Summary155

Bibliographical and Historical Notes156

Exercises158

6 Adversarial Search161

6.1 Games161

6.2 Optimal Decisions in Games162

Optimal strategies163

The minimax algorithm165

Optimal decisions in multiplayer games165

6.3 Alpha-Beta Pruning167

6.4 Imperfect, Real-Time Decisions171

Evaluation functions171

Cutting off search173

6.5 Games That Include an Element of Chance175

Position evaluation in games with chance nodes177

Complexity of expectiminimax177

Card games179

6.6 State-of-the-Art Game Programs180

6.7 Discussion183

6.8 Summary185

Bibliographical and Historical Notes186

Exercises189

Ⅲ Knowledge and reasoning194

7 Logical Agents194

7.1 Knowledge-Based Agents195

7.2 The Wumpus World197

7.3 Logic200

7.4 Propositional Logic: A Very Simple Logic204

Syntax204

Semantics206

A simple knowledge base208

Inference208

Equivalence, validity, and satisfiability210

7.5 Reasoning Patterns in Propositional Logic211

Resolution213

Forward and backward chaining217

7.6 Effective propositional inference220

A complete backtracking algorithm221

Local-search algorithms222

Hard satisfiability problems224

7.7 Agents Based on Propositional Logic225

Finding pits and wumpuses using logical inference225

Keeping track of location and orientation227

Circuit-based agents227

A comparison231

7.8 Summary232

Bibliographical and Historical Notes233

Exercises236

8 First-Order Logic240

8.1 Representation Revisited240

8.2 Syntax and Semantics of First-Order Logic245

Models for first-order logic245

Symbols and interpretations246

Terms248

Atomic sentences248

Complex sentences249

Quantifiers249

Equality253

8.3 Using First-Order Logic253

Assertions and queries in first-order logic253

The kinship domain254

Numbers, sets, and lists256

The wumpus world258

8.4 Knowledge Engineering in First-Order Logic260

The knowledge engineering process261

The electronic circuits domain262

8.5 Summary266

Bibliographical and Historical Notes267

Exercises268

9 Inference in First-Order Logic272

9.1 Propositional vs. First-Order Inference272

Inference rules for quantifiers273

Reduction to propositional inference274

9.2 Unification and Lifting275

A first-order inference rule275

Unification276

Storage and retrieval278

9.3 Forward Chaining280

First-order definite clauses280

A simple forward-chaining algorithm281

Efficient forward chaining283

9.4 Backward Chaining287

A backward chaining algorithm287

Logic programming289

Efficient implementation of logic programs290

Redundant inference and infinite loops292

Constraint logic programming294

9.5 Resolution295

Conjunctive normal form for first-order logic295

The resolution inference rule297

Example proofs297

Completeness of resolution300

Dealing with equality303

Resolution strategies304

Theorem provers306

9.6 Summary310

Bibliographical and Historical Notes310

Exercises315

10 Knowledge Representation320

10.1 Ontological Engineering320

10.2 Categories and Objects322

Physical composition324

Measurements325

Substances and objects327

10.3 Actions, Situations, and Events328

The ontology of situation calculus329

Describing actions in situation calculus330

Solving the representational frame problem332

Solving the inferential frame problem333

Time and event calculus334

Generalized events335

Processes337

Intervals338

Fluents and objects339

10.4 Mental Events and Mental Objects341

A formal theory of beliefs341

Knowledge and belief343

Knowledge, time, and action344

10.5 The Internet Shopping World344

Comparing offers348

10.6 Reasoning Systems for Categories349

Semantic networks350

Description logics353

10.7 Reasoning with Default Information354

Open and closed worlds354

Negation as failure and stable model semantics356

Circumscription and default logic358

10.8 Truth Maintenance Systems360

10.9 Summary362

Bibliographical and Historical Notes363

Exercises369

Ⅳ Planning375

11 Planning375

11.1 The Planning Problem375

The language of planning problems377

Expressiveness and extensions378

Example: Air cargo transport380

Example: The spare tire problem381

Example: The blocks world381

11.2 Planning with State-Space Search382

Forward state-space search382

Backward state-space search384

Heuristics for state-space search386

11.3 Partial-Order Planning387

A partial-order planning example391

Partial-order planning with unbound variables393

Heuristics for partial-order planning394

11.4 Planning Graphs395

Planning graphs for heuristic estimation397

The GRAPHPLAN algorithm398

Termination of GRAPHPLAN401

11.5 Planning with Propositional Logic402

Describing planning problems in propositional logic402

Complexity of propositional encodings405

11.6 Analysis of Planning Approaches407

11.7 Summary408

Bibliographical and Historical Notes409

Exercises412

12 Planning and Acting in the Real World417

12.1 Time, Schedules, and Resources417

Scheduling with resource constraints420

12.2 Hierarchical Task Network Planning422

Representing action decompositions423

Modifying the planner for decompositions425

Discussion427

12.3 Planning and Acting in Nondeterministic Domains430

12.4 Conditional Planning433

Conditional planning in fully observable environments433

Conditional planning in partially observable environments437

12.5 Execution Monitoring and Replanning441

12.6 Continuous Planning445

12.7 Multiagent Planning449

Cooperation: Joint goals and plans450

Multibody planning451

Coordination mechanisms452

Competition454

12.8 Summary454

Bibliographical and Historical Notes455

Exercises459

Ⅴ Uncertain knowledge and reasoning462

13 Uncertainty462

13.1 Acting under Uncertainty462

Handling uncertain knowledge463

Uncertainty and rational decisions465

Design for a decision-theoretic agent466

13.2 Basic Probability Notation466

Propositions467

Atomic events468

Prior probability468

Conditional probability470

13.3 The Axioms of Probability471

Using the axioms of probability473

Why the axioms of probability are reasonable473

13.4 Inference Using Full Joint Distributions475

13.5 Independence477

13.6 Bayes' Rule and Its Use479

Applying Bayes' rule: The simple case480

Using Bayes' rule: Combining evidence481

13.7 The Wumpus World Revisited483

13.8 Summary486

Bibliographical and Historical Notes487

Exercises489

14 Probabilistic Reasoning492

14.1 Representing Knowledge in an Uncertain Domain492

14.2 The Semantics of Bayesian Networks495

Representing the full joint distribution495

Conditional independence relations in Bayesian networks499

14.3 Efficient Representation of Conditional Distributions500

14.4 Exact Inference in Bayesian Networks504

Inference by enumeration504

The variable elimination algorithm507

The complexity of exact inference509

Clustering algorithms510

14.5 Approximate Inference in Bayesian Networks511

Direct sampling methods511

Inference by Markov chain simulation516

14.6 Extending Probability to First-Order Representations519

14.7 Other Approaches to Uncertain Reasoning523

Rule-based methods for uncertain reasoning524

Representing ignorance: Dempster-Shafer theory525

Representing vagueness: Fuzzy sets and fuzzy logic526

14.8 Summary528

Bibliographical and Historical Notes528

Exercises533

15 Probabilistic Reasoning over Time537

15.1 Time and Uncertainty537

States and observations538

Stationary processes and the Markov assumption538

15.2 Inference in Temporal Models541

Filtering and prediction542

Smoothing544

Finding the most likely sequence547

15.3 Hidden Markov Models549

Simplified matrix algorithms549

15.4 Kalman Filters551

Updating Gaussian distributions553

A simple one-dimensional example554

The general case556

Applicability of Kalman filtering557

15.5 Dynamic Bayesian Networks559

Constructing DBNs560

Exact inference in DBNs563

Approximate inference in DBNs565

15.6 Speech Recognition568

Speech sounds570

Words572

Sentences574

Building a speech recognizer576

15.7 Summary578

Bibliographical and Historical Notes578

Exercises581

16 Making Simple Decisions584

16.1 Combining Beliefs and Desires under Uncertainty584

16.2 The Basis of Utility Theory586

Constraints on rational preferences586

And then there was Utility588

16.3 Utility Functions589

The utility of money589

Utility scales and utility assessment591

16.4 Multiattribute Utility Functions593

Dominance594

Preference structure and multiattribute utility596

16.5 Decision Networks597

Representing a decision problem with a decision network598

Evaluating decision networks599

16.6 The Value of Information600

A simple example600

A general formula601

Properties of the value of information602

Implementing an information-gathering agent603

16.7 Decision-Theoretic Expert Systems604

16.8 Summary607

Bibliographical and Historical Notes607

Exercises609

17 Making Complex Decisions613

17.1 Sequential Decision Problems613

An example613

Optimality in sequential decision problems616

17.2 Value Iteration618

Utilities of states619

The value iteration algorithm620

Convergence of value iteration620

17.3 Policy Iteration624

17.4 Partially observable MDPs625

17.5 Decision-Theoretic Agents629

17.6 Decisions with Multiple Agents: Game Theory631

17.7 Mechanism Design640

17.8 Summary643

Bibliographical and Historical Notes644

Exercises646

Ⅵ Learning649

18 Learning from Observations649

18.1 Forms of Learning649

18.2 Inductive Learning651

18.3 Learning Decision Trees653

Decision trees as performance elements653

Expressiveness of decision trees655

Inducing decision trees from examples655

Choosing attribute tests659

Assessing the performance of the learning algorithm660

Noise and overfitting661

Broadening the applicability of decision trees663

18.4 Ensemble Learning664

18.5 Why Learning Works: Computational Learning Theory668

How many examples are needed?669

Learning decision lists670

Discussion672

18.6 Summary673

Bibliographical and Historical Notes674

Exercises676

19 Knowledge in Learning678

19.1 A Logical Formulation of Learning678

Examples and hypotheses678

Current-best-hypothesis search680

Least-commitment search683

19.2 Knowledge in Learning686

Some simple examples687

Some general schemes688

19.3 Explanation-Based Learning690

Extracting general rules from examples691

Improving efficiency693

19.4 Learning Using Relevance Information694

Determining the hypothesis space695

Learning and using relevance information695

19.5 Inductive Logic Programming697

An example699

Top-down inductive learning methods701

Inductive learning with inverse deduction703

Making discoveries with inductive logic programming705

19.6 Summary707

Bibliographical and Historical Notes708

Exercises710

20 Statistical Learning Methods712

20.1 Statistical Learning712

20.2 Learning with Complete Data716

Maximum-likelihood parameter learning: Discrete models716

Naive Bayes models718

Maximum-likelihood parameter learning: Continuous models719

Bayesian parameter learning720

Learning Bayes net structures722

20.3 Learning with Hidden Variables: The EM Algorithm724

Unsupervised clustering: Learning mixtures of Gaussians725

Learning Bayesian networks with hidden variables727

Learning hidden Markov models731

The general form of the EM algorithm731

Learning Bayes net structures with hidden variables732

20.4 Instance-Based Learning733

Nearest-neighbor models733

Kernel models735

20.5 Neural Networks736

Units in neural networks737

Network structures738

Single layer feed-forward neural networks (perceptrons)740

Multilayer feed-forward neural networks744

Learning neural network structures748

20.6 Kernel Machines749

20.7 Case Study: Handwritten Digit Recognition752

20.8 Summary754

Bibliographical and Historical Notes755

Exercises759

21 Reinforcement Learning763

21.1 Introduction763

21.2 Passive Reinforcement Learning765

Direct utility estimation766

Adaptive dynamic programming767

Temporal difference learning767

21.3 Active Reinforcement Learning771

Exploration771

Learning an action-value function775

21.4 Generalization in Reinforcement Learning777

Applications to game-playing780

Application to robot control780

21.5 Policy Search781

21.6 Summary784

Bibliographical and Historical Notes785

Exercises788

Ⅶ Communicating, perceiving, and acting790

22 Communication790

22.1 Communication as Action790

Fundamentals of language791

The component steps of communication792

22.2 A Formal Grammar for a Fragment of English795

The Lexicon of ℰ0795

The Grammar of ℰ0796

22.3 Syntactic Analysis (Parsing)798

Efficient parsing800

22.4 Augmented Grammars806

Verb subcategorization808

Generative capacity of augmented grammars809

22.5 Semantic Interpretation810

The semantics of an English fragment811

Time and tense812

Quantification813

Pragmatic Interpretation815

Language generation with DCGs817

22.6 Ambiguity and Disambiguation818

Disambiguation820

22.7 Discourse Understanding821

Reference resolution821

The structure of coherent discourse823

22.8 Grammar Induction824

22.9 Summary826

Bibliographical and Historical Notes827

Exercises831

23 Probabilistic Language Processing834

23.1 Probabilistic Language Models834

Probabilistic context-free grammars836

Learning probabilities for PCFGs839

Learning rule structure for PCFGs840

23.2 Information Retrieval840

Evaluating IR systems842

IR refinements844

Presentation of result sets845

Implementing IR systems846

23.3 Information Extraction848

23.4 Machine Translation850

Machine translation systems852

Statistical machine translation853

Learning probabilities for machine translation856

23.5 Summary857

Bibliographical and Historical Notes858

Exercises861

24 Perception863

24.1 Introduction863

24.2 Image Formation865

Images without lenses: the pinhole camera865

Lens systems866

Light: the photometry of image formation867

Color: the spectrophotometry of image formation868

24.3 Early Image Processing Operations869

Edge detection870

Image segmentation872

24.4 Extracting Three-Dimensional Information873

Motion875

Binocular stereopsis876

Texture gradients879

Shading880

Contour881

24.5 Object Recognition885

Brightness-based recognition887

Feature-based recognition888

Pose Estimation890

24.6 Using Vision for Manipulation and Navigation892

24.7 Summary894

Bibliographical and Historical Notes895

Exercises898

25 Robotics901

25.1 Introduction901

25.2 Robot Hardware903

Sensors903

Effectors904

25.3 Robotic Perception907

Localization908

Mapping913

Other types of perception915

25.4 Planning to Move916

Configuration space916

Cell decomposition methods919

Skeletonization methods922

25.5 Planning uncertain movements923

Robust methods924

25.6 Moving926

Dynamics and control927

Potential field control929

Reactive control930

25.7 Robotic Software Architectures932

Subsumption architecture932

Three-layer architecture933

Robotic programming languages934

25.8 Application Domains935

25.9 Summary938

Bibliographical and Historical Notes939

Exercises942

Ⅷ Conclusions947

26 Philosophical Foundations947

26.1 Weak AI: Can Machines Act Intelligently?947

The argument from disability948

The mathematical objection949

The argument from informality950

26.2 Strong AI: Can Machines Really Think?952

The mind-body problem954

The “brain in a vat” experiment955

The brain prosthesis experiment956

The Chinese room958

26.3 The Ethics and Risks of Developing Artificial Intelligence960

26.4 Summary964

Bibliographical and Historical Notes964

Exercises967

27 AI: Present and Future968

27.1 Agent Components968

27.2 Agent Architectures970

27.3 Are We Going in the Right Direction?972

27.4 What if AI Does Succeed?974

A Mathematical background977

A.1 Complexity Analysis and O() Notation977

Asymptotic analysis977

NP and inherently hard problems978

A.2 Vectors, Matrices, and Linear Algebra979

A.3 Probability Distributions981

Bibliographical and Historical Notes983

B Notes on Languages and Algorithms984

B.1 Defining Languages with Backus-Naur Form (BNF)984

B.2 Describing Algorithms with Pseudocode985

B.3 Online Help985

Bibliography987

Index1045
