

Ⅸ Game AI

1. Game AI Technologies

What is AI?

■ Artificial Intelligence (AI)

  The computer simulation of intelligent behavior

  What is Intelligence?

    • Behavior that exhibits great ability to adapt and solve complex problems

          ✓ Produce results that are completely unrealistic

    • Behavior that is close to that of human

          ✓ Humans are not always brilliant

 

Game AIs are not just “problem solving robots” but lifelike entities

 


Game AI Technologies

What is AI (artificial intelligence)?

- Artificial intelligence (AI): technology that makes a computer behave intelligently, like a person.

- Intelligence: the ability to adapt well to the environment and solve complex problems.

 

Roles of AI in games

- Opponents: enemies that fight the player.

- Teammates: companions that help the player.

- Strategic opponents: enemies that fight more intelligently.

- Support characters: characters that assist the player.

- Autonomous characters: characters that move on their own.

- Commentators: characters that describe the game situation.

- Camera control: deciding how the view moves.

- Plot/story guides: roles that drive the story forward.

 

Structure of an intelligent agent

Agent: a character in the game world.

1. Sensing: perceive the surrounding environment.

2. Thinking: decide what to do based on the current situation and goals.

3. Acting: carry out the decided action.

 

Goals of an AI game opponent

- Provide a challenging opponent: a reasonable challenge, not too difficult.

- Stay unpredictable: move in varied ways so the player cannot anticipate the behavior.

 

Simple AI behaviors

 

- Random motion: move at random.

- Simple patterns: follow a fixed path.

- Tracking: chase the player.

- Evasion: flee from the player.

 

Path finding

A* algorithm: finds the shortest path between two points.

 

Structure of an AI system

1. Sense: the AI perceives its surroundings.

2. Think: make decisions based on the sensed information.

3. Act: execute the chosen action.

 

Finite state machines (FSM)

FSM: a structure made up of states and transitions.

- Advantages: easy to understand and easy to code.

- Disadvantages: becomes complex as the number of states grows.

 

Decision trees

Decision tree: represents the choices to be made as a tree structure.

- Advantages: simple and easy to understand.

- Disadvantages: requires more coding and uses more CPU.

 

Rule-based systems

Rule system: decides behavior according to specific conditions.

- Advantages: expressive and modular, so it is easy to modify.

- Disadvantages: uses a lot of memory and CPU.

 

Planning

Planning: deciding a sequence of actions in advance to achieve a goal.

- Advantages: can produce behavior that is hard to predict.

- Disadvantages: requires a lot of processing time and memory.

 

Neural networks

Neural network: a structure that mimics the neurons of the brain.

- Advantages: handles errors well and can learn novel solutions.

- Disadvantages: learning takes a lot of time.

 

Biology-inspired AI

Genetic algorithms: mimic the process of biological evolution.

- Advantages: can find novel solutions and need no examples.

- Disadvantages: evolution requires a lot of processing time.

 

Challenges of game AI

- No realistic sensing models.

- Not enough time to plan ahead.

- Too many possible actions to choose from.

- A single evaluation function or goal makes behavior predictable.

- Hard to encode all relevant knowledge, and hard to make the AI learn.

Roles of AI in Games

Opponents
Teammates
Strategic Opponents
Support Characters
Autonomous Characters
Commentators
Camera Control
Plot and Story Guides/Directors

 

 

★ AI Entity (Test)

A simple test question: explain the types of AI entity.

Agent
  • A virtual character in the game world
  • Ex) enemies, NPCs, sidekicks, an animated cow in a field
  • These are structured in a way similar to our brain
Abstract Controller (Master Controller)
  • A structure quite similar to the agent, but each subsystem works on a higher level than an individual
  • Ex) strategy game → The entity that acts like the master controller of the CPU side of the battle

 

 

Goals of AI action game opponent

■ Provide a challenging opponent

  Not always as challenging as a human -- Quake monsters.
  What ways should it be subhuman?

■ Not too challenging.

  Should not be superhuman in accuracy, precision, sensing, ...

■ Should not be too predictable.

  Through randomness.
  Through multiple, fine-grained responses.
  Through adaptation and learning.

 

 

The most important part: the structure consists of three stages.

Exam question: describe the structure of a game AI.

★★ Structure of an Intelligent Agent (Test) questions ask for the order of the stages or the concept behind each one

Sensing: perceive features of the environment.

Thinking: decide what action to take to achieve its goals, given the current situation and its knowledge.

Acting: doing things in the world.

 - Thinking has to make up for limitations in sensing and acting.

 - The more accurate the models of sensing and acting, the more realistic the behavior.
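A minimal Python sketch of this sense-think-act loop (the class, method, and world-field names are illustrative assumptions, not from the slides):

```python
class Agent:
    """Illustrative skeleton of an intelligent agent: sense -> think -> act."""

    def sense(self, world):
        # SENSE: perceive features of the environment
        return {"player_pos": world.get("player_pos"), "my_pos": world.get("my_pos")}

    def think(self, percept):
        # THINK: decide what action to take, given the current situation and knowledge
        if percept["player_pos"] is not None:
            return ("chase", percept["player_pos"])
        return ("wander", None)

    def act(self, action, world):
        # ACT: do things in the world (here we simply record the chosen action)
        world["last_action"] = action

    def update(self, world):
        self.act(self.think(self.sense(world)), world)
```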

 

 

Perfect Agent: Unrealistic

Sensing: Have perfect information of opponent (knows everything about the player)

Thinking: Have enough time to do any calculation. 

 Know everything relevant about the world.

Action: Flawless, limitless action

 Teleport anywhere, anytime.

 

 

Simple Behavior

Random motion

  Just roll the dice to pick when and which direction to move

 

Simple pattern

  Follow invisible tracks: Galaxians (a case of pre-built movement patterns)

Tracking: a commonly used method; chase the target (see the sketch after this list)

  Pure Pursuit: move based on the agent's current position
   • Move toward the agent's current position
   • Ex) Head-seeking missile

  Lead Pursuit: chase by aiming at the agent's next position
   • Move to a position in front of the agent

  Collision: use straight lines to the meeting point and aim at where they intersect
   • Move toward where the agent will be

Evasion: the opposite of any tracking (flee from the pursuer instead of chasing)

Delay in sensing gives different effects
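A minimal sketch of these tracking and evasion behaviors on 2D positions (the function names and the lead-time parameter are assumptions for illustration):

```python
import math

def normalize(dx, dy):
    d = math.hypot(dx, dy) or 1.0
    return dx / d, dy / d

def pure_pursuit(my_pos, target_pos):
    # Pure pursuit: move toward the target's current position (e.g., a head-seeking missile).
    return normalize(target_pos[0] - my_pos[0], target_pos[1] - my_pos[1])

def lead_pursuit(my_pos, target_pos, target_vel, lead_time=1.0):
    # Lead pursuit: move toward a position in front of the target.
    future = (target_pos[0] + target_vel[0] * lead_time,
              target_pos[1] + target_vel[1] * lead_time)
    return pure_pursuit(my_pos, future)

def evade(my_pos, pursuer_pos):
    # Evasion: the opposite of tracking; flee from the pursuer.
    dx, dy = pure_pursuit(my_pos, pursuer_pos)
    return -dx, -dy
```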

 

 

 

Moving in the World: Path Following

Just try moving toward goal.

 

 

 

Problem

 

Create Avoidance Regions

A way to keep the agent from falling into an infinite loop: imagine driving in pegs and stretching a tight rubber band around them; the resulting region (green in the figure) is called the Convex Hull.

 

 

 

24.05.16.THU

 

Path Finding

■ A* Algorithm 

 - find a shortest-path between two points on a map

    • Starting with the base node

    • Expand nodes using the valid moves

    • Each expanded node is assigned a “score”

 

    • Iterate this process

      ✓ until these paths prove invalid or one of these paths reach the target

 

The overall score for a state: computed with weights by adding a measured cost and an estimated cost (a minimal A* sketch follows this list)

Manhattan distance: for horizontal/vertical offsets of 3 and 2, the distance is 5

Euclidean distance for the same offsets: √13

 - f(node) = g(node) + h(node)

    • f(node): the total score  (the sum of g and h)

    • g(node): the cost to get from the starting node to this node (measured cost)

      ✓ A component that accounts for our past behavior

    • h(node): the estimated cost to get from this node to the goal (estimated cost)

      ✓ A component that estimate our future behavior

      ✓ The Manhattan distance (4-connected game world)

           − Manhattan(p1, p2) = | p2.x – p1.x | + | p2.z – p1.z |

                 Ex) p1 (2,2), p2 (1,0) → h = 3

      ✓ Euclidean distance (8-connected)
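A minimal A* sketch on a 4-connected grid using the Manhattan heuristic (the grid representation and the walkable() callback are assumptions for illustration):

```python
import heapq

def manhattan(p1, p2):
    return abs(p2[0] - p1[0]) + abs(p2[1] - p1[1])

def a_star(start, goal, walkable):
    """Nodes are (x, z) tuples; walkable(node) -> bool says whether a move is valid."""
    open_list = [(manhattan(start, goal), 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path                          # shortest path found
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        x, z = node
        for nxt in ((x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)):  # valid moves
            if walkable(nxt):
                g2 = g + 1                        # g: measured cost from the start
                h2 = manhattan(nxt, goal)         # h: estimated cost to the goal
                heapq.heappush(open_list, (g2 + h2, g2, nxt, path + [nxt]))
    return None                                   # no valid path
```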

 

 

 

 

 

Game AI

- A* algorithm
    • Quite smart, but not very realistic
    • Need to keep a reasonable degree of imperfection built into the AI design
- A balance between generating
    • behavior that is both highly evolved and sophisticated
    • and behavior that is more or less human
 

Game AIs are not just “problem solving robots” but lifelike entities

 

Comparison between A* and a human path finder

 

 

 

Execution Flow of an AI Engine

This is the most important topic.
It is divided into three stages.
 

Structure of an AI System (Test: know the concept of each stage)

■ SENSE: Sensing the world (The slowest part of the AI)

All AIs need to be aware of their surroundings
• use that information in the reason/analysis phase
What is sensed, and how, depends largely on the type of game
• Ex) agent : an individual enemy in Quake 
  ✓ Where is the player and where is he looking?
  ✓ What is the geometry of the surroundings?
  ✓ Which weapons am I using and which is he using?
• Ex) master controller in a strategy game (Age of Empires)
  ✓ What is the balance of power in each subarea of the map?
  ✓ How much of each type of resource do I have?
  ✓ What is the breakdown of unit types: infantry, cavalry, …
  ✓ What is my status in terms of the technology tree?
  ✓ What is the geometry of the game world?

SENSE: Memory

Storing AI data is often complex
  ✓ The concepts being stored are not straightforward
  ✓ In an individual-level AI (less of a problem)
    • Store points and orientations
    • Use numeric values to depict the “state” the AI is in
  ✓ For a master controller
    • These data structures are nontrivial
    • Ex) how do we store the balance of power, or a path?
    • Case-by-case solutions

         

Can it remember prior events?

For how long?

How does it forget?

 

THINK: Analysis/Reasoning Core

Uses the sensory data and the memory
to analyze the current configuration
Make a decision
Can be slow or fast depending on the number of alternatives and the sensory data to evaluate
  ✓ Slow process → Chess playing
  ✓ Fast process → moving a character in Quake

 

ACT: Action/Output System

- Perform actions and responses
- Many games exaggerate the action system
    • Personality and perceived intelligence are enhanced
  ✓ The character’s intentions are obvious
  ✓ Personality is conveyed
    • Ex) Super Mario Bros (creatures → similar AIs)

 

 

 

 

Execution Flow of an AI Engine (Re)

 

 

2. GameAI (Finite State Machines)

Finite State Machines

Finite State Machines (FSMs)

- Also called deterministic finite automata (DFA)

    or state machine or automata

Definition
A set of states
  ✓ Represent the scenarios or configurations the AI can be immersed in
A set of transitions
  ✓ Conditions that connect two states in a directed way
 
Advantages
Intuitive to understand, Easy to code
Perform well, and represent a broad range of behaviors
 
Example
1. A dog is HUNGRY
2. If you give him a bone, he will not be hungry anymore
3. He’ll be QUIET after eating the bone
4. And he’ll become hungry after four hours of being quiet
  ✓ States (using circles) → 1 and 3
  ✓ Transitions (using lines) → 2 and 4 (see the sketch below)
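A minimal sketch of this dog FSM as a transition table (the state and event names follow the example above):

```python
# States: HUNGRY (1) and QUIET (3); transitions: give_bone (2) and four_hours_pass (4).
TRANSITIONS = {
    ("HUNGRY", "give_bone"): "QUIET",
    ("QUIET", "four_hours_pass"): "HUNGRY",
}

def next_state(state, event):
    # Stay in the current state if no transition matches the event.
    return TRANSITIONS.get((state, event), state)

state = "HUNGRY"
state = next_state(state, "give_bone")        # -> QUIET
state = next_state(state, "four_hours_pass")  # -> HUNGRY
```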
 
 

Example FSM (Test: identify the state in each diagram, fill in the blanks, or choose the correct state)

 

 

Example FSM - Better

 

 

 

 

Example FSM with Retreat

 

 

Hierarchical FSM

Expand a state into its own FSM

 

 

 

Non-Deterministic Hierarchical FSM (Markov Model)

 

 

 

 

FSM Evaluation

Advantages:

  ✓ Very fast: one array access

  ✓ Can be compiled into compact data structure

Dynamic memory: current state

Static memory: state diagram array implementation

  ✓ Can create tools so non-programmer can build behavior

  ✓ Non-deterministic FSM can make behavior unpredictable

Disadvantages:

  ✓ Number of states can grow very fast

Exponentially with number of events: s=2^e

  ✓ Number of arcs can grow even faster: a=s^2

  ✓ Hard to encode complex memories

 

 

 


 

3. Decision Trees

Classification Problems

Task:
  ✓ Classify “objects” as one of a discrete set of “categories”
Input: set of facts about the object to be classified
  ✓ Is today sunny, overcast, or rainy
  ✓ Is the temperature today hot, mild, or cold
  ✓ Is the humidity today high or normal
Output: the category this object fits into
  ✓ Should I play tennis today or not?
  ✓ Put today into the play-tennis category or the no-tennis category

 

 

Example Problem

Classify a day as a suitable day to play tennis
  ✓ Facts about any given day include:
Outlook = <Sunny, Overcast, Rain>
Temperature = <Hot, Mild, Cool>
Humidity = <High, Normal>
Wind = <Weak, Strong>
  ✓ Output categories include:
   • PlayTennis = Yes
   • PlayTennis = No
 

 

Outlook=Overcast, Temp=Mild, Humidity=Normal, Wind=Weak => PlayTennis=Yes

 

Outlook=Rain, Temp=Cool, Humidity=High, Wind=Strong => PlayTennis=No
Outlook=Sunny, Temp=Hot, Humidity=High, Wind=Weak => PlayTennis=No

 

Classifying with a Decision Tree (fill-in-the-blank test question)

 

 

 

Decision Trees

Nodes represent attribute tests
  ✓ One child for each possible value of the attribute

 

Leaves represent classifications
Classify by descending from root to a leaf

  ✓ At the root, apply the attribute test associated with the root node

  ✓ Descend the branch corresponding to the instance’s value

  ✓ Repeat for the subtree rooted at the new node
  ✓ When a leaf is reached, return the classification of that leaf

A decision tree is a disjunction of conjunctions of constraints on the attribute values of an instance (a minimal sketch follows)
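A minimal sketch of classifying with such a tree, assuming the classic PlayTennis tree structure (Outlook at the root, then Humidity or Wind); this assumed tree is consistent with the labeled days listed above, but the exact tree from the slides is not shown here:

```python
def classify_play_tennis(outlook, humidity, wind):
    # Descend from the root attribute test to a leaf and return its classification.
    if outlook == "Overcast":
        return "Yes"
    if outlook == "Sunny":
        return "Yes" if humidity == "Normal" else "No"
    if outlook == "Rain":
        return "Yes" if wind == "Weak" else "No"
    return "No"

print(classify_play_tennis("Overcast", "Normal", "Weak"))  # Yes
print(classify_play_tennis("Rain", "High", "Strong"))      # No
```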



 

Decision Tree for Quake (last year's exam left blanks, e.g., which event does state S correspond to?)

■ Input Sensors: E=<t,f>  L=<t,f>  S=<t,f>  D=<t,f>
■ Categories (actions): Attack, Retreat, Chase, Spawn, Wander

 

Test/Quiz: fill-in-the-blank questions on this tree have appeared often.

D: am I dead?  S: can I hear a sound even though nothing is visible?

An advantage of decision trees: they represent something complex in a simple way.

 

Decision Tree Evaluation

Advantages

  ✓ Simpler, more compact representation
  ✓ Easy to create and understand
Can also be represented as rules
  ✓ Decision trees can be learned
 

Disadvantages

  ✓ Decision tree engine requires more coding than FSM
  ✓ Need as many examples as possible
  ✓ Higher CPU cost
  ✓ Learned decision trees may contain errors

 

 

 


4. Rule-based Systems (Production Systems)

Rule Systems (Test: given the situation, e.g., an enemy farther than 10 m away, work out from the rule priorities which action is selected)

Finite State Machine

  ✓ Not easy to describe in terms of states and transitions

FSMs are well suited for behaviors that are
  ✓ Local in nature
    • While we are in a certain state, only a few outcomes are possible
  ✓ Sequential in nature
    • We carry out tasks after other tasks depending on certain conditions

 

Ex) virtual dog
If there’s a bone nearby and I’m hungry, I’ll eat it
If I’m hungry (but there is no bone around), I’ll wander
If I’m not hungry, but I’m sleepy, I will sleep
If I’m not hungry and not sleepy, I will bark and walk

 

 

Not local → all states can yield any other state

There are no visible sequences

Prioritized, global behavior → rule systems

 

Rule Systems

  ✓ Provide a global model of behavior
  ✓ Better suited when we need to model behavior that is based on guidelines
  ✓ A set of rules that drive our AI’s behavior

Condition → Action

    • LHS of the rule: condition
    • RHS of the rule: action
  ✓ Ex) virtual dog (rules nearer the top have higher priority and are evaluated in that order)
    • (Hungry) && (Bone nearby) → Eat it
    • (Hungry) && (No bone nearby) → Wander
    • (Not hungry) && (Sleepy) → Sleep
    • (Not hungry) && (Not sleepy) → Bark and walk

 

 

 

Complete Picture

 

Rule Systems

Ex) the AI for a soldier in a large squadron

If in contact with an enemy → combat

If an enemy is closer than 10 meters and I'm stronger than him → chase him

If an enemy is closer than 10 meters → escape from him

If we have a command from our leader pending → execute it

If a friendly soldier is fighting and I have a ranged weapon → shoot at the enemy

Stay still (TRUE → stay still)

 

Coding an RS for action games

A direct mapping of our rule set, in priority order, to an “if” tree  (If-then-else sentences)
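A sketch of the soldier rule set above mapped, in priority order, onto an if-tree (the situation fields are illustrative assumptions):

```python
def soldier_action(s):
    # Rules are checked top-down in priority order; the first matching condition fires.
    if s["in_contact_with_enemy"]:
        return "combat"
    if s["enemy_distance"] < 10 and s["stronger_than_enemy"]:
        return "chase him"
    if s["enemy_distance"] < 10:
        return "escape from him"
    if s["pending_leader_command"]:
        return "execute the command"
    if s["friendly_fighting"] and s["has_ranged_weapon"]:
        return "shoot at the enemy"
    return "stay still"   # default rule: TRUE -> stay still
```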

 

 

 

Rule-based System Evaluation

Advantages

Corresponds to way people often think of knowledge

Very expressive

Modular knowledge

Easy to write and debug compared to decision trees

More concise than FSM

Disadvantages

Can be memory intensive

Can be computationally intensive

Sometimes difficult to debug

 

 

5. Planning

Planning

FSMs and rule sets

Very useful to model simple behavior

More complex behaviors?  Ex) chess

Solve a puzzle

Decide the weakest spot in a battlefield to attack the enemy

Select the best route to attack N ground targets in a flight

Trace a path from A to B with obstacles in between

These problems require the following

Thinking in more complex terms than numbers

Planning long-term strategies

 
 

What is Planning?

Plan: sequence of actions to get from the current situation to a goal situation

Higher level mission planning

Path planning

Planning: generate a plan

Initial state: the state the agent starts in or is currently in

Goal test: is this state a goal state?

Operators: every action the agent can perform

Also need to know how the action changes the current state

 

 

Two Approaches

State-space search

Search through the possible future states that can be reached by applying different sequences of operators

Initial state = current state of the world

Operators = actions that modify the world state

Goal test = is this state a goal state?

Plan-space search

Search through possible plans by applying operators that modify plans

Initial state = empty plan (do nothing)

Operators = add an action, remove an action, rearrange actions

Goal test = does this plan achieve the goal?
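A minimal sketch of the state-space approach as a breadth-first search over action sequences (the operator and goal-test interfaces are assumptions for illustration):

```python
from collections import deque

def plan(initial_state, operators, is_goal, max_depth=3):
    """operators: list of (name, apply) pairs, where apply(state) returns the new state
    or None if the action is not applicable. Returns a list of action names or None."""
    queue = deque([(initial_state, [])])
    while queue:
        state, actions = queue.popleft()
        if is_goal(state):
            return actions                       # plan: sequence of actions to the goal
        if len(actions) >= max_depth:
            continue                             # bounded look-ahead (e.g., two-step)
        for name, apply_op in operators:
            new_state = apply_op(state)
            if new_state is not None:
                queue.append((new_state, actions + [name]))
    return None
```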

 

Two Approaches

 

 

 

 

Planning and Problem Solving  

■ State-Space Search

✓ Exploring candidate transitions and analyzing their suitability for reaching our goals

✓ Search the state-space to find ways to move from the initial conditions to our goals


 

 

 

 

 

 

 

 

This part appears frequently on exams; Planning was weighted heavily.

Goal: reduce the opponent's health to 0 and raise my own health to 100.

Assuming the enemy is always shooting at me, compare the scores of picking up, shooting, and picking up again, and compute which is higher. The two-step look-ahead is a frequent exam question; with two steps the cases split into pick-up-then-shoot, shoot-then-pick-up, and so on.

What should I do? 

 

 

With a two-step look-ahead, which choice is best?

One Step: Pickup Railgun

One Step: Shoot

One Step: Pickup Health-pak

Two Step

Choose the option with the highest health score (the two-step case is the usual exam question).

An exam question asked what happens, depending on the state, after a two-step look-ahead; it was multiple choice, one of eight options.

 

Three Step Look-ahead (too complex; not asked on the exam)

Opponent: New problems

Planning Evaluation

Advantages

Less predictable behavior

Can handle unexpected situations

■  Disadvantages

Planning takes processor time

Planning takes memory

Need simple but accurate internal representations

 

 

6. Neural Network

Neurons

■ Mimic natural intelligence

✓ Networks of simple neurons

✓ Highly interconnected

✓ Adjustable weights on connections

✓ Learn rather than program

■ Each neuron has:

✓ Inputs/activation from other neurons (aj)

✓ Weights of input (Wi,j) [-1, +1]

✓ Output to other neurons (ai)

 

 

 

■ A single neuron can represent AND, OR, and NOT, but not XOR

✓ Combinations of neurons are more powerful

■ Neurons are usually organized as layers

✓ Input layer: takes external input

✓ Hidden layer(s)

✓ Output layer: external output

 

 

An exam question used an action-game example: given these inputs, what action comes out?

 

Neural Network for Quake

■ Four input neurons

✓ One input for each condition

■ Two neuron hidden layer

✓ Fully connected

✓ Forces generalization

■ Five output neurons

✓ One output for each action

✓ Choose action with highest output

✓ Probabilistic action selection
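A sketch of one forward pass through this 4-2-5 network, assuming sigmoid activations and fully connected layers (the weights here are random placeholders, not trained values):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One fully connected layer: each neuron applies a sigmoid to its weighted input sum.
    return [sigmoid(sum(w * a for w, a in zip(ws, inputs))) for ws in weights]

# 4 inputs (E, L, S, D), 2 hidden neurons, 5 outputs (Attack, Retreat, Chase, Spawn, Wander)
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
w_output = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(5)]

inputs = [1.0, 0.0, 1.0, 0.0]                 # e.g., E=t, L=f, S=t, D=f
outputs = layer(layer(inputs, w_hidden), w_output)
actions = ["Attack", "Retreat", "Chase", "Spawn", "Wander"]
print(actions[outputs.index(max(outputs))])   # choose the action with the highest output
```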

 

 

 

Back Propagation

Learning from examples

Examples consist of input (t) and correct output (o)

Learn if network’s output doesn’t match correct output

Adjust weights to reduce difference

Only change weights a small amount (η)

Basic neuron learning

Let f be the actual output for input t

Wi,j = Wi,j + ΔWi,j = Wi,j + η(o-f)aj

If the output f is too high, (o-f) is negative, so Wi,j will be reduced

If the output f is too low, (o-f) is positive, so Wi,j will be increased (see the sketch after this list)

If aj is negative the opposite happens
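A minimal sketch of this basic weight update (o is the correct output, f the actual output, aj the inputs, and η the learning rate, as defined above):

```python
def update_weights(weights, inputs, f, o, eta=0.1):
    """w_ij <- w_ij + eta * (o - f) * a_j
    If the output f is too high, (o - f) is negative and the weights shrink;
    if f is too low, (o - f) is positive and the weights grow."""
    return [w + eta * (o - f) * a for w, a in zip(weights, inputs)]
```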

 

Neural Networks Evaluation

Advantages

Handle errors well

Graceful degradation

Can learn novel solutions

Disadvantages

Can’t understand how or why the learned network works

Examples must match real problems

Need as many examples as possible

Learning takes lots of processing

 

 

7. Biology-Inspired AI

Biology-Inspired AI

■ Biology-Inspired AI

✓ Previous AI Algorithms

  • Not biological concepts but computational structures

✓ Rooted not in computer science but in biology and neurology

 

■ Genetic Programming

✓ Inspired by genetic theory (by Mendel)

✓ and evolution theory (by Charles Darwin)

  • The fittest survive

✓ Each generation can create a new generation

  • Crossover

  ✓ The offspring's DNA code will consist of a combination of the parents' respective DNA codes

  • Mutation

  ✓ A minor degree of mutation

✓ A fitness function takes an individual and determines how well it fulfills the desired goal

 

Genetic Operators (Test)

Crossover (crossbreeding)

Select two points at random

Swap genes between two points

 

■ Mutate (mutation): parts inherited from the parents are altered at random (a sketch of both operators follows this list)

✓ Small probability of randomly changing each part of a gene
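A minimal sketch of these two genetic operators on genes stored as flat lists (the representation and mutation rate are assumptions for illustration):

```python
import random

def crossover(parent_a, parent_b):
    # Select two points at random and swap the genes between those points.
    i, j = sorted(random.sample(range(len(parent_a)), 2))
    return parent_a[:i] + parent_b[i:j] + parent_a[j:]

def mutate(genes, rate=0.01):
    # Small probability of randomly changing each part of a gene.
    return [random.uniform(-1, 1) if random.random() < rate else g for g in genes]
```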

 
 

 

Genetic Algorithm Evaluation

■ Advantages

Powerful optimization technique

Can learn novel solutions

No examples required to learn

 

■ Disadvantages

Fitness function must be carefully chosen

Evolution takes lots of processing

  • Can’t really run a GA during game play

Solutions may or may not be understandable

 

Example) Flappy Bird
using Neural Network and Genetic Algorithm

Artificial Neural Network

1) an input layer with 2 neurons representing what a bird sees:

   • horizontal distance to the closest gap

   • height difference to the closest gap

2) a hidden layer with 6 neurons

3) an output layer with 1 neuron which provides an action as follows:

   • if output > 0.5 then flap else do nothing

 

Genetic Algorithm

1) Create initial population of 10 units(birds) with random neural networks

2) let all units play the game simultaneously by using their own neural networks

3) For each unit calculate its fitness function to measure its quality

 

Genetic Algorithm

4) when all units have died, evolve the current population into the next one by using genetic operators (Replacement Strategy)

  1. sort the units of the current population by their fitness ranking

  2. select the top 4 units (winners) and pass them directly on to the next population

  3. create 1 offspring as a crossover product of two best winners

  4. create 3 offspring as crossover products of two random winners

  5. create 2 offspring as direct copies of two random winners

  6. apply random mutations on each offspring to add some variations

5) go back to step 2 (a sketch of this replacement strategy follows)
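A sketch of this replacement strategy, assuming each unit is a dict with a fitness value and a flat list of network weights; crossover() and mutate() refer to the genetic-operators sketch earlier in these notes:

```python
import random  # crossover() and mutate() are defined in the earlier sketch

def next_population(units):
    ranked = sorted(units, key=lambda u: u["fitness"], reverse=True)  # 1. sort by fitness
    winners = ranked[:4]                                              # 2. top 4 pass on directly
    offspring = [crossover(winners[0]["genes"], winners[1]["genes"])] # 3. best two winners
    for _ in range(3):                                                # 4. two random winners, x3
        a, b = random.sample(winners, 2)
        offspring.append(crossover(a["genes"], b["genes"]))
    for _ in range(2):                                                # 5. direct copies, x2
        offspring.append(list(random.choice(winners)["genes"]))
    offspring = [mutate(genes) for genes in offspring]                # 6. random mutations
    return [list(w["genes"]) for w in winners] + offspring            # 10 gene lists -> step 2
```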

 

Why aren't AIs better?

■ Don’t have realistic models of sensing.

■ Not enough processing time to plan ahead.

■ Space of possible actions too large to search efficiently (too high of branching factor).

■ Single evaluation function or predefined subgoals makes them predictable.

  ✓ Only have one way of doing things.

■ Too hard to encode all the relevant knowledge.

■ Too hard to get them to learn.