Let's categorise the task environments! This video introduces you to the different types of AI environments.
Now, how can we categorize the task environment to solve AI problems? We can classify a task environment as fully observable / partially observable, single agent / multiple agents, deterministic / stochastic, episodic / sequential, static / dynamic, discrete / continuous, and known / unknown.
The first question we have to ask ourselves: can the agent access the complete state of the environment? If yes, it is a fully observable environment. In a virtual chess game, the agent can access the complete state of the environment. In many real-time strategy games, such as war games, the agent cannot see the whole map; there may be hidden enemies, so such an environment is not fully observable.
The next question: can the task be achieved by a single agent, or are other agents needed? For example, to solve a Rubik's Cube we only need one agent, but a chess game needs two agents. Multiple agents can be cooperative, helping each other, or competitive, working against each other. Next, is the environment deterministic or stochastic? We ask: does the next state of the environment depend only on the agent's action?
If the next state of the environment is completely determined by the current state and the action executed by the agent, then the environment is deterministic, as in chess. Real-world environments are typically stochastic: in a football game, unexpected things always happen, and a robot may fall, so that is a stochastic environment. The next question we have to ask: is the environment episodic or sequential? In an episodic environment, we perceive and then act, and each episode is independent. For example, if we receive an image, we can act on it and say whether it is an image of a cat; this is object recognition.
Then we can forget everything about that episode. In a sequential environment, by contrast, acting on the environment also changes our future decisions: in chess, our move influences future moves, and the same holds for automated cars. Now we ask ourselves: does the environment change while the agent deliberates (thinks)? By "thinks" I mean the agent does nothing. If the environment changes, it is dynamic. In a chess game, the environment stays the same while the agent is thinking, so it is static, while it is dynamic for an automated car. There is another option, semi-dynamic, in which the environment does not change while the agent is thinking, but the agent's score does.
To clarify, think about chess played with a clock: as time passes, the agent's performance score decreases. The next question: is the environment discrete or continuous? In a discrete environment we have a limited number of states; in chess there is always a limited number of moves. On the other hand, in a taxi-driving task (an automated car), the possible locations are unlimited. Finally, the last question: is the environment known or unknown? Are the rules, or the laws of physics, known to the agent or not? In chess, the agent knows the rules, while in some games it does not, and we have to figure out what to do.
The agent has no idea where to go and has to explore more.

To solve AI problems, we need to classify the task environment (the environment that surrounds the agent).

• Fully observable / Partially observable
• Single agent / Multiple agents
• Deterministic / Stochastic
• Episodic / Sequential
• Static / Dynamic
• Discrete / Continuous
• Known / Unknown

The video will explain all these terms using different examples.
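As a sketch, the seven dimensions above can be captured in a small data structure. The class and the example instances below are illustrative assumptions, not something defined in the video; the flag values follow the examples discussed (chess vs. taxi driving).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TaskEnvironment:
    """One flag per classification dimension of an agent's task environment."""
    fully_observable: bool   # vs. partially observable
    single_agent: bool       # vs. multiple agents
    deterministic: bool      # vs. stochastic
    episodic: bool           # vs. sequential
    static: bool             # vs. dynamic
    discrete: bool           # vs. continuous
    known: bool              # vs. unknown rules


# Chess (without a clock): fully observable, two agents, deterministic,
# sequential, static, discrete, and the rules are known to the agent.
chess = TaskEnvironment(
    fully_observable=True,
    single_agent=False,
    deterministic=True,
    episodic=False,
    static=True,
    discrete=True,
    known=True,
)

# Taxi driving / automated car: partially observable, multiple agents,
# stochastic, sequential, dynamic, continuous; the rules (traffic laws,
# physics) are known.
taxi_driving = TaskEnvironment(
    fully_observable=False,
    single_agent=False,
    deterministic=False,
    episodic=False,
    static=False,
    discrete=False,
    known=True,
)
```

Lining up environments this way makes it easy to compare them dimension by dimension, as the video does.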