The only structural difference between a graph and a tree is the possibility of cycles: a graph may contain cycles, a tree cannot. So when you implement a search algorithm on a tree, you don't need to consider the existence of cycles, but when working with an arbitrary graph, you do. If you don't handle the cycles, the algorithm may eventually fall into an infinite loop or endless recursion.
Another point to consider is the directionality of the graph you're dealing with. In most cases we deal with trees whose edges represent parent-child relationships. A DAG (directed acyclic graph) shows similar characteristics. But undirected (bi-directional) graphs are different: each edge in an undirected graph simply connects two neighbors. So the algorithmic approach should differ a bit between these two types of graphs.
A tree is a special case of a graph, so whatever works for general graphs works for trees. A tree is a graph where there is precisely one path between each pair of nodes. This implies that it does not contain any cycles, as a previous answer states, but a directed graph without cycles (a DAG, directed acyclic graph) is not necessarily a tree.
However, if you know that your graph has some restrictions, e.g. that it is a tree or a DAG, you can usually find some more efficient search algorithm than for an unrestricted graph. For example, it probably does not make much sense to use A*, or its non-heuristic counterpart “Dijkstra's algorithm”, on a tree (where there is only one path to choose anyway, which you can find by DFS or BFS) or on a DAG (where an optimal path can be found by considering vertices in the order obtained by topological sorting).
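As a small illustration of the DAG case, here is a minimal Python sketch (the graph, its edge costs, and all names are made up for illustration) that computes shortest path costs by relaxing edges in topological order, using the standard-library graphlib module:

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical weighted DAG as an adjacency dict: node -> {successor: edge cost}.
dag = {
    "a": {"b": 1, "c": 4},
    "b": {"c": 2, "d": 6},
    "c": {"d": 3},
    "d": {},
}

def dag_shortest_costs(graph, start):
    """Shortest path costs from start, found by relaxing edges in topological order."""
    # TopologicalSorter expects predecessor sets, so invert the adjacency dict.
    preds = {u: set() for u in graph}
    for u, nbrs in graph.items():
        for v in nbrs:
            preds[v].add(u)
    dist = {u: float("inf") for u in graph}
    dist[start] = 0
    for u in TopologicalSorter(preds).static_order():
        if dist[u] == float("inf"):
            continue  # unreachable from start
        for v, w in graph[u].items():
            dist[v] = min(dist[v], dist[u] + w)
    return dist

print(dag_shortest_costs(dag, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}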
As for directed vs. undirected, an undirected graph is a special case of a directed one, namely the case that follows the rule “if there is an edge (link, transition) from u to v, there is also an edge from v to u”.
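A tiny sketch of that rule in code (the names are illustrative): an undirected graph can simply be stored as a directed adjacency structure in which every edge is mirrored.

from collections import defaultdict

def add_undirected_edge(adj, u, v):
    """Store an undirected edge as two directed edges: u -> v and v -> u."""
    adj[u].add(v)
    adj[v].add(u)

adj = defaultdict(set)
add_undirected_edge(adj, "u", "v")
assert "v" in adj["u"] and "u" in adj["v"]  # both directions are present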
Update: Note that if what you care about is the traversal pattern of the search rather than the structure of the graph itself, this is not the answer. See, e.g., @ziggystar's answer.
Judging from the existing answers, there seems to be a lot of confusion about this concept.
The Problem Is Always a Graph
The distinction between tree search and graph search is not rooted in whether the problem graph is a tree or a general graph. It is always assumed that you're dealing with a general graph. The distinction lies in the traversal pattern used to search through the graph, which can be graph-shaped or tree-shaped.
If you're dealing with a tree-shaped problem, both algorithm variants lead to equivalent results. So you can pick the simpler tree search variant.
Difference Between Graph and Tree Search
Your basic graph search algorithm looks something like the following, with a start node start, directed edges as successors, and a goal specification used in the loop condition. open holds the nodes currently under consideration in memory, the open list. Note that the following pseudocode is not correct in every respect (2).
Tree Search
open <- []
next <- start

while next is not goal {
    add all successors of next to open
    next <- select one node from open
    remove next from open
}

return next
Depending on how you implement select from open, you obtain different variants of search algorithms: depth-first search (DFS) (pick the newest element), breadth-first search (BFS) (pick the oldest element), uniform cost search (pick the element with the lowest path cost), the popular A* search (pick the node with the lowest path cost plus heuristic value), and so on.
The algorithm stated above is actually called tree search. It will visit a state of the underlying problem graph multiple times if there are multiple directed paths to it rooted in the start state. It is even possible to visit a state an infinite number of times if it lies on a directed loop. But each visit corresponds to a different node in the tree generated by our search algorithm. This apparent inefficiency is sometimes wanted, as explained later.
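A rough runnable illustration of that skeleton in Python (not the exact pseudocode above; all names are made up), with a pluggable select where a LIFO choice yields DFS and a FIFO choice yields BFS. Note how the state D ends up on the open list twice, once per directed path:

def tree_search(start, successors, is_goal, select_index):
    """Generic tree search: select_index decides which open node to expand next."""
    open_list = [start]
    while open_list:
        next_node = open_list.pop(select_index(open_list))
        if is_goal(next_node):
            return next_node
        open_list.extend(successors(next_node))  # duplicates are allowed
    return None  # no solution found

# Hypothetical problem graph; note the two directed paths from A to D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

dfs = lambda open_list: len(open_list) - 1  # pick the newest element -> DFS
bfs = lambda open_list: 0                   # pick the oldest element -> BFS

print(tree_search("A", graph.get, lambda n: n == "D", bfs))  # D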
Graph Search
As we saw, tree search can visit a state multiple times, and as such it will explore the "sub-tree" found after this state several times, which can be expensive. Graph search fixes this by keeping track of all visited states in a closed list. If a newly found successor of next is already known, it won't be inserted into the open list:
open <- []
closed <- []
next <- start

while next is not goal {
    add next to closed
    add all successors of next to open, which are not in closed
    remove next from open
    next <- select from open
}

return next
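A minimal runnable sketch of the same idea (again, not the exact pseudocode above; the names and example graph are illustrative), here as a breadth-first graph search where the closed set is the only addition:

from collections import deque

def graph_search_bfs(start, successors, is_goal):
    """Breadth-first graph search: the closed set prevents re-expanding states."""
    open_list = deque([start])
    closed = set()
    while open_list:
        next_node = open_list.popleft()
        if is_goal(next_node):
            return next_node
        closed.add(next_node)
        for succ in successors(next_node):
            if succ not in closed and succ not in open_list:
                open_list.append(succ)
    return None  # search failed

# Hypothetical graph with the directed cycle A -> B -> C -> A.
graph = {"A": ["B"], "B": ["C"], "C": ["A", "G"], "G": []}
print(graph_search_bfs("A", graph.get, lambda n: n == "G"))  # G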
Comparison
We notice that graph search requires more memory, as it keeps track of all visited states. This may be compensated for by the smaller open list, which results in improved search efficiency.
Optimal solutions
Some methods of implementing select can guarantee to return optimal solutions, i.e. a shortest path or a path with minimal cost (for graphs with costs attached to edges). This basically holds whenever nodes are expanded in order of increasing cost, or when the cost is a positive constant. A common algorithm that implements this kind of select is uniform cost search, or, if step costs are identical, BFS or IDDFS. IDDFS avoids BFS's aggressive memory consumption and is generally recommended for uninformed search (aka brute force) when the step size is constant.
Also, the (very popular) A* tree search algorithm delivers an optimal solution when used with an admissible heuristic. The A* graph search algorithm, however, only makes this guarantee when it is used with a consistent (or "monotonic") heuristic (a stronger condition than admissibility).
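A compact Python sketch of A* graph search under the assumption of a consistent heuristic (the toy graph and heuristic values below are made up; with h = 0 everywhere it degenerates into uniform cost search):

import heapq

def astar_graph_search(start, goal, successors, h):
    """A* graph search; optimal if h is consistent: h(u) <= cost(u, v) + h(v)."""
    open_heap = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue  # a cheaper copy of this state was already expanded
        closed.add(node)
        for succ, step_cost in successors(node):
            if succ not in closed:
                new_g = g + step_cost
                heapq.heappush(open_heap, (new_g + h(succ), new_g, succ, path + [succ]))
    return None, float("inf")

# Toy weighted graph and illustrative (consistent) heuristic values.
edges = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h_values = {"S": 3, "A": 4, "B": 1, "G": 0}
print(astar_graph_search("S", "G", lambda n: edges[n], h_values.get))  # (['S', 'B', 'G'], 5)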
(2) Flaws of pseudo-code
For simplicity, the presented code does not:
handle failing searches, i.e. it only works if a solution can be found
Trees don't have cycles. For example, imagine any tree in your head: the branches don't have direct connections back to the root, but each branch does connect upward to other branches.
But consider the case of AI graph search vs. tree search.
Graph search has a nice property: whenever the algorithm explores a new node it marks it as visited, regardless of the algorithm used, and then it typically explores all the other nodes that are reachable from the current node.
For example, consider the following graph with 3 vertices A, B and C, and the following edges:
A-B, B-C, and C-A. There is a cycle from C back to A.
When we do DFS starting from A, A will generate a new state B, and B will generate a new state C; but when C is explored, the algorithm will try to generate a new state A. Since A is already visited, it is ignored. Cool!
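A tiny sketch of that trace in Python, with a visited set doing the marking (the three-node cyclic graph is the one from the example above):

def dfs(graph, node, visited=None):
    """Recursive DFS that marks nodes as visited, so the edge back to A is ignored."""
    if visited is None:
        visited = set()
    visited.add(node)
    print("exploring", node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

cyclic = {"A": ["B"], "B": ["C"], "C": ["A"]}  # edges A-B, B-C, C-A (directed here)
dfs(cyclic, "A")  # explores A, B, C once each; the C -> A back edge is skipped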
But what about trees? Tree search algorithms don't mark visited nodes as visited, yet trees don't have cycles, so how could they get into an infinite loop?
Consider this tree with 3 vertices and the following edges:
A - B - C, rooted at A, downward. And let's assume we are using the DFS algorithm.
A will generate a new state B, and B will generate two states, A and C. Because tree search doesn't "mark a node as visited once it's explored", the DFS algorithm may explore A again, generating a new state B, and so we end up in an infinite loop.
But have you noticed something? We are working with undirected edges, i.e. there is a connection A-B and a connection B-A. Of course this is not a cycle, because a cycle requires at least 3 vertices, with all vertices distinct except the first and last nodes.
So A->B->A->B->A is not a cycle, because it violates the >= 3 distinct vertices property. But A->B->C->A is indeed a cycle: >= 3 distinct nodes, checked; the first and the last node are the same, checked.
Again, consider the tree path A->B->C->B->A; of course it's not a cycle, because there are two Bs, which means not all the nodes are distinct.
Lastly, you could implement a tree search algorithm that prevents exploring the same node twice, but that has consequences.
In simple words, a tree does not contain cycles, whereas a graph can. So when we search a graph, we should avoid cycles so that we don't get into infinite loops.
Another aspect is that a tree typically has some kind of topological ordering, or a property such as that of a binary search tree, which makes searching fast and easy compared to general graphs.
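For instance, a binary search tree's ordering property lets a lookup discard half of the remaining nodes at each step. A minimal sketch (the node type and values are illustrative):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def bst_contains(root: Optional[Node], key: int) -> bool:
    """Follow the ordering property: smaller keys go left, larger keys go right."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
print(bst_contains(root, 6), bst_contains(root, 7))  # True False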
I will add to @ziggystar's answer (other answers refer to the differences between trees and graphs as data structures, which is not what the question is about; the question is about tree search vs. graph search algorithms for traversing a graph!).
This somewhat confusing terminology comes from Russell and Norvig's "Artificial Intelligence: A Modern Approach":
A Tree-Search algorithm is any particular algorithm used for solving your search problem. A Graph-Search algorithm is a Tree-Search algorithm augmented with a set of explored states.
Both of these algorithms are represented as a tree! The reason we call the Graph-Search algorithm a Graph-Search algorithm is that it can be represented (again, as a tree) directly on our search problem's graph.
Take a look at the map of Romania. This is our search problem's graph.
Now, we can apply many algorithms to find a path from Arad to Bucharest (Breadth-First Search, Depth-First Search, Greedy Search - anything our heart desires). All of these algorithms, however, can be divided into Tree-Search algorithms and Graph-Search algorithms.
The Tree-Search algorithm represents the solution to our Arad-to-Bucharest problem as a tree. Note the repeated "Arad" node.
The Graph-Search algorithm represents the solution to our Arad-to-Bucharest problem as a tree, too - except we remove the repeated "Arad" node from the tree.
However, thanks to this removal of repeated states, we have a nicer way to represent it - directly on the graph of our search problem, on the map of Romania! Hence the "Graph" in the "Graph-Search algorithm".
Here is some pseudocode for you. Note that the only difference between the Tree-Search algorithm and Graph-Search algorithm is the addition of the set of explored states.
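The original pseudocode is not reproduced here; a minimal Python-flavored sketch of what it conveys might look like the following (the problem interface with initial_state, is_goal, and successors is an illustrative assumption, not Russell and Norvig's exact formulation):

from collections import deque

def tree_search(problem):
    """Expand nodes without remembering which states were already expanded."""
    frontier = deque([problem.initial_state])
    while frontier:
        state = frontier.popleft()
        if problem.is_goal(state):
            return state
        frontier.extend(problem.successors(state))  # repeated states allowed
    return None

def graph_search(problem):
    """Identical to tree_search, except for the explored set filtering repeats."""
    frontier = deque([problem.initial_state])
    explored = set()  # the only addition
    while frontier:
        state = frontier.popleft()
        if problem.is_goal(state):
            return state
        explored.add(state)
        frontier.extend(s for s in problem.successors(state)
                        if s not in explored and s not in frontier)
    return None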