I believe their existence can be traced back to the possibility that somewhere, somebody is using a compiler on an operating system whose character set is so archaic that it doesn't necessarily have all the characters that C or C++ needs to express the whole language.
The digraphs and trigraphs in C/C++ come from the days of six-bit character sets used on the CDC 6000 series (60-bit words), the Univac 1108 (36-bit words), and the DECsystem-10 and -20 (36-bit words), each of which used a proprietary 64-character set not compatible with ASA X3.4-1963 (now known as ANSI X3.4-1963, "7-bit American National Standard Code for Information Interchange"; the latest revision is ANSI X3.4-1986).
Since these systems were incapable of representing all of the 96 graphical code points, many were omitted. In addition, X3.4 was coordinated with other national standards institutes (GBR, GER, ITA, etc.), and certain code points in X3.4 were designated as national replacement characters. The most obvious example is # standing in for the British pound symbol (obvious because the name of the # character is "pound sign", from its conventional usage in US commerce, prior to the evolution of Twitter); '{' and '}' were also designated as national replacement characters.
Thus digraphs were introduced to provide a mechanism for those computer systems incapable of representing the characters, and also for data terminal equipment that assigned national replacement characters to the conflicting code points. Digraphs and trigraphs have become an archaic artifact of computing history (a subject not taught in computer science these days).
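To make the mechanism concrete, here is a small sketch (not part of the original answer) of a C translation unit written with the alternative spellings. The digraph tokens <%, %>, <:, :>, and %: behave exactly like {, }, [, ], and #; the older trigraph sequences (??<, ??>, ??(, ??), ??=, and so on) serve the same purpose but are rewritten earlier, in translation phase 1, and need a compiler that still honours them (for example GCC's -trigraphs option; C++17 removed trigraphs from that language entirely).

    /* Same program, spelled with digraph tokens.
     *   <%  %>  <:  :>  %:   behave exactly like   {  }  [  ]  #
     * The trigraph sequences ??< ??> ??( ??) ??= play the same role,
     * but are replaced in translation phase 1 and require a compiler
     * that still processes them, e.g. GCC with -trigraphs.
     */
    %:include <stdio.h>            /* %: is the digraph spelling of #  */

    int main(void)
    <%                             /* <%  is  {                        */
        int squares<:3:>;          /* <: and :>  are  [ and ]          */
        for (int i = 0; i < 3; ++i)
            squares<:i:> = i * i;  /* indexing works the same way      */
        printf("%d %d %d\n", squares<:0:>, squares<:1:>, squares<:2:>);
        return 0;
    %>                             /* %>  is  }                        */

Built with any C99 compiler (e.g. gcc -std=c99), this prints "0 1 4", exactly as the conventionally spelled version would.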