Marker interfaces are used to mark a class as having some capability; code can then check for that capability at run-time by testing whether an object implements the interface.
The .NET Framework Design Guidelines (Type Design: Interface Design) discourage the use of marker interfaces in favour of attributes in C#, but as @Jay Bazuzi points out, it is easier to check for a marker interface than for an attribute: o is I
So instead of this:
public interface IFooAssignable {}
public class Foo : IFooAssignable
{
...
}
The .NET guidelines recommend that you do this instead:
public class FooAssignableAttribute : Attribute
{
...
}
[FooAssignable]
public class Foo
{
...
}
A marker interface is just an interface that is empty. A class implements it purely as metadata, to be consumed for some external purpose. In C# you would more commonly use attributes to mark up a class for the same reasons you'd use a marker interface in other languages.
This is a bit of a tangent based on the response by "Mitch Wheat".
Generally, anytime I see people cite the framework design guidelines, I always like to mention that:
You should generally ignore the framework design guidelines most of the time.
This isn't because of any issue with the framework design guidelines. I think the .NET framework is a fantastic class library. A lot of that fantasticness flows from the framework design guidelines.
However, the design guidelines do not apply to most code written by most programmers. Their purpose is to enable the creation of a large framework that is used by millions of developers, not to make library writing more efficient.
A lot of the suggestions in it can guide you to do things that:
May not be the most straightforward way of implementing something
May result in extra code duplication
May have extra runtime overhead
The .NET Framework is big, really big. It's so big that it would be absolutely unreasonable to assume that anyone has detailed knowledge about every aspect of it. In fact, it's much safer to assume that most programmers frequently encounter portions of the framework they have never used before.
In that case, the primary goals of an API designer are to:
Keep things consistent with the rest of the framework
Eliminate unneeded complexity in the API surface area
The framework design guidelines push developers to create code that accomplishes those goals.
That means doing things like avoiding layers of inheritance, even if it means duplicating code, or pushing all exception throwing code out to "entry points" rather than using shared helpers (so that stack traces make more sense in the debugger), and a lot of other similar things.
The primary reason that those guidelines suggest using attributes instead of marker interfaces is because removing the marker interfaces makes the inheritance structure of the class library much more approachable. A class diagram with 30 types and 6 layers of inheritance hierarchy is very daunting compared to one with 15 types and 2 layers of hierarchy.
If there really are millions of developers using your APIs, or your code base is really big (say over 100K LOC) then following those guidelines can help a lot.
If 5 million developers spend 15 mins learning an API rather than spending 60 mins learning it, the result is a net savings of 428 man years. That's a lot of time.
Most projects, however, don't involve millions of developers, or 100K+ LOC. In a typical project, with say 4 developers and around 50K loc, the set of assumptions are a lot different. The developers on the team will have a much better understanding of how the code works. That means that it makes a lot more sense to optimize for producing high quality code quickly, and for reducing the amount of bugs and the effort needed to make changes.
Spending 1 week developing code that is consistent with the .NET Framework, vs. 8 hours writing code that is easy to change and has fewer bugs, can result in:
Late projects
Lower bonuses
Increased bug counts
More time spent at the office, and less time on the beach drinking margaritas.
Without 4,999,999 other developers to absorb the costs it usually isn't worth it.
For example, testing for a marker interface comes down to a single "is" expression, and results in less code than looking for attributes.
So my advice is:
Follow the framework guidelines religiously if you are developing class libraries (or UI widgets) meant for wide spread consumption.
Consider adopting some of them if you have over 100K LOC in your project.
A marker interface allows a class to be tagged in a way that will be applied to all descendant classes. A "pure" marker interface wouldn't define or inherit anything; a more useful type of marker interfaces may be one which "inherits" another interface but defines no new members. For example, if there is an interface "IReadableFoo", one might also define an interface "IImmutableFoo", which would behave like a "Foo" but would promise anyone who uses it that nothing would change its value. A routine which accepts an IImmutableFoo would be able to use it as it would an IReadableFoo, but the routine would only accept classes that were declared as implementing IImmutableFoo.
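A minimal sketch of that pattern, with an invented Value member on IReadableFoo for illustration:

```csharp
public interface IReadableFoo
{
    int Value { get; }
}

// Marker-style interface: inherits IReadableFoo but adds no new members.
// Implementing it is a promise that the value will never change.
public interface IImmutableFoo : IReadableFoo { }

public class ImmutableFoo : IImmutableFoo
{
    public ImmutableFoo(int value) { Value = value; }
    public int Value { get; }
}

public static class FooConsumer
{
    // Uses the argument exactly as it would an IReadableFoo, but only
    // accepts types that explicitly promise immutability.
    public static int ReadTwice(IImmutableFoo foo) => foo.Value + foo.Value;
}
```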
I can't think of a whole lot of uses for "pure" marker interfaces. The only one I can think of would be if EqualityComparer&lt;T&gt;.Default would return Object.Equals for any type which implemented IDoNotUseEqualityComparer, even if the type also implemented IEqualityComparer. This would allow one to have an unsealed immutable type without violating the Liskov Substitution Principle: if the type seals all methods related to equality-testing, a derived type could add additional fields and have them be mutable, but the mutation of such fields wouldn't be visible using any base-type methods. It might not be horrible to have an unsealed immutable class and either avoid any use of EqualityComparer&lt;T&gt;.Default or trust derived classes not to implement IEqualityComparer, but a derived class which did implement IEqualityComparer could appear as a mutable class even when viewed as a base-class object.
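EqualityComparer&lt;T&gt;.Default itself can't be taught this behaviour, but the idea can be sketched with a hypothetical helper that callers would use in its place; IDoNotUseEqualityComparer and SafeComparer are both invented names:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

// Hypothetical marker: "ignore any custom equality defined on this type".
public interface IDoNotUseEqualityComparer { }

public static class SafeComparer
{
    public static IEqualityComparer<T> Get<T>()
    {
        // Marked types get reference identity (Object's default Equals for
        // classes), so equality overrides added by derived types are never consulted.
        return typeof(IDoNotUseEqualityComparer).IsAssignableFrom(typeof(T))
            ? (IEqualityComparer<T>)ReferenceComparer<T>.Instance
            : EqualityComparer<T>.Default;
    }

    private sealed class ReferenceComparer<T> : IEqualityComparer<T>
    {
        public static readonly ReferenceComparer<T> Instance = new ReferenceComparer<T>();
        public bool Equals(T x, T y) => ReferenceEquals(x, y);
        public int GetHashCode(T obj) => obj == null ? 0 : RuntimeHelpers.GetHashCode(obj);
    }
}
```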
Since every other answer has stated "they should be avoided", it would be useful to have an explanation of why.
Firstly, why marker interfaces are used: They exist to allow the code that's using the object that implements it to check whether they implement said interface and treat the object differently if it does.
The problem with this approach is that it breaks encapsulation. The object itself now has indirect control over how it will be used externally. Moreover, it has knowledge of the system it's going to be used in. By applying the marker interface, the class definition is suggesting it expects to be used somewhere that checks for the existence of the marker. It has implicit knowledge of the environment it's used in and is trying to define how it should be being used. This goes against the idea of encapsulation because it has knowledge of the implementation of a part of the system that exists entirely outside its own scope.
At a practical level this reduces portability and reusability. If the class is re-used in a different application, the interface needs to be copied across too, and it may not have any meaning in the new environment, making it entirely redundant.
As such, the "marker" is metadata about the class. This metadata is not used by the class itself and is only meaningful to (some!) external client code so that it can treat the object in a certain manner. Because it only has meaning to the client code, the metadata should be in the client code, not the class API.
The difference between a "marker interface" and a normal interface is that an interface with methods tells the outside world how it can be used whereas an empty interface implies it's telling the outside world how it should be used.
Marker interfaces may sometimes be a necessary evil when a language does not support discriminated union types.
Suppose you want to define a method that expects an argument whose type must be exactly one of A, B, or C. In many functional-first languages (like F#), such a type can be cleanly defined as:
type Arg =
| AArg of A
| BArg of B
| CArg of C
However, in OO-first languages such as C#, this is not possible. The only way to achieve something similar here is to define interface IArg and "mark" A, B and C with it.
Of course, you could avoid using the marker interface by simply accepting type "object" as argument, but then you would lose expressiveness and some degree of type safety.
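Under that constraint, the emulation looks roughly like this (IArg, A, B and C as named in the text; the Describe method is illustrative):

```csharp
using System;

// Marker interface standing in for the F# union type Arg.
public interface IArg { }

public class A : IArg { }
public class B : IArg { }
public class C : IArg { }

public static class Handler
{
    // Callers can only pass types marked with IArg.
    public static string Describe(IArg arg)
    {
        switch (arg)
        {
            case A _: return "got an A";
            case B _: return "got a B";
            case C _: return "got a C";
            // Unlike a real union, the compiler can't prove exhaustiveness,
            // and nothing prevents a fourth class from implementing IArg.
            default: throw new ArgumentException("unexpected case", nameof(arg));
        }
    }
}
```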
Discriminated union types are extremely useful and have existed in functional languages for at least 30 years. Strangely, to this day, all mainstream OO languages have ignored this feature -- although it has actually nothing to do with functional programming per se, but belongs to the type system.
These two extension methods will solve most of the issues Scott asserts favor marker interfaces over attributes:
using System;
using System.Linq;
using System.Reflection;

public static class AttributeExtensions
{
    public static bool HasAttribute<T>(this ICustomAttributeProvider self)
        where T : Attribute
    {
        return self.GetCustomAttributes(true).Any(o => o is T);
    }

    public static bool HasAttribute<T>(this object self)
        where T : Attribute
    {
        return self != null && self.GetType().HasAttribute<T>();
    }
}
Now you have:
if (o.HasAttribute<FooAssignableAttribute>())
{
//...
}
versus:
if (o is IFooAssignable)
{
//...
}
I fail to see how building an API will take 5 times as long with the first pattern compared to the second, as Scott claims.
A marker interface is a totally blank interface: it has no body, data members, or implementation.
A class implements a marker interface only when required, just to "mark" itself. In Java, for example, implementing Cloneable tells the JVM that the class is meant to be cloned, so cloning is allowed; implementing Serializable says that the class's objects are meant to be serialized, so serialization is allowed.
The marker interface is really just procedural programming in an OO language.
An interface defines a contract between implementers and consumers, except a marker interface, which defines nothing but itself. So, right out of the gate, the marker interface fails at the basic purpose of being an interface.