However, you should be wary of converting user input to atoms, because atoms are not garbage collected, which can lead to a memory leak. See this issue.
To build on @emaillenin's answer, you can check whether the keys are already atoms, to avoid the ArgumentError that String.to_atom raises when it is given a key that is already an atom.
for {key, val} <- string_key_map, into: %{} do
  cond do
    is_atom(key) -> {key, val}
    true -> {String.to_atom(key), val}
  end
end
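A minimal usage sketch, wrapping the comprehension above in a helper (the function name and input map here are mine, purely illustrative):

def mixed_keys_to_atoms(string_key_map) do
  for {key, val} <- string_key_map, into: %{} do
    cond do
      is_atom(key) -> {key, val}
      true -> {String.to_atom(key), val}
    end
  end
end

# mixed_keys_to_atoms(%{"name" => "Jane", :age => 30})
# #=> %{age: 30, name: "Jane"}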
Here's a version of @emaillenin's answer in module form:
defmodule App.Utils do
  # Implementation based on: http://stackoverflow.com/a/31990445/175830
  def map_keys_to_atoms(map) do
    for {key, val} <- map, into: %{}, do: {String.to_atom(key), val}
  end

  def map_keys_to_strings(map) do
    for {key, val} <- map, into: %{}, do: {Atom.to_string(key), val}
  end
end
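A quick usage sketch (the input maps are illustrative):

iex> App.Utils.map_keys_to_atoms(%{"name" => "Ada"})
%{name: "Ada"}
iex> App.Utils.map_keys_to_strings(%{name: "Ada"})
%{"name" => "Ada"}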
defp atomog(map) do
  atomkeys = fn {k, v}, acc ->
    Map.put_new(acc, atomize_binary(k), v)
  end

  Enum.reduce(map, %{}, atomkeys)
end

defp atomize_binary(value) do
  if is_binary(value), do: String.to_atom(value), else: value
end
This gets called recursively. After reading @Galzer's answer, I'll probably convert this to use String.to_existing_atom soon.
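For reference, a minimal sketch of that change: only atomize_binary/1 needs to swap String.to_atom for String.to_existing_atom/1, which raises ArgumentError instead of creating new atoms:

defp atomize_binary(value) do
  # raises if the atom does not already exist, preventing unbounded atom creation
  if is_binary(value), do: String.to_existing_atom(value), else: value
end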
defmodule Service.MiscScripts do
  @doc """
  Changes a string-keyed map to an atom-keyed map, e.g. %{"c" => "d", "x" => %{"yy" => "zz"}}
  becomes %{c: "d", x: %{yy: "zz"}}, i.e. it also changes the nested maps.
  """
  def convert_to_atom_map(map), do: to_atom_map(map)

  defp to_atom_map(map) when is_map(map),
    do: Map.new(map, fn {k, v} -> {String.to_atom(k), to_atom_map(v)} end)

  defp to_atom_map(v), do: v
end
Here is what I use to recursively (1) format map keys as snake_case and (2) convert them to atoms. Keep in mind that you should never convert non-whitelisted user data to atoms, since atoms are not garbage collected.
defp snake_case_map(map) when is_map(map) do
  Enum.reduce(map, %{}, fn {key, value}, result ->
    Map.put(result, String.to_atom(Macro.underscore(key)), snake_case_map(value))
  end)
end

defp snake_case_map(list) when is_list(list), do: Enum.map(list, &snake_case_map/1)
defp snake_case_map(value), do: value
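A quick usage sketch (the input map is illustrative; since the functions are private, this assumes they are called from inside their module):

snake_case_map(%{"firstName" => "Ada", "homeAddress" => %{"zipCode" => "41199"}})
#=> %{first_name: "Ada", home_address: %{zip_code: "41199"}}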
First of all, @Olshansk's answer worked like a charm for me. Thank you for that.
Next, since the initial implementation provided by @Olshansk lacked support for lists of maps, below is my code snippet extending it.
def keys_to_atoms(string_key_map) when is_map(string_key_map) do
  for {key, val} <- string_key_map, into: %{}, do: {String.to_atom(key), keys_to_atoms(val)}
end

def keys_to_atoms(string_key_list) when is_list(string_key_list) do
  string_key_list
  |> Enum.map(&keys_to_atoms/1)
end

def keys_to_atoms(value), do: value
This is the sample I used, followed by the output after passing it to the above function, keys_to_atoms(attrs).
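The original sample isn't reproduced here, so the attrs map below is an illustrative stand-in of mine showing the same kind of input and output:

attrs = %{
  "address" => %{"city" => "Pune", "pin" => "411001"},
  "hobbies" => [%{"name" => "chess"}, %{"name" => "football"}],
  "name" => "Jane"
}

keys_to_atoms(attrs)
#=> %{
#=>   address: %{city: "Pune", pin: "411001"},
#=>   hobbies: [%{name: "chess"}, %{name: "football"}],
#=>   name: "Jane"
#=> }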
The explanation is simple. The first clause is the heart of it all; it is invoked when the input is a map.
The for comprehension destructures the map into key-value pairs and converts each key to its atom representation.
When returning the value, there are three possibilities:
The value is yet another map.
The value is a list of maps.
The value is neither of those; it is a primitive.
So when keys_to_atoms is invoked on the value, one of the three clauses runs, chosen by the type of the input.
The clauses are organized in the snippet in that same order.
def keys_to_atom(map) do
  Map.new(
    map,
    fn {k, v} ->
      v2 =
        cond do
          # recurse into nested maps
          is_map(v) -> keys_to_atom(v)
          # treat both nil and [nil] as nil
          v in [[nil], nil] -> nil
          # recurse into lists of maps
          is_list(v) -> Enum.map(v, fn o -> keys_to_atom(o) end)
          # leave everything else untouched
          true -> v
        end

      {String.to_atom("#{k}"), v2}
    end
  )
end
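A quick usage sketch (the input is illustrative), showing the special handling of nil and [nil] values:

keys_to_atom(%{"meta" => nil, "tags" => [nil], "user" => %{"id" => 1}})
#=> %{meta: nil, tags: nil, user: %{id: 1}}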
I really liked Roman Bedichevskii's answer, but I needed something that would thoroughly atomize the keys of deeply nested YAML files. This is what I came up with:
@doc """
Safe version, will only atomize keys that already exist as atoms
"""
def atomize_keys(map) when is_map(map), do: Map.new(map, &atomize_keys/1)
def atomize_keys(list) when is_list(list), do: Enum.map(list, &atomize_keys/1)

def atomize_keys({key, val}) when is_binary(key),
  do: atomize_keys({String.to_existing_atom(key), val})

def atomize_keys({key, val}), do: {key, atomize_keys(val)}
def atomize_keys(term), do: term

@doc """
Unsafe version, will atomize all string keys
"""
def unsafe_atomize_keys(map) when is_map(map), do: Map.new(map, &unsafe_atomize_keys/1)
def unsafe_atomize_keys(list) when is_list(list), do: Enum.map(list, &unsafe_atomize_keys/1)

def unsafe_atomize_keys({key, val}) when is_binary(key),
  do: unsafe_atomize_keys({String.to_atom(key), val})

def unsafe_atomize_keys({key, val}), do: {key, unsafe_atomize_keys(val)}
def unsafe_atomize_keys(term), do: term
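A quick usage sketch (the maps are illustrative):

# The unsafe version turns any string key into an atom:
unsafe_atomize_keys(%{"config" => %{"port" => 4000}})
#=> %{config: %{port: 4000}}

# The safe version raises ArgumentError for keys that have never been created as atoms:
atomize_keys(%{"some_key_nobody_ever_made_an_atom_of" => 1})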
Its main limitation is that if you feed it a tuple {key, value} where the key is a binary, it will atomize that key. That is what you want for keyword lists, but it may be an unwanted edge case for someone. In any case, YAML and JSON files don't have a concept of a tuple, so it won't matter when processing those.
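A small illustration of that edge case (again, the tuple is illustrative):

unsafe_atomize_keys({"name", "Jane"})
#=> {:name, "Jane"}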
We found ourselves doing this a lot in various Elixir/Phoenix projects ...
so we created a tested+documented utility function Useful.atomize_map_keys/1 that considers all the answers in this thread.
Install it by adding the following line to the deps in your `mix.exs`:
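The exact deps entry depends on the current release on Hex; the package name and version below are my assumptions based on the module name, so check hex.pm for the real entry:

# assumed package name and version, verify on hex.pm before using
{:useful, "~> 1.0"}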