Ignore duplicates when producing a Map with streams

Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName,
                       Person::getAddress));

When a duplicate element is found, I get java.lang.IllegalStateException: Duplicate key.

Is it possible to ignore this exception when adding values to the map?

When there is a duplicate, it should simply ignore the duplicate key and continue.
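For reference, the snippets in this question and the answers below assume a simple Person class along these lines (it is not shown in the original post, so the exact fields are an assumption):

// Hypothetical Person class used by the phone-book snippets.
public class Person {

    private final String name;
    private final String address;

    public Person(String name, String address) {
        this.name = name;
        this.address = address;
    }

    public String getName() {
        return name;
    }

    public String getAddress() {
        return address;
    }
}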


This is possible using the mergeFunction parameter of Collectors.toMap(keyMapper, valueMapper, mergeFunction):

Map<String, String> phoneBook =
    people.stream()
          .collect(Collectors.toMap(
              Person::getName,
              Person::getAddress,
              (address1, address2) -> {
                  System.out.println("duplicate key found!");
                  return address1;
              }
          ));

mergeFunction is a function that operates on two values associated with the same key. address1 corresponds to the first address that was encountered while collecting elements and address2 corresponds to the second one: this lambda simply says to keep the first address and discard the second.

As stated in the JavaDocs:

If the mapped keys contain duplicates (according to Object.equals(Object)), an IllegalStateException is thrown when the collection operation is performed. If the mapped keys may have duplicates, use toMap(Function keyMapper, Function valueMapper, BinaryOperator mergeFunction) instead.

So you should use toMap(Function keyMapper, Function valueMapper, BinaryOperator mergeFunction) instead. Just provide a merge function that determines which of the duplicates ends up in the map.

For example, if you don't care which one wins, you can simply call:

Map<String, String> phoneBook = people.stream().collect(
        Collectors.toMap(Person::getName, Person::getAddress, (a1, a2) -> a1));

The answer above helped me a lot, but I would like to add some meaningful information for anyone who is trying to group the data.

For example, if you have several Orders with the same code but a different quantity each, and you want to sum the quantities, you can do:

List<Order> listOrders = new ArrayList<>();
listOrders.add(new Order("COD_1", 1L));
listOrders.add(new Order("COD_1", 5L));
listOrders.add(new Order("COD_1", 3L));
listOrders.add(new Order("COD_2", 3L));
listOrders.add(new Order("COD_3", 4L));

// sum the quantities of the orders that share the same code
Map<String, Long> quantityByCode = listOrders.stream()
        .collect(Collectors.toMap(Order::getCode,
                                  Order::getQuantity,
                                  (q1, q2) -> q1 + q2));

Result:

{COD_3=4, COD_2=3, COD_1=9}
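The example above assumes a minimal Order class roughly like the following sketch (the class is not shown in the original answer, so the field and accessor names are assumptions):

// Hypothetical Order class assumed by the quantity-summing example above.
public class Order {

    private final String code;
    private final Long quantity;

    public Order(String code, Long quantity) {
        this.code = code;
        this.quantity = quantity;
    }

    public String getCode() {
        return code;
    }

    public Long getQuantity() {
        return quantity;
    }
}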

Alternatively, taking the example from the javadocs, you can combine the addresses:

Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName,
                       Person::getAddress,
                       (s, a) -> s + ", " + a));

Assuming you have people as a List of Person objects:

Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName, Person::getAddress));

you now need two steps:

1)

people = removeDuplicate(people);

2)

Map<String, String> phoneBook = people.stream()
        .collect(toMap(Person::getName, Person::getAddress));

Here is the method that removes the duplicates:

public static List<Person> removeDuplicate(Collection<Person> list) {
    if (list == null || list.isEmpty()) {
        return null;
    }

    // distinct() uses equals()/hashCode() to decide what counts as a duplicate
    return list.stream()
               .distinct()
               .collect(Collectors.toList());
}

A full example is added here:

 package com.example.khan.vaquar;


import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;


public class RemovedDuplicate {


public static void main(String[] args) {
Person vaquar = new Person(1, "Vaquar", "Khan");
Person zidan = new Person(2, "Zidan", "Khan");
Person zerina = new Person(3, "Zerina", "Khan");


// Add some random persons
Collection<Person> duplicateList = Arrays.asList(vaquar, zidan, zerina, vaquar, zidan, vaquar);


//
System.out.println("Before removed duplicate list" + duplicateList);
//
Collection<Person> nonDuplicateList = removeDuplicate(duplicateList);
//
System.out.println("");
System.out.println("After removed duplicate list" + nonDuplicateList);


// 1) solution Working code
Map<Object, Object> k = nonDuplicateList.stream().distinct()
.collect(Collectors.toMap(s1 -> s1.getId(), s1 -> s1));
System.out.println("");
System.out.println("Result 1 using method_______________________________________________");
System.out.println("k" + k);
System.out.println("_____________________________________________________________________");


// 2) solution using inline distinct()
Map<Object, Object> k1 = duplicateList.stream().distinct()
.collect(Collectors.toMap(s1 -> s1.getId(), s1 -> s1));
System.out.println("");
System.out.println("Result 2 using inline_______________________________________________");
System.out.println("k1" + k1);
System.out.println("_____________________________________________________________________");


// Breaking code: toMap without a merge function throws on duplicate keys
System.out.println("");
System.out.println("Throwing exception _______________________________________________");
Map<Object, Object> k2 = duplicateList.stream()
.collect(Collectors.toMap(s1 -> s1.getId(), s1 -> s1));
System.out.println("");
System.out.println("k2" + k2);
System.out.println("_____________________________________________________________________");
}


public static List<Person> removeDuplicate(Collection<Person> list) {
    if (list == null || list.isEmpty()) {
        return null;
    }

    return list.stream().distinct().collect(Collectors.toList());
}


}


// Model class
class Person {
public Person(Integer id, String fname, String lname) {
super();
this.id = id;
this.fname = fname;
this.lname = lname;
}


private Integer id;
private String fname;
private String lname;


// Getters and Setters


public Integer getId() {
return id;
}


public void setId(Integer id) {
this.id = id;
}


public String getFname() {
return fname;
}


public void setFname(String fname) {
this.fname = fname;
}


public String getLname() {
return lname;
}


public void setLname(String lname) {
this.lname = lname;
}


@Override
public String toString() {
return "Person [id=" + id + ", fname=" + fname + ", lname=" + lname + "]";
}


}

Result:

Before removed duplicate list[Person [id=1, fname=Vaquar, lname=Khan], Person [id=2, fname=Zidan, lname=Khan], Person [id=3, fname=Zerina, lname=Khan], Person [id=1, fname=Vaquar, lname=Khan], Person [id=2, fname=Zidan, lname=Khan], Person [id=1, fname=Vaquar, lname=Khan]]


After removed duplicate list[Person [id=1, fname=Vaquar, lname=Khan], Person [id=2, fname=Zidan, lname=Khan], Person [id=3, fname=Zerina, lname=Khan]]


Result 1 using method_______________________________________________
k{1=Person [id=1, fname=Vaquar, lname=Khan], 2=Person [id=2, fname=Zidan, lname=Khan], 3=Person [id=3, fname=Zerina, lname=Khan]}
_____________________________________________________________________


Result 2 using inline_______________________________________________
k1{1=Person [id=1, fname=Vaquar, lname=Khan], 2=Person [id=2, fname=Zidan, lname=Khan], 3=Person [id=3, fname=Zerina, lname=Khan]}
_____________________________________________________________________


Throwing exception _______________________________________________
Exception in thread "main" java.lang.IllegalStateException: Duplicate key Person [id=1, fname=Vaquar, lname=Khan]
at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
at java.util.HashMap.merge(HashMap.java:1253)
at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.example.khan.vaquar.RemovedDuplicate.main(RemovedDuplicate.java:48)
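One caveat about the example above: distinct() relies on equals()/hashCode(), and the Person class shown here does not override them, so deduplication only works because the very same instances (vaquar, zidan) are added to the list repeatedly. If the duplicates can be separately created objects with equal field values, Person needs value-based equality, for example along these lines (a sketch, assuming id alone identifies a person):

// Sketch: value-based equality so distinct() also removes separately created duplicates.
@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (!(o instanceof Person)) {
        return false;
    }
    return java.util.Objects.equals(id, ((Person) o).id);
}

@Override
public int hashCode() {
    return java.util.Objects.hashCode(id);
}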

I have run into this problem when grouping objects, and I have always resolved it in a simple way: perform a custom filter using a java.util.Set to remove duplicate objects by whatever attribute you choose, as shown below:

Set<String> uniqueNames = new HashSet<>();
Map<String, String> phoneBook = people
        .stream()
        // add() returns false once the name has been seen, so only the first
        // person with a given name survives the filter
        .filter(person -> person != null && uniqueNames.add(person.getName()))
        .collect(toMap(Person::getName, Person::getAddress));

I hope this helps anyone with the same problem!
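A reusable variant of the same stateful-filter idea is the common distinctByKey helper shown below (this sketch is my addition, not part of the original answer; the class name and the use of ConcurrentHashMap are assumptions):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.Predicate;

public final class StreamFilters {

    // Keeps only the first element seen for each extracted (non-null) key.
    public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
        Map<Object, Boolean> seen = new ConcurrentHashMap<>();
        return t -> seen.putIfAbsent(keyExtractor.apply(t), Boolean.TRUE) == null;
    }
}

With that helper the phone book becomes people.stream().filter(StreamFilters.distinctByKey(Person::getName)).collect(toMap(Person::getName, Person::getAddress));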

Grouping by an object:

Map<Integer, Data> dataMap = dataList.stream()
        .collect(Collectors.toMap(Data::getId, data -> data, (data1, data2) -> {
            LOG.info("Duplicate Group For :" + data2.getId());
            return data1;
        }));

For anyone else getting this error but without duplicate keys in the stream being mapped, make sure your keyMapper function isn't returning null values.

This is very annoying to track down, because when it processes the second element the exception says "Duplicate key 1", where 1 is actually the value of the entry, not the key.

In my case, my keyMapper function tried to look values up in a different map, but due to a typo in the strings it kept returning null.

final Map<String, String> doop = new HashMap<>();
doop.put("a", "1");
doop.put("b", "2");

final Map<String, String> lookup = new HashMap<>();
doop.put("c", "e");   // typo: should have been lookup.put(...), so lookup stays empty
doop.put("d", "f");   // typo: should have been lookup.put(...)

// lookup.get(...) returns null for every key, so both entries map to a null key
// and the collector fails with the confusing message "Duplicate key 1"
doop.entrySet().stream().collect(Collectors.toMap(e -> lookup.get(e.getKey()), e -> e.getValue()));
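A cheap way to fail fast with a readable message here is to wrap the key mapper in an explicit null check (this sketch is my addition, not part of the original answer):

// Sketch: a null key fails immediately with a clear message instead of
// surfacing later as a misleading "Duplicate key" exception.
doop.entrySet().stream()
    .collect(Collectors.toMap(
        e -> java.util.Objects.requireNonNull(lookup.get(e.getKey()),
                () -> "no lookup entry for " + e.getKey()),
        Map.Entry::getValue));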

The fact that toMap works most of the time but not always feels like a dark underbelly of Java streams. It is as if it should have been called toUniqueMap or something...

The simplest way is to use Collectors.groupingBy instead of Collectors.toMap.

By default it returns a List as the value type, but the collision problem is solved, and that may even be what you want when multiples exist.

Map<String, List<Person>> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name));

If you want a Set of all the addresses associated with a particular name, groupingBy can do that as well:

Map<String, Set<String>> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, mapping(x -> x.address, toSet())));

The other way is to "start" with either a hash or a set... and carefully track things yourself to make sure the keys never end up duplicated in the output stream. Ugh. Here is an example that happens to survive that...

For completeness, here is how to "reduce" the duplicates down to just one.

If you are OK with keeping the last one:

Map<String, Person> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, reducing(null, identity(), (first, last) -> last)));

If you only want the first one:

Map<String, Person> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, reducing(null, identity(), (first, last) -> first != null ? first : last)));

And if you want the address String as the value rather than the whole Person (i.e. not using identity() as the argument):

Map<String, String> phoneBook = people.stream()
        .collect(groupingBy(x -> x.name, reducing(null, x -> x.address, (first, last) -> last)));

Source: https://stackoverflow.com/questions/34495816/stream-groupingby-reducing-to-first-element-of-list#comment110097397_48123966

So, in essence, groupingBy paired with a reducing collector starts to behave very much like the toMap collector, with something similar to its mergeFunction... and the end result is identical...

You can use a lambda/helper function for the key: the comparison is done on the key String produced by key(...):

List<Blog> blogsNoDuplicates = blogs.stream()
        .collect(toMap(b -> key(b), b -> b, (b1, b2) -> b1))  // key(b) decides what counts as a duplicate
        .values().stream().collect(Collectors.toList());

static String key(Blog b) {
    return b.getTitle() + b.getAuthor(); // the key is the criterion of distinction
}

I ran into the same situation, and I found that the simplest solution (assuming you just want to overwrite the map value for duplicate keys) is:

Map<String, String> phoneBook = people.stream()
        .collect(Collectors.toMap(Person::getName,
                                  Person::getAddress,
                                  (oldValue, newValue) -> newValue)); // later entries overwrite earlier ones