I had some trouble with this: my EntityKey is made up of three properties (the PK has three columns), and I didn't want to check each column individually because that would be ugly.
I thought about a solution that works every time, with any entity.
Another reason is that I don't like catching an UpdateException every time.
A little reflection is needed to get the values of the key properties.
The code is implemented as an extension method to simplify usage:
context.EntityExists<MyEntityType>(item);
Take a look:
public static bool EntityExists<T>(this ObjectContext context, T entity)
    where T : EntityObject
{
    object value;
    var entityKeyValues = new List<KeyValuePair<string, object>>();
    var objectSet = context.CreateObjectSet<T>().EntitySet;

    // Use reflection to read the value of every key property on the entity.
    foreach (var member in objectSet.ElementType.KeyMembers)
    {
        var info = entity.GetType().GetProperty(member.Name);
        var tempValue = info.GetValue(entity, null);
        var pair = new KeyValuePair<string, object>(member.Name, tempValue);
        entityKeyValues.Add(pair);
    }

    // Build the EntityKey and ask the context whether an object with that key already exists.
    var key = new EntityKey(objectSet.EntityContainer.Name + "." + objectSet.Name, entityKeyValues);
    if (context.TryGetObjectByKey(key, out value))
    {
        return value != null;
    }
    return false;
}
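For context, here is a hypothetical usage sketch. The MyEntities context, Customer entity and Customers set are assumed names for illustration, not part of the original answer:

// Hypothetical usage: check for an existing row before inserting, to avoid an UpdateException.
using (var context = new MyEntities())
{
    var item = new Customer { Id = 42, Name = "Alice" }; // assumed entity and key values

    if (!context.EntityExists(item))
    {
        context.Customers.AddObject(item);
        context.SaveChanges();
    }
}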
Private Function ValidateUniquePayroll(PropertyToCheck As String) As Boolean
    ' Return True if the payroll number is unique
    Dim rtnValue = False
    Dim context = New CPMModel.CPMEntities
    If (context.Employees.Any()) Then ' Check if there are "any" records in the Employees table
        Dim employee = From c In context.Employees Select c.PayrollNumber ' Select just the PayrollNumber column to work with
        For Each item As Object In employee ' Loop through each payroll number in the Employees entity
            If (item = PropertyToCheck) Then ' Check if the PayrollNumber in the current row matches PropertyToCheck
                ' Found a match, so the value is not unique; return False
                rtnValue = False
                Exit For
            Else
                ' No match so far, so keep True (unique)
                rtnValue = True
            End If
        Next
    Else
        ' There are currently no employees in the Employees entity, so return True (unique)
        rtnValue = True
    End If
    Return rtnValue
End Function
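Rather than pulling every payroll number back and comparing in a loop on the client, the duplicate check can be pushed to the database as a single EXISTS query with Any. A minimal C# sketch, assuming the same CPMEntities context and an Employees set whose PayrollNumber property is a string:

using System.Linq;

public static bool PayrollNumberIsUnique(CPMModel.CPMEntities context, string payrollNumber)
{
    // The provider translates this into one EXISTS query on the server
    // instead of enumerating every employee on the client.
    return !context.Employees.Any(e => e.PayrollNumber == payrollNumber);
}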
I had to handle a scenario where the percentage of duplicates in the incoming records was very high, and many thousands of database calls were being made just to check for duplicates (so the CPU spent a lot of time at 100%). In the end I decided to keep the last 100,000 records cached in memory. That way I could check for duplicates against the cached records, which is far faster than a LINQ query against the SQL database, and then write only the genuinely new records to the database (as well as adding them to the cache, which I also sorted and trimmed to keep its size manageable).
Note that the raw data was a CSV file that contained many individual records that had to be parsed. The records in each consecutive file (which came at a rate of about 1 every 5 minutes) overlapped considerably, hence the high percentage of duplicates.
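A rough sketch of that caching idea, assuming each parsed CSV row can be reduced to a string key; all type and member names here are hypothetical, not the original code:

using System.Collections.Generic;

// Keeps roughly the last N keys seen, so duplicate checks stay in memory
// instead of hitting SQL Server for every parsed CSV row.
public class RecentRecordCache
{
    private readonly int _capacity;
    private readonly HashSet<string> _keys = new HashSet<string>();
    private readonly Queue<string> _order = new Queue<string>();

    public RecentRecordCache(int capacity = 100000)
    {
        _capacity = capacity;
    }

    // Returns true if the key was new, i.e. the record should be written to the database.
    public bool TryAdd(string key)
    {
        if (_keys.Contains(key))
            return false; // duplicate, skip the database entirely

        _keys.Add(key);
        _order.Enqueue(key);

        // Trim the oldest entries to keep the cache a manageable size.
        while (_order.Count > _capacity)
            _keys.Remove(_order.Dequeue());

        return true;
    }
}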