Permutations with unique values

itertools.permutations generates permutations where its elements are treated as unique based on their position, not on their value. So basically I want to avoid duplicates like this:

>>> list(itertools.permutations([1, 1, 1]))
[(1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1)]

Filtering afterwards is not possible, because the number of permutations is too large in my case.

Does anybody know of a suitable algorithm?

Thank you very much!

EDIT:

What I basically want is the following:

x = itertools.product((0, 1, 'x'), repeat=X)
x = sorted(x, key=functools.partial(count_elements, elem='x'))

which is not possible, because sorted creates a list and the output of itertools.product is too large.

Sorry, I should have described the actual problem.
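
(As an illustration of the ordering wanted here — a sketch of my own, not part of the original question: for each count k of 'x', choose the positions holding 'x' with itertools.combinations and fill the remaining positions from itertools.product over (0, 1). This yields the tuples grouped by their number of 'x' entries without ever materialising the full product.)

import itertools

def products_by_x_count(X):
    # Same tuples as itertools.product((0, 1, 'x'), repeat=X), but grouped
    # by how many 'x' entries they contain, generated lazily.
    for k in range(X + 1):
        for x_positions in itertools.combinations(range(X), k):
            for bits in itertools.product((0, 1), repeat=X - k):
                fill = iter(bits)
                yield tuple('x' if i in x_positions else next(fill)
                            for i in range(X))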


You could try using set:

>>> list(itertools.permutations(set([1,1,2,2])))
[(1, 2), (2, 1)]

The call to set removes the duplicate values, so only the distinct values are permuted.

It sounds like you are looking for itertools.combinations() (docs.python.org):

list(itertools.combinations([1, 1, 1],3))
[(1, 1, 1)]
class unique_element:
    def __init__(self, value, occurrences):
        self.value = value
        self.occurrences = occurrences


def perm_unique(elements):
    eset = set(elements)
    listunique = [unique_element(i, elements.count(i)) for i in eset]
    u = len(elements)
    return perm_unique_helper(listunique, [0] * u, u - 1)


def perm_unique_helper(listunique, result_list, d):
    if d < 0:
        yield tuple(result_list)
    else:
        for i in listunique:
            if i.occurrences > 0:
                result_list[d] = i.value
                i.occurrences -= 1
                for g in perm_unique_helper(listunique, result_list, d - 1):
                    yield g
                i.occurrences += 1


a = list(perm_unique([1, 1, 2]))
print(a)

result:

[(2, 1, 1), (1, 2, 1), (1, 1, 2)]

EDIT (how this works):

I rewrote the above program to be longer but more readable.

I usually have a hard time explaining how something works, but let me try. In order to understand how this works, you have to understand a similar but simpler program that would yield all permutations with repetitions.

def permutations_with_replacement(elements, n):
    return permutations_helper(elements, [0] * n, n - 1)  # this is a generator


def permutations_helper(elements, result_list, d):
    if d < 0:
        yield tuple(result_list)
    else:
        for i in elements:
            result_list[d] = i
            all_permutations = permutations_helper(elements, result_list, d - 1)  # this is a generator
            for g in all_permutations:
                yield g

This program is obviously much simpler: d stands for depth in permutations_helper and serves two purposes. One is the stopping condition of our recursive algorithm, and the other is the index into the result list that is passed around.

Instead of returning each result, we yield it. If there were no yield function/operator, we would have to push the result into some queue at the point of the stopping condition. But this way, once the stopping condition is met, the result is propagated through all stacks up to the caller. That is the purpose of for g in perm_unique_helper(listunique, result_list, d - 1): yield g, so each result is propagated up to the caller.

Back to the original program: we have a list of unique elements. Before we can use each element, we have to check how many of them are still available to push onto result_list. Working with this program is very similar to permutations_with_replacement. The difference is that in perm_unique_helper each element cannot be repeated more times than it occurs in the input.
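
For example (a side-by-side illustration of my own, assuming both functions above are defined), permutations_with_replacement allows every element at every position, while perm_unique uses each occurrence only once:

>>> list(permutations_with_replacement([1, 2], 2))
[(1, 1), (2, 1), (1, 2), (2, 2)]
>>> list(perm_unique([1, 1, 2]))
[(2, 1, 1), (1, 2, 1), (1, 1, 2)]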

This relies on the implementation detail that the permutations of a sorted iterable are in sorted order unless they are duplicates of prior permutations.

from itertools import permutations


def unique_permutations(iterable, r=None):
    previous = tuple()
    for p in permutations(sorted(iterable), r):
        if p > previous:
            previous = p
            yield p


for p in unique_permutations('cabcab', 2):
    print p

gives

('a', 'a')
('a', 'b')
('a', 'c')
('b', 'a')
('b', 'b')
('b', 'c')
('c', 'a')
('c', 'b')
('c', 'c')

What about

np.unique(itertools.permutations([1, 1, 1]))

The problem is the permutations are now rows of a Numpy array, thus using more memory, but you can cycle through them as before

perms = np.unique(itertools.permutations([1, 1, 1]))
for p in perms:
    print p

Came across this the other day while working on a problem of my own. I like Luka Rahne's approach, but I thought that using the Counter class in the collections library seemed like a modest improvement. Here's my code:

import collections


def unique_permutations(elements):
    "Returns a list of lists; each sublist is a unique permutation of elements."
    ctr = collections.Counter(elements)

    # Base case with one element: just return the element
    if len(ctr.keys()) == 1 and ctr[list(ctr.keys())[0]] == 1:
        return [[list(ctr.keys())[0]]]

    perms = []

    # For each counter key, find the unique permutations of the set with
    # one member of that key removed, and append the key to the front of
    # each of those permutations.
    for k in ctr.keys():
        ctr_k = ctr.copy()
        ctr_k[k] -= 1
        if ctr_k[k] == 0:
            ctr_k.pop(k)
        perms_k = [[k] + p for p in unique_permutations(ctr_k)]
        perms.extend(perms_k)

    return perms

This code returns each permutation as a list. If you feed it a string, it'll give you a list of permutations where each one is a list of characters. If you want the output as a list of strings instead (for example, if you're a terrible person and you want to abuse my code to help you cheat in Scrabble), just do the following:

[''.join(perm) for perm in unique_permutations('abunchofletters')]

Roughly as fast as Luka Rahne's answer, but shorter & simpler, IMHO.

def unique_permutations(elements):
    if len(elements) == 1:
        yield (elements[0],)
    else:
        unique_elements = set(elements)
        for first_element in unique_elements:
            remaining_elements = list(elements)
            remaining_elements.remove(first_element)
            for sub_permutation in unique_permutations(remaining_elements):
                yield (first_element,) + sub_permutation


>>> list(unique_permutations((1,2,3,1)))
[(1, 1, 2, 3), (1, 1, 3, 2), (1, 2, 1, 3), ... , (3, 1, 2, 1), (3, 2, 1, 1)]

It works recursively by setting the first element (iterating through all unique elements), and iterating through the permutations for all remaining elements.

Let's go through the unique_permutations of (1,2,3,1) to see how it works:

  • unique_elements are 1,2,3
  • Let's iterate through them: first_element starts with 1.
    • remaining_elements are [2,3,1] (i.e. 1,2,3,1 minus the first 1)
    • We iterate (recursively) through the permutations of the remaining elements: (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)
    • For each sub_permutation, we insert the first_element: (1,1,2,3), (1,1,3,2), ... and yield the result.
  • Now we iterate to first_element = 2, and do the same as above.
    • remaining_elements are [1,3,1] (i.e. 1,2,3,1 minus the first 2)
    • We iterate through the permutations of the remaining elements: (1, 1, 3), (1, 3, 1), (3, 1, 1)
    • For each sub_permutation, we insert the first_element: (2, 1, 1, 3), (2, 1, 3, 1), (2, 3, 1, 1)... and yield the result.
  • Finally, we do the same with first_element = 3.
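
As a quick sanity check (my own addition, not part of the original answer), the number of results matches the multiset-permutation count 4!/2! = 12, since the value 1 appears twice:

>>> len(list(unique_permutations((1, 2, 3, 1))))
12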

This is my solution with 10 lines:

class Solution(object):
    def permute_unique(self, nums):
        perms = [[]]
        for n in nums:
            new_perm = []
            for perm in perms:
                for i in range(len(perm) + 1):
                    new_perm.append(perm[:i] + [n] + perm[i:])
                    # handle duplication
                    if i < len(perm) and perm[i] == n: break
            perms = new_perm
        return perms


if __name__ == '__main__':
    s = Solution()
    print s.permute_unique([1, 1, 1])
    print s.permute_unique([1, 2, 1])
    print s.permute_unique([1, 2, 3])

--- Result ----

[[1, 1, 1]]
[[1, 2, 1], [2, 1, 1], [1, 1, 2]]
[[3, 2, 1], [2, 3, 1], [2, 1, 3], [3, 1, 2], [1, 3, 2], [1, 2, 3]]

Because sometimes new questions are marked as duplicates and their authors are referred to this question it may be important to mention that sympy has an iterator for this purpose.

>>> from sympy.utilities.iterables import multiset_permutations
>>> list(multiset_permutations([1,1,1]))
[[1, 1, 1]]
>>> list(multiset_permutations([1,1,2]))
[[1, 1, 2], [1, 2, 1], [2, 1, 1]]
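
Since multiset_permutations is a generator, it also scales to inputs whose full permutation list would not fit in memory. As a small check of my own, the number of distinct permutations of [1, 1, 2, 2, 3, 3] is 6!/(2!·2!·2!) = 90:

>>> sum(1 for _ in multiset_permutations([1, 1, 2, 2, 3, 3]))
90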

Bumped into this question while looking for something myself !

Here's what I did:

def dont_repeat(x=[0, 1, 1, 2]):  # Pass a list
    from itertools import permutations as per
    uniq_set = set()
    for byt_grp in per(x, 4):
        if byt_grp not in uniq_set:
            yield byt_grp
            uniq_set.update([byt_grp])
    print uniq_set


for i in dont_repeat(): print i
(0, 1, 1, 2)
(0, 1, 2, 1)
(0, 2, 1, 1)
(1, 0, 1, 2)
(1, 0, 2, 1)
(1, 1, 0, 2)
(1, 1, 2, 0)
(1, 2, 0, 1)
(1, 2, 1, 0)
(2, 0, 1, 1)
(2, 1, 0, 1)
(2, 1, 1, 0)
set([(0, 1, 1, 2), (1, 0, 1, 2), (2, 1, 0, 1), (1, 2, 0, 1), (0, 1, 2, 1), (0, 2, 1, 1), (1, 1, 2, 0), (1, 2, 1, 0), (2, 1, 1, 0), (1, 0, 2, 1), (2, 0, 1, 1), (1, 1, 0, 2)])

Basically, make a set and keep adding to it. Better than making lists etc. that take too much memory. Hope it helps the next person looking out :-) Comment out the set 'update' in the function to see the difference.

A naive approach might be to take the set of the permutations:

list(set(it.permutations([1, 1, 1])))
# [(1, 1, 1)]

However, this technique wastefully computes replicate permutations and discards them. A more efficient approach would be more_itertools.distinct_permutations, a third-party tool.

Code

import itertools as it
import more_itertools as mit


list(mit.distinct_permutations([1, 1, 1]))
# [(1, 1, 1)]

Performance

Using a larger iterable, we compare the performance of the naive and third-party techniques.

iterable = [1, 1, 1, 1, 1, 1]
len(list(it.permutations(iterable)))
# 720


%timeit -n 10000 list(set(it.permutations(iterable)))
# 10000 loops, best of 3: 111 µs per loop


%timeit -n 10000 list(mit.distinct_permutations(iterable))
# 10000 loops, best of 3: 16.7 µs per loop

We see more_itertools.distinct_permutations is an order of magnitude faster.


Details

From the source, a recursive algorithm (as seen in the accepted answer) is used to compute distinct permutations, thereby obviating wasteful computations. See the source code for more details.

Here is a recursive solution to the problem.

def permutation(num_array):
    res = []
    if len(num_array) <= 1:
        return [num_array]
    for num in set(num_array):
        temp_array = num_array.copy()
        temp_array.remove(num)
        res += [[num] + perm for perm in permutation(temp_array)]
    return res


arr = [1, 2, 2]
print(permutation(arr))

I came up with a very suitable implementation using itertools.product in this case (this is an implementation where you want all combinations):

unique_perm_list = [''.join(p) for p in itertools.product(['0', '1'], repeat=X) if ''.join(p).count('1') == somenumber]

This is essentially a combination (n over k) with n = X and somenumber = k. itertools.product() iterates over all k from k = 0 to k = X; the subsequent filtering with count ensures that just the permutations with the right number of ones are cast into a list. You can easily see that it works when you calculate n over k and compare it to len(unique_perm_list).
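
A quick way to check this (a sketch of my own; X and somenumber are hypothetical example values) is to compare the length of the list with the binomial coefficient:

import itertools
import math

X, somenumber = 6, 2   # example values, not from the answer above
unique_perm_list = [''.join(p) for p in itertools.product(['0', '1'], repeat=X)
                    if ''.join(p).count('1') == somenumber]
print(len(unique_perm_list), math.comb(X, somenumber))   # 15 15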

The best solution to this problem I have seen uses Knuth's "Algorithm L" (as noted previously by Gerrat in the comments to the original post):
http://stackoverflow.com/questions/12836385/how-can-i-interleave-or-create-unique-permutations-of-two-stings-without-recurs/12837695

Some timings:

Sorting [1]*12 + [0]*12 (2,704,156 unique permutations):
Algorithm L → 2.43 s
Luka Rahne's solution → 8.56 s
sympy's multiset_permutations() → 16.8 s
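
For reference, here is a minimal sketch of Algorithm L in Python (my own transcription of the textbook algorithm, not the code from the linked answer). Starting from the sorted sequence, it repeatedly produces the lexicographically next permutation, which skips duplicates automatically because each output is strictly greater than the previous one:

def algorithm_l(seq):
    a = sorted(seq)
    n = len(a)
    while True:
        yield tuple(a)
        # find the rightmost ascent a[j] < a[j + 1]
        j = n - 2
        while j >= 0 and a[j] >= a[j + 1]:
            j -= 1
        if j < 0:
            return
        # find the rightmost element larger than a[j]
        l = n - 1
        while a[j] >= a[l]:
            l -= 1
        a[j], a[l] = a[l], a[j]
        # reverse the tail to get the next permutation in order
        a[j + 1:] = reversed(a[j + 1:])

# e.g. list(algorithm_l([1, 1, 2])) gives [(1, 1, 2), (1, 2, 1), (2, 1, 1)]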

You can make a function that uses collections.Counter to get unique items and their counts from the given sequence, and uses itertools.combinations to pick combinations of indices for each unique item in each recursive call, and map the indices back to a list when all indices are picked:

from collections import Counter
from itertools import combinations


def unique_permutations(seq):
    def index_permutations(counts, index_pool):
        if not counts:
            yield {}
            return
        (item, count), *rest = counts.items()
        rest = dict(rest)
        for indices in combinations(index_pool, count):
            mapping = dict.fromkeys(indices, item)
            for others in index_permutations(rest, index_pool.difference(indices)):
                yield {**mapping, **others}
    indices = set(range(len(seq)))
    for mapping in index_permutations(Counter(seq), indices):
        yield [mapping[i] for i in indices]

so that [''.join(i) for i in unique_permutations('moon')] returns:

['moon', 'mono', 'mnoo', 'omon', 'omno', 'nmoo', 'oomn', 'onmo', 'nomo', 'oonm', 'onom', 'noom']

To generate unique permutations of ["A","B","C","D"] I use the following:

from itertools import combinations,chain


l = ["A","B","C","D"]
combs = (combinations(l, r) for r in range(1, len(l) + 1))
list_combinations = list(chain.from_iterable(combs))

Which generates:

[('A',),
('B',),
('C',),
('D',),
('A', 'B'),
('A', 'C'),
('A', 'D'),
('B', 'C'),
('B', 'D'),
('C', 'D'),
('A', 'B', 'C'),
('A', 'B', 'D'),
('A', 'C', 'D'),
('B', 'C', 'D'),
('A', 'B', 'C', 'D')]

Notice that duplicates are not created (e.g. items in combination with D are not generated again, as they already exist).

Example: This can then be used in generating terms of higher or lower order for OLS models via data in a Pandas dataframe.

import statsmodels.formula.api as smf
import pandas as pd


# create some data
pd_dataframe = pd.DataFrame(somedata)
response_column = "Y"


# generate combinations of column/variable names
l = [col for col in pd_dataframe.columns if col!=response_column]
combs = (combinations(l, r) for r in range(1, len(l) + 1))
list_combinations = list(chain.from_iterable(combs))


# generate OLS input string
formula_base = '{} ~ '.format(response_column)
list_for_ols = [":".join(list(item)) for item in list_combinations]
string_for_ols = formula_base + ' + '.join(list_for_ols)

Creates...

Y ~ A + B + C + D + A:B + A:C + A:D + B:C + B:D + C:D + A:B:C + A:B:D + A:C:D + B:C:D + A:B:C:D

Which can then be piped to your OLS regression

model = smf.ols(string_for_ols, pd_dataframe).fit()
model.summary()

Adapted to remove recursion, use a dictionary and numba for high performance but not using yield/generator style so memory usage is not limited:

import numba


@numba.njit
def perm_unique_fast(elements):  # memory usage too high for large permutations
    eset = set(elements)
    dictunique = dict()
    for i in eset: dictunique[i] = elements.count(i)
    result_list = numba.typed.List()
    u = len(elements)
    for _ in range(u): result_list.append(0)
    s = numba.typed.List()
    results = numba.typed.List()
    d = u
    while True:
        if d > 0:
            for i in dictunique:
                if dictunique[i] > 0: s.append((i, d - 1))
        i, d = s.pop()
        if d == -1:
            dictunique[i] += 1
            if len(s) == 0: break
            continue
        result_list[d] = i
        if d == 0: results.append(result_list[:])
        dictunique[i] -= 1
        s.append((i, -1))
    return results
import timeit
l = [2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
%timeit list(perm_unique(l))
#377 ms ± 26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


ltyp = numba.typed.List()
for x in l: ltyp.append(x)
%timeit perm_unique_fast(ltyp)
#293 ms ± 3.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


assert list(sorted(perm_unique(l))) == list(sorted([tuple(x) for x in perm_unique_fast(ltyp)]))

About 30% faster but still suffers a bit due to list copying and management.

Alternatively without numba but still without recursion and using a generator to avoid memory issues:

def perm_unique_fast_gen(elements):
    eset = set(elements)
    dictunique = dict()
    for i in eset: dictunique[i] = elements.count(i)
    result_list = list()  # numba.typed.List()
    u = len(elements)
    for _ in range(u): result_list.append(0)
    s = list()
    d = u
    while True:
        if d > 0:
            for i in dictunique:
                if dictunique[i] > 0: s.append((i, d - 1))
        i, d = s.pop()
        if d == -1:
            dictunique[i] += 1
            if len(s) == 0: break
            continue
        result_list[d] = i
        if d == 0: yield result_list
        dictunique[i] -= 1
        s.append((i, -1))
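
Note that perm_unique_fast_gen yields its internal result_list buffer rather than a copy, so copy each value before the next iteration if you want to keep it (a small usage example of my own; the order of results depends on set iteration order):

>>> sorted(tuple(p) for p in perm_unique_fast_gen([1, 1, 2]))
[(1, 1, 2), (1, 2, 1), (2, 1, 1)]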

This is my attempt without resorting to set / dict, as a generator using recursion, but using string as input. Output is also ordered in natural order:

def perm_helper(head: str, tail: str):
    if len(tail) == 0:
        yield head
    else:
        last_c = None
        for index, c in enumerate(tail):
            if last_c != c:
                last_c = c
                yield from perm_helper(
                    head + c, tail[:index] + tail[index + 1:]
                )


def perm_generator(word):
    yield from perm_helper("", sorted(word))

example:

from itertools import takewhile
word = "POOL"
list(takewhile(lambda w: w != word, (x for x in perm_generator(word))))
# output
# ['LOOP', 'LOPO', 'LPOO', 'OLOP', 'OLPO', 'OOLP', 'OOPL', 'OPLO', 'OPOL', 'PLOO', 'POLO']

ans = []
def fn(a, size):
    if size == 1:
        if a.copy() not in ans:
            ans.append(a.copy())
        return

    for i in range(size):
        fn(a, size - 1)
        if size & 1:
            a[0], a[size-1] = a[size-1], a[0]
        else:
            a[i], a[size-1] = a[size-1], a[i]
https://www.geeksforgeeks.org/heaps-algorithm-for-generating-permutations/
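
A small driver of my own showing how the function above fills the module-level ans list:

a = [1, 1, 2]
fn(a, len(a))
print(sorted(ans))   # [[1, 1, 2], [1, 2, 1], [2, 1, 1]]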

Maybe we can use set here to obtain unique permutations:

import itertools
print('unique perms >> ', set(itertools.permutations(A)))