Algorithms based on number base systems?

I've noticed recently that there are a lot of algorithms based in part or in whole on clever uses of numbers in creative bases. For example:

  • Binomial heaps are based on binary numbers, while the more complex skew binomial heaps are based on skew binary numbers.
  • Some algorithms for generating lexicographically ordered permutations are based on the factorial number system.
  • Tries can be thought of as trees that look at one digit of the string at a time, for an appropriate base.
  • Huffman encoding trees are designed so that each edge in the tree encodes a zero or one in some binary representation.
  • Fibonacci coding is used in Fibonacci search and to invert certain types of logarithms.

My question is: what other algorithms are out there that use a clever number system as a key step of their intuition or proof? I'm thinking about putting together a talk on the subject, so the more examples I have to draw from, the better.


"Ternary numbers can be used to convey self-similar structures like a Sierpinski Triangle or a Cantor set conveniently." source

"Quaternary numbers are used in the representation of 2D Hilbert curves." source

"The quater-imaginary numeral system was first proposed by Donald Knuth in 1955, in a submission to a high-school science talent search. It is a non-standard positional numeral system which uses the imaginary number 2i as its base. It is able to represent every complex number using only the digits 0, 1, 2, and 3." source

"Roman numerals are a biquinary system." source

"Senary may be considered useful in the study of prime numbers since all primes, when expressed in base-six, other than 2 and 3 have 1 or 5 as the final digit." source

"Sexagesimal (base 60) is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, it was passed down to the ancient Babylonians, and it is still used — in a modified form — for measuring time, angles, and the geographic coordinates that are angles." source

etc...

This list is a good starting point.

Radix sort can use various number bases: http://en.wikipedia.org/wiki/Radix_sort. It's a pretty interesting implementation of a bucket sort.
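
A minimal sketch of a least-significant-digit radix sort for non-negative integers, with the base as a parameter (the function name is mine, not from a particular library):

```python
def radix_sort(xs, base=10):
    """LSD radix sort: repeatedly distribute into `base` buckets by digit."""
    if not xs:
        return xs
    place = 1
    while place <= max(xs):
        buckets = [[] for _ in range(base)]
        for x in xs:
            buckets[(x // place) % base].append(x)
        xs = [x for bucket in buckets for x in bucket]  # stable concatenation
        place *= base
    return xs
```

Choosing a larger base means fewer passes but more buckets per pass, which is exactly the base-system trade-off.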

Cryptography makes extensive use of integer rings (modular arithmetic) and also finite fields, whose operations are intuitively based on the way polynomials with integer coefficients behave.

I really like this one for converting binary numbers into Gray codes: http://www.matrixlab-examples.com/gray-code.html
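
The binary-to-Gray conversion is just `n XOR (n >> 1)`; a sketch of both directions (the linked page describes the same construction):

```python
def binary_to_gray(n):
    """Convert a binary number to its reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray code by folding the shifted bits back in."""
    n = g
    while g:
        g >>= 1
        n ^= g
    return n
```

The defining property is that consecutive codes differ in exactly one bit.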

Not exactly a clever base system but a clever use of the base system: van der Corput sequences are low-discrepancy sequences formed by reversing the base-n representation of numbers. They're used to construct the 2-d Halton sequences, which look kind of like this.
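
A sketch of the van der Corput construction: mirror the base-b digits of i about the radix point (function name is mine):

```python
def van_der_corput(i, base=2):
    """Reverse the base-`base` digits of i across the radix point."""
    x, denom = 0.0, 1
    while i:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x
```

A 2-d Halton sequence pairs `van_der_corput(i, 2)` with `van_der_corput(i, 3)`.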

Here is a good post on using ternary numbers to solve the "counterfeit coin" problem (where you have to detect a single counterfeit coin in a bag of regular ones, using a balance as few times as possible).
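
For the simplified variant where the counterfeit is known to be heavier, the ternary idea can be sketched directly: each weighing splits the candidates into three groups, so ceil(log3 n) weighings suffice. (This is my own illustration of the base-3 intuition, not the linked post's solution to the harder lighter-or-heavier puzzle.)

```python
def find_heavy(weights):
    """Locate the single heavier coin by repeated three-way splits."""
    candidates = list(range(len(weights)))
    while len(candidates) > 1:
        k = (len(candidates) + 2) // 3           # size of each weighed pan
        pan_a, pan_b = candidates[:k], candidates[k:2 * k]
        aside = candidates[2 * k:]
        wa = sum(weights[i] for i in pan_a)
        wb = sum(weights[i] for i in pan_b)
        if wa > wb:
            candidates = pan_a
        elif wb > wa:
            candidates = pan_b
        else:                                    # pans balance: coin is aside
            candidates = aside
    return candidates[0]
```

Each weighing has three outcomes, so it reads off one ternary digit of the counterfeit's position.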

Chris Okasaki has a very good chapter in his book Purely Functional Data Structures that discusses "Numerical Representations": essentially, take some representation of a number and convert it into a data structure. To give a flavor, here are the sections of that chapter:

  1. Positional Number Systems
  2. Binary Numbers (Binary Random-Access Lists, Zeroless Representations, Lazy Representations, Segmented Representations)
  3. Skew Binary Numbers (Skew Binary Random Access Lists, Skew Binomial Heaps)
  4. Trinary and Quaternary Numbers

Some of the best tricks, distilled:

  • Distinguish between dense and sparse representations of numbers (usually you see this in matrices or graphs, but it's applicable to numbers too!)
  • Redundant number systems (systems that have more than one representation of a number) are useful.
  • If you arrange the first digit to be non-zero or use a zeroless representation, retrieving the head of the data structure can be efficient.
  • Avoid cascading borrows (from taking the tail of the list) and carries (from consing onto the list) by segmenting the data structure.
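
To give a taste of the skew binary trick from section 3: in a sparse representation (a list of the nonzero digits' weights, each of the form 2^k - 1, least significant first, with at most the two lowest weights equal), increment is O(1) because at most one "carry" fires. A sketch under those assumptions (my own illustration of Okasaki's idea):

```python
def skew_inc(weights):
    """Increment a skew binary number given as its list of nonzero-digit
    weights (each 2**k - 1, least significant first)."""
    if len(weights) >= 2 and weights[0] == weights[1]:
        # the single allowed '2' digit carries: 1 + 2*(2^k - 1) = 2^(k+1) - 1
        return [2 * weights[0] + 1] + weights[2:]
    return [1] + weights
```

This is exactly why consing onto a skew binary random-access list takes O(1) time.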

Here is also the reference list for that chapter:

  • Guibas, McCreight, Plass and Roberts: A new representation for linear lists.
  • Myers: An applicative random-access stack
  • Carlsson, Munro, Poblete: An implicit binomial queue with constant insertion time.
  • Kaplan, Tarjan: Purely functional lists with catenation via recursive slow-down.

Great question. The list is long indeed. Telling time is a simple instance of mixed bases (days | hours | minutes | seconds | am/pm).
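
A sketch of that mixed-radix conversion, peeling off one "digit" per unit with divmod:

```python
def seconds_to_dhms(total_seconds):
    """Convert a second count into the mixed-radix representation
    (days, hours, minutes, seconds): bases 24, 60, 60 from left to right."""
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    return days, hours, minutes, seconds
```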

I've created a meta-base enumeration n-tuple framework if you're interested in hearing about it. It's some very sweet syntactic sugar for base numbering systems. It's not released yet. Email my username (at gmail).

I vaguely remember something about double base systems for speeding up some matrix multiplication.

A double-base system is a redundant system that uses two bases for one number:

 n = Sum(i=1..l) c_i * 2^a_i * 3^b_i,  where c_i in {-1, 1}

Redundant means that one number can be specified in many ways.

You can look for the article "Hybrid Algorithm for the Computation of the Matrix Polynomial" by Vassil Dimitrov, Todor Cooklev.

Trying to give the best short overview I can.

They were trying to compute matrix polynomial G(N,A) = I + A + ... + A^{N-1}.

Supposing N is composite, G(N,A) = G(J,A) * G(K, A^J) where N = JK. Applying this with J = 2, we get:

         / (I + A) * G(K, A^2)        , if N = 2K
G(N,A) = |
         \ I + (A + A^2) * G(K, A^2)  , if N = 2K + 1

also,

         / (I + A + A^2) * G(K, A^3)               , if N = 3K
G(N,A) = | I + (A + A^2 + A^3) * G(K, A^3)         , if N = 3K + 1
         \ I + A + A*(A + A^2 + A^3) * G(K, A^3)   , if N = 3K + 2

Some of these equations are cheaper to apply in the base-2 system and some in the base-3 system, so it is a good idea to choose the better one depending on N. But this requires a fast modulo operation for both 2 and 3. Here is where the double base comes in: you can do the modulo operation fast for both of them, giving you a combined system:

         / (I + A + A^2) * G(K, A^3)       , if N = 0 or 3 mod 6
G(N,A) = | I + (A + A^2 + A^3) * G(K, A^3) , if N = 1 or 4 mod 6
         | (I + A) * G(3K + 1, A^2)        , if N = 2 mod 6
         \ I + (A + A^2) * G(3K + 2, A^2)  , if N = 5 mod 6

Look at the article for a better explanation, as I'm not an expert in this area.
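
As a sanity check of the base-2 recurrence above, here is a sketch for plain list-of-lists matrices (the helper names are mine); the double-base algorithm in the paper chooses between this and the base-3 recurrence depending on N:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def geom_sum(N, A):
    """G(N, A) = I + A + ... + A^(N-1), via the base-2 recurrence."""
    n = len(A)
    if N == 0:
        return [[0] * n for _ in range(n)]
    if N == 1:
        return identity(n)
    A2 = mat_mul(A, A)
    if N % 2 == 0:                      # N = 2K: (I + A) * G(K, A^2)
        return mat_mul(mat_add(identity(n), A), geom_sum(N // 2, A2))
    # N = 2K + 1: I + (A + A^2) * G(K, A^2)
    return mat_add(identity(n),
                   mat_mul(mat_add(A, A2), geom_sum(N // 2, A2)))
```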

String hashing (e.g. in the Rabin-Karp algorithm) often evaluates the string as a base-b number of n digits (where n is the length of the string, and b is some base chosen to be large enough). For example, the string "ABCD" can be hashed as:

'A'*b^3+'B'*b^2+'C'*b^1+'D'*b^0

Substituting ASCII values for the characters and taking b to be 256, this becomes:

65*256^3+66*256^2+67*256^1+68*256^0

Though, in most practical applications, the resulting value is taken modulo some reasonably sized number to keep the result sufficiently small.
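
A sketch of that polynomial evaluation via Horner's rule, with the base and modulus as parameters (the default values here are just illustrative choices):

```python
def poly_hash(s, base=256, mod=10**9 + 7):
    """Evaluate the string as a base-`base` number, reduced mod `mod`."""
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h
```

In Rabin-Karp, the hash of a sliding window is then updated in O(1) per step: subtract the leading character times base^(n-1), shift, and add the new trailing character (all mod `mod`).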

I recently came across a cool algorithm for generating subsets in lexicographical order based on the binary representations of the numbers between 0 and 2^n - 1. It uses the numbers' bits both to determine which elements should be chosen for the set and to locally reorder the generated sets to get them into lexicographical order. If you're curious, I have a writeup posted here.
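
The basic bits-to-subsets correspondence (without the lexicographic reordering from the writeup) can be sketched as:

```python
def subsets(xs):
    """Enumerate all subsets of xs: bit j of the counter decides whether
    xs[j] is included."""
    n = len(xs)
    for mask in range(2 ** n):
        yield [x for j, x in enumerate(xs) if (mask >> j) & 1]
```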

Also, many algorithms are based on scaling (such as a weakly-polynomial version of the Ford-Fulkerson max-flow algorithm), which uses the binary representation of the numbers in the input problem to progressively refine a rough approximation into a complete solution.

Exponentiation by squaring is based on the binary representation of the exponent.
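
A sketch: scan the exponent's binary digits, squaring once per digit and multiplying in on the 1 bits:

```python
def fast_pow(a, n):
    """Compute a**n using the binary representation of n."""
    result = 1
    while n:
        if n & 1:          # this binary digit of n is 1
            result *= a
        a *= a             # advance to the next binary digit
        n >>= 1
    return result
```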

I read your question the other day, and today was faced with a problem: How do I generate all partitionings of a set? The solution that occurred to me, and that I used (maybe due to reading your question) was this:

For a set with (n) elements, where I need (p) partitions, count through all (n) digit numbers in base (p).

Each number corresponds to a partitioning. Each digit corresponds to an element in the set, and the value of the digit tells you which partition to put the element in.

It's not amazing, but it's neat. It's complete, causes no redundancy, and uses arbitrary bases. The base you use depends on the specific partitioning problem.
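
A sketch of that scheme (the function name is mine; note that it enumerates assignments into p *labeled* blocks, so two assignments that differ only by relabeling the blocks are counted separately):

```python
def labeled_partitions(xs, p):
    """Enumerate all ways to place the elements of xs into p labeled
    blocks: each base-p digit of the counter assigns one element."""
    n = len(xs)
    for code in range(p ** n):
        blocks = [[] for _ in range(p)]
        for x in xs:
            code, digit = divmod(code, p)
            blocks[digit].append(x)
        yield blocks
```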

In Hacker's Delight (a book every programmer should know, in my eyes) there is a complete chapter about unusual bases, such as -2 as a base (yes, negative bases) or -1+i (with i the imaginary unit, sqrt(-1)) as a base. There is also a nice calculation of what the best base is in terms of hardware design. For all who don't want to read it: the solution of the equation is e, so you can go with 2 or 3; 3 would be a little bit better (by a factor of 1.056 compared to 2), but 2 is more practical technically.
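
A sketch of conversion to base -2, along the lines of that chapter: divide by -2 repeatedly, adjusting the remainder so each digit lands in {0, 1}:

```python
def to_negabinary(n):
    """Return the base -2 digits of n, most significant first.
    Works for negative n too, with no sign needed."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:          # force the remainder into {0, 1}
            n += 1
            r += 2
        digits.append(r)
    return digits[::-1]
```

The charm of base -2 is that every integer, positive or negative, gets a representation with no sign bit.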

Other things that come to mind are Gray counters (when you count in this system, only one bit changes at a time; this property is often used in hardware design to reduce metastability issues) and the generalisation of the already-mentioned Huffman encoding: arithmetic encoding.

One of my favourites using base 2 is arithmetic encoding. It's unusual because the heart of the algorithm uses representations of numbers between 0 and 1 in binary.

Maybe the AKS primality test is an example.