timeit versus timing decorator

I'm trying to time some code. First, I used a timing decorator:

#!/usr/bin/env python

import time
from itertools import izip
from random import shuffle


def timing_val(func):
    def wrapper(*arg, **kw):
        '''source: http://www.daniweb.com/code/snippet368.html'''
        t1 = time.time()
        res = func(*arg, **kw)
        t2 = time.time()
        return (t2 - t1), res, func.__name__
    return wrapper


@timing_val
def time_izip(alist, n):
    i = iter(alist)
    return [x for x in izip(*[i] * n)]


@timing_val
def time_indexing(alist, n):
    return [alist[i:i + n] for i in range(0, len(alist), n)]


func_list = [locals()[key] for key in locals().keys()
             if callable(locals()[key]) and key.startswith('time')]
shuffle(func_list)  # Shuffle, just in case the order matters


alist = range(1000000)
times = []
for f in func_list:
    times.append(f(alist, 31))


times.sort(key=lambda x: x[0])
for (time, result, func_name) in times:
    print '%s took %0.3fms.' % (func_name, time * 1000.)

which yields

% test.py
time_indexing took 73.230ms.
time_izip took 122.057ms.

And here is the timing using timeit:

% python -m timeit -s '' 'alist=range(1000000);[alist[i:i+31] for i in range(0, len(alist), 31)]'
10 loops, best of 3: 64 msec per loop
% python -m timeit -s 'from itertools import izip' 'alist=range(1000000);i=iter(alist);[x for x in izip(*[i]*31)]'
10 loops, best of 3: 66.5 msec per loop

Using timeit, the results are virtually the same, but with the timing decorator it looks like time_indexing is noticeably faster than time_izip.

What accounts for this difference?

Should either of these methods be trusted?

If so, which one?


Just a guess, but could the difference be due to the order-of-magnitude difference in the range() values?

From your original source:

alist=range(1000000)

From your timeit example:

alist=range(100000)

For what it's worth, here are the results on my system with the range set to 1 million:

$ python -V
Python 2.6.4rc2


$ python -m timeit -s 'from itertools import izip' 'alist=range(1000000);i=iter(alist);[x for x in izip(*[i]*31)]'
10 loops, best of 3: 69.6 msec per loop


$ python -m timeit -s '' 'alist=range(1000000);[alist[i:i+31] for i in range(0, len(alist), 31)]'
10 loops, best of 3: 67.6 msec per loop

I wasn't able to get your other code to run, since I could not import the "decorator" module on my system.


Update - I see the same discrepancy you do when I run your code without the decorator involved.

$ ./test.py
time_indexing took 84.846ms.
time_izip took 132.574ms.
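
(A minimal sketch of what I mean by timing without the decorator: call the plain, undecorated functions from the question and wrap each call in time.time() directly. The names time_indexing and alist are assumed to be the ones defined in the question.)

import time

t1 = time.time()
time_indexing(alist, 31)        # undecorated version of the question's function
t2 = time.time()
print 'time_indexing took %0.3fms.' % ((t2 - t1) * 1000.)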

Thanks for posting this question; I learned something today. =)

Regardless of this particular exercise, I'd imagine that using timeit is the much safer and more reliable option. It is also cross-platform, unlike your solution.
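
For instance, the same measurement can be scripted through the timeit module rather than the command line; a minimal sketch, reusing the izip statement from the question:

import timeit

t = timeit.timeit(
    'i = iter(alist); [x for x in izip(*[i]*31)]',
    setup='from itertools import izip; alist = range(1000000)',
    number=10)
print 'izip version: %0.1f msec per loop' % (t / 10 * 1000.)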

Use timeit. Running the test more than once gives me much better results.

func_list = [locals()[key] for key in locals().keys()
             if callable(locals()[key]) and key.startswith('time')]


alist = range(1000000)
times = []
for f in func_list:
    n = 10
    times.append(min(t for t, _, _ in (f(alist, 31) for i in range(n))))


for (time, func_name) in zip(times, func_list):
    print '%s took %0.3fms.' % (func_name, time * 1000.)

->

<function wrapper at 0x01FCB5F0> took 39.000ms.
<function wrapper at 0x01FCB670> took 41.000ms.
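
(Taking the minimum over several runs is exactly what timeit.repeat is for; a small sketch along the same lines, reusing the slicing statement from the question:)

import timeit

best = min(timeit.repeat(
    '[alist[i:i+31] for i in range(0, len(alist), 31)]',
    setup='alist = range(1000000)',
    repeat=3, number=10))
print 'best of 3: %0.1f msec per loop' % (best / 10 * 1000.)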

I got tired of from __main__ import foo and now use this -- for simple args, ones for which %r works, and not in IPython.
(Why does timeit work only on strings, not on thunks / closures, i.e. timefunc(f, arbitrary args)? See the note after this example.)


import timeit


def timef(funcname, *args, **kwargs):
    """ timeit a func with args, e.g.
            for window in (3, 31, 63, 127, 255):
                timef("filter", window, 0)
        This doesn't work in ipython;
        see Martelli, "ipython plays weird tricks with __main__" in Stackoverflow
    """
    argstr = ", ".join(["%r" % a for a in args]) if args else ""
    kwargstr = ", ".join(["%s=%r" % (k, v) for k, v in kwargs.items()]) \
        if kwargs else ""
    comma = ", " if (argstr and kwargstr) else ""
    fargs = "%s(%s%s%s)" % (funcname, argstr, comma, kwargstr)
    # print "test timef:", fargs
    t = timeit.Timer(fargs, "from __main__ import %s" % funcname)
    ntime = 3
    print "%.0f usec %s" % (t.timeit(ntime) * 1e6 / ntime, fargs)


#...............................................................................
if __name__ == "__main__":
    def f(*args, **kwargs):
        pass

    try:
        from __main__ import f
    except:
        print "ipython plays weird tricks with __main__, timef won't work"
    timef("f")
    timef("f", 1)
    timef("f", """ a b """)
    timef("f", 1, 2)
    timef("f", x=3)
    timef("f", x=3)
    timef("f", 1, 2, x=3, y=4)

Added: see also "ipython plays weird tricks with __main__", Martelli in running-doctests-through-ipython.
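
(Regarding the parenthetical question above: since Python 2.6, timeit.Timer also accepts a zero-argument callable in place of a statement string, so a closure over the arguments can be timed directly. A small sketch of that approach, with a dummy f for illustration:)

import timeit

def f(*args, **kwargs):
    pass

t = timeit.Timer(lambda: f(1, 2, x=3))   # zero-arg closure instead of a string
ntime = 3
print "%.0f usec f(1, 2, x=3)" % (t.timeit(ntime) * 1e6 / ntime)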

I would use a timing decorator, because you can use annotations to sprinkle timing around your code rather than cluttering your code with timing logic.

import time


def timeit(f):

    def timed(*args, **kw):

        ts = time.time()
        result = f(*args, **kw)
        te = time.time()

        print 'func:%r args:[%r, %r] took: %2.4f sec' % \
              (f.__name__, args, kw, te - ts)
        return result

    return timed

Using the decorator is easy: either use annotations,

@timeit
def compute_magic(n):
    # function definition
    # ....

or re-alias the function you want to time:

compute_magic = timeit(compute_magic)

Use wrapping from functools to improve Matt Alcock's answer.

from functools import wraps
from time import time


def timing(f):
    @wraps(f)
    def wrap(*args, **kw):
        ts = time()
        result = f(*args, **kw)
        te = time()
        print 'func:%r args:[%r, %r] took: %2.4f sec' % \
              (f.__name__, args, kw, te - ts)
        return result
    return wrap

In an example:

@timing
def f(a):
    for _ in range(a):
        i = 0
    return -1

Invoking method f wrapped with @timing:

f(100000000)

func:'f' args:[(100000000,), {}] took: 14.2240 sec

The advantage of this is that it preserves attributes of the original function; that is, metadata like the function name and docstring is correctly preserved on the returned function.
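
(A quick illustration of that point, assuming the timing decorator above is in scope; slow_add here is just a made-up example function:)

@timing
def slow_add(a, b):
    """Add two numbers, slowly."""
    return a + b

print slow_add.__name__   # 'slow_add', not 'wrap', thanks to @wraps
print slow_add.__doc__    # the original docstring is preserved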

This is the kind of need where you hope a library provides a portable solution -- DRY! Fortunately, funcy.log_durations comes to the rescue.

Example copied from documentation:

@log_durations(logging.info)
def do_hard_work(n):
    samples = range(n)
    # ...


# 121 ms in do_hard_work(10)
# 143 ms in do_hard_work(11)
# ...

Browse the funcy documentation for other variants such as different keyword arguments and @log_iter_durations.
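
(Note that log_durations is handed a logging callable, so nothing appears unless logging is configured; a minimal self-contained sketch, assuming output should go to the console at INFO level:)

import logging
from funcy import log_durations

logging.basicConfig(level=logging.INFO)


@log_durations(logging.info)
def do_hard_work(n):
    samples = range(n)
    # ... the real work would go here


do_hard_work(10)   # logs something like "121 ms in do_hard_work(10)"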

Inspired by Micah Smith's answer, I made funcy print directly instead (and not use logging module).

The version below is convenient for use in Google Colab.

# pip install funcy
from funcy import print_durations


@print_durations()
def myfunc(n=0):
    for i in range(n):
        pass


myfunc(123)
myfunc(123456789)

# 5.48 mks in myfunc(123)
# 3.37 s in myfunc(123456789)