>>> data = '''\
Shasta California 14,200
McKinley Alaska 20,300
Fuji Japan 12,400
'''
>>> for line in data.splitlines():
...     print(line.split())
['Shasta', 'California', '14,200']
['McKinley', 'Alaska', '20,300']
['Fuji', 'Japan', '12,400']
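The same pattern extends if you want structured records rather than raw token lists. As a rough sketch (the tuple layout and the comma-stripping of the elevation column are my own assumptions, not part of the example above):

>>> [(name, state, int(elev.replace(',', '')))
...  for name, state, elev in (line.split() for line in data.splitlines())]
[('Shasta', 'California', 14200), ('McKinley', 'Alaska', 20300), ('Fuji', 'Japan', 12400)]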
>>> data = '''\
Line 1
Line 2
Line 3
Line 4'''
>>> data.count('\n') # Inaccurate
3
>>> len(data.splitlines()) # Accurate, but slow
4
>>> data.count('\n') + (not data.endswith('\n')) # Accurate and fast
4
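For contrast, when the text does end with a newline, the naive count is already right and the correction term adds nothing. A quick check (the name data2 is just for illustration):

>>> data2 = 'Line 1\nLine 2\n'
>>> data2.count('\n') + (not data2.endswith('\n'))
2
>>> len(data2.splitlines())
2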
The question from @Kaz: why were two very different algorithms shoehorned into a single function?
The signature of str.split is about 20 years old, and a number of the APIs from that era are strictly pragmatic. While not perfect, the method signature isn't "terrible" either. For the most part, Guido's API design choices have stood the test of time.
>>> print(str.split.__doc__)
S.split([sep [,maxsplit]]) -> list of strings

Return a list of the words in the string S, using sep as the
delimiter string. If maxsplit is given, at most maxsplit
splits are done. If sep is not specified or is None, any
whitespace string is a separator and empty strings are removed
from the result.
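To make the two behaviours described in that docstring concrete, here is a side-by-side comparison (an illustration of my own, not part of the original docs):

>>> 'a  b   c'.split()      # sep omitted: runs of whitespace collapse, no empty strings
['a', 'b', 'c']
>>> 'a  b   c'.split(' ')   # explicit sep: every single space is a cut point
['a', '', 'b', '', '', 'c']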