Download large file in Python with requests

Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is it's not possible to keep the whole file in memory; I need to read it in chunks. And this is the problem with the following code:

import requests


def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return

For some reason it doesn't work this way: it still loads the response into memory before it is saved to a file.


Your chunk size could be too large; have you tried dropping that, maybe 1024 bytes at a time? (Also, you could use a with statement to tidy up the syntax.)

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
    return

Incidentally, how are you deducing that the response has been loaded into memory?
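If you want to check that yourself, one rough way (a sketch assuming a Unix-like system, since the standard-library resource module is not available on Windows) is to print the process's peak resident set size while the download runs:

import resource

# ru_maxrss is reported in kilobytes on Linux and in bytes on macOS
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS so far: {peak}")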

It sounds as if Python is not flushing the data to the file. Based on other SO questions, you could try f.flush() and os.fsync() to force the write to the file and free the memory:

    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # needs: import os

With the following streaming code, the Python memory usage is restricted regardless of the size of the downloaded file:

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                # If you have a chunk-encoded response, uncomment the if
                # below and set the chunk_size parameter to None.
                # if chunk:
                f.write(chunk)
    return local_filename

Note that the number of bytes returned using iter_content is not exactly the chunk_size; it is often a much larger number, and it differs on every iteration.
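To see this for yourself, a quick sketch (the URL here is only a placeholder) is to log the size of every chunk as it arrives:

import requests

url = 'https://example.com/some/large/file'  # placeholder URL

with requests.get(url, stream=True) as r:
    for i, chunk in enumerate(r.iter_content(chunk_size=8192)):
        # With compressed or chunk-encoded responses the actual sizes
        # can differ noticeably from the requested 8192 bytes
        print(i, len(chunk))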

See body-content-workflow and Response.iter_content for further reference.

It's much easier if you use Response.raw and shutil.copyfileobj():

import requests
import shutil


def download_file(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)

    return local_filename

This streams the file to disk without using excessive memory, and the code is simple.

Note: according to the documentation, Response.raw will not decode the gzip and deflate transfer-encodings, so you will need to do this manually.
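One way to handle that (a sketch relying on the decode_content flag that urllib3 exposes on Response.raw) is to switch decoding on before copying; the function name below is just an illustrative variant of the snippet above:

import requests
import shutil


def download_file_decoded(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        # Ask urllib3 to undo gzip/deflate Content-Encoding while streaming
        r.raw.decode_content = True
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename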

This is not exactly what the OP was asking, but... it is ridiculously easy to do this with urllib:

from urllib.request import urlretrieve


url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
dst = 'ubuntu-16.04.2-desktop-amd64.iso'
urlretrieve(url, dst)
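urlretrieve also takes a reporthook callback, which is handy for a rough progress display on files this big (a sketch reusing the url and dst names above; report is just an illustrative helper name):

def report(block_num, block_size, total_size):
    # total_size is -1 when the server does not send Content-Length
    if total_size > 0:
        done = min(block_num * block_size, total_size)
        print(f'\r{done / total_size:.1%}', end='')


urlretrieve(url, dst, reporthook=report)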

Or this way, if you want to save it to a temporary file:

from urllib.request import urlopen
from shutil import copyfileobj
from tempfile import NamedTemporaryFile


url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
with urlopen(url) as fsrc, NamedTemporaryFile(delete=False) as fdst:
    copyfileobj(fsrc, fdst)

I watched the process with:

watch 'ps -p 18647 -o pid,ppid,pmem,rsz,vsz,comm,args; ls -al *.iso'

I could see the file growing, but memory usage stayed at 17 MB. Am I missing something?

Based on Roman's most upvoted comment above, here is my implementation, including a "download as" and a "retries" mechanism:

import logging
import os
import time
from urllib.parse import urlparse

import requests

logger = logging.getLogger(__name__)


def download(url: str, file_path='', attempts=2):
    """Downloads a URL content into a file (with large file support by streaming)

    :param url: URL to download
    :param file_path: Local file name to contain the data downloaded
    :param attempts: Number of attempts
    :return: New file path. Empty string if the download failed
    """
    if not file_path:
        file_path = os.path.realpath(os.path.basename(url))
    logger.info(f'Downloading {url} content to {file_path}')
    url_sections = urlparse(url)
    if not url_sections.scheme:
        logger.debug('The given url is missing a scheme. Adding http scheme')
        url = f'http://{url}'
        logger.debug(f'New url: {url}')
    for attempt in range(1, attempts + 1):
        try:
            if attempt > 1:
                time.sleep(10)  # 10 seconds wait time between downloads
            with requests.get(url, stream=True) as response:
                response.raise_for_status()
                with open(file_path, 'wb') as out_file:
                    for chunk in response.iter_content(chunk_size=1024 * 1024):  # 1MB chunks
                        out_file.write(chunk)
                logger.info('Download finished successfully')
                return file_path
        except Exception as ex:
            logger.error(f'Attempt #{attempt} failed with error: {ex}')
    return ''
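A minimal usage sketch (the URL is a placeholder; basicConfig just makes the logger output visible):

logging.basicConfig(level=logging.INFO)

saved_path = download('https://example.com/big.iso')  # placeholder URL
if not saved_path:
    print('Download failed after all attempts')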

Use the wget module for Python instead. Here is a snippet:

import wget
wget.download(url)
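If you want to control the output file name, the module also takes an out argument (a sketch; url is assumed to be defined as in the snippet above, and the file name is just an example):

import wget

# out sets the destination file name; a simple progress bar is shown by default
wget.download(url, out='downloaded.iso')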

requests is nice, but how about a socket solution?

def stream_(host):
    import socket
    import ssl
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        with context.wrap_socket(sock, server_hostname=host) as wrapped_socket:
            wrapped_socket.connect((socket.gethostbyname(host), 443))
            wrapped_socket.send(
                "GET / HTTP/1.1\r\nHost:thiscatdoesnotexist.com\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9\r\n\r\n".encode())

            # Read the response headers one byte at a time until the blank line
            resp = b""
            while resp[-4:-1] != b"\r\n\r":
                resp += wrapped_socket.recv(1)
            else:
                resp = resp.decode()
                content_length = int("".join([tag.split(" ")[1] for tag in resp.split("\r\n") if "content-length" in tag.lower()]))

            # Read exactly Content-Length bytes of body
            image = b""
            while content_length > 0:
                data = wrapped_socket.recv(2048)
                if not data:
                    print("EOF")
                    break
                image += data
                content_length -= len(data)
            with open("image.jpeg", "wb") as file:
                file.write(image)


Here is another approach for the async chunked download use case, without reading all the file content into memory.
It means that both the read from the URL and the write to the file are implemented with asyncio libraries (aiohttp to read from the URL and aiofiles to write the file). The following code should work on Python 3.7 and later.
Just edit the SRC_URL and DEST_FILE variables before copying and pasting.

import aiofiles
import aiohttp
import asyncio


async def async_http_download(src_url, dest_file, chunk_size=65536):
    async with aiofiles.open(dest_file, 'wb') as fd:
        async with aiohttp.ClientSession() as session:
            async with session.get(src_url) as resp:
                async for chunk in resp.content.iter_chunked(chunk_size):
                    await fd.write(chunk)


SRC_URL = "/path/to/url"
DEST_FILE = "/path/to/file/on/local/machine"


asyncio.run(async_http_download(SRC_URL, DEST_FILE))