"Read-only file system" error in AWS Lambda when downloading a file from S3

When I drop file.csv into my S3 bucket, the Lambda function below fails with the following error. The file is not large, and I even added a 60-second sleep before opening the file for reading, but for some reason the file gets an extra suffix ".6CEdFe7C" appended to it. Why?

[Errno 30] Read-only file system: u'/file.csv.6CEdFe7C': IOError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 75, in lambda_handler
    s3.download_file(bucket, key, filepath)
  File "/var/runtime/boto3/s3/inject.py", line 104, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/var/runtime/boto3/s3/transfer.py", line 670, in download_file
    extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 685, in _download_file
    self._get_object(bucket, key, filename, extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 709, in _get_object
    extra_args, callback)
  File "/var/runtime/boto3/s3/transfer.py", line 723, in _do_get_object
    with self._osutil.open(filename, 'wb') as f:
  File "/var/runtime/boto3/s3/transfer.py", line 332, in open
    return open(filename, mode)
IOError: [Errno 30] Read-only file system: u'/file.csv.6CEdFe7C'

Code:

def lambda_handler(event, context):

    s3_response = {}
    counter = 0
    event_records = event.get("Records", [])

    s3_items = []
    for event_record in event_records:
        if "s3" in event_record:
            bucket = event_record["s3"]["bucket"]["name"]
            key = event_record["s3"]["object"]["key"]
            filepath = '/' + key
            print(bucket)
            print(key)
            print(filepath)
            s3.download_file(bucket, key, filepath)

The output of the above is:

mytestbucket
file.csv
/file.csv
[Errno 30] Read-only file system: u'/file.csv.6CEdFe7C'

If the key/file is "file.csv", then why does the s3.download_file method try to download "file.csv.6CEdFe7C"? My guess is that when the function is triggered the file is file.csv.xxxxx, but by the time it reaches line 75 it has been renamed to file.csv?


It seems that only /tmp is writable in AWS Lambda.

So this works:

filepath = '/tmp/' + key
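
Putting it together, here is a minimal sketch of the handler from the question with the /tmp fix applied. The boto3 client is assumed to be created at module level, and the os.path.basename guard against keys that contain "folder" prefixes is an extra precaution, not part of the original code:

import os
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for event_record in event.get("Records", []):
        if "s3" in event_record:
            bucket = event_record["s3"]["bucket"]["name"]
            key = event_record["s3"]["object"]["key"]
            # /tmp is the only writable location in the Lambda runtime;
            # basename strips any key prefix (an assumption, not in the original code)
            filepath = os.path.join('/tmp', os.path.basename(key))
            s3.download_file(bucket, key, filepath)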

Reference:

According to http://boto3.readthedocs.io/en/latest/guide/s3-example-download-file.html

the example passes the bucket name as the first parameter and the local path to download to as the last parameter.

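The example on that page is essentially the following (BUCKET_NAME, OBJECT_NAME and FILE_NAME are placeholders from the docs):

import boto3

s3 = boto3.client('s3')
# Bucket name first, then the object key, then the local path to write to.
# Inside Lambda, FILE_NAME must be a path under /tmp.
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')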

On the other hand, the Amazon documentation states that Lambda's /tmp directory provides 512 MB of ephemeral storage.

So we have 512 MB to create files in. Here is my Lambda code, which works perfectly:

.download_file(Key=nombre_archivo,Filename='/tmp/{}'.format(nuevo_nombre))
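
For context, a minimal sketch of how that call could be wired up, assuming an S3 Bucket resource; apart from download_file itself, all names here are placeholders, not taken from the original answer:

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('mi-bucket')    # placeholder bucket name
nombre_archivo = 'reporte.csv'     # object key in the bucket (placeholder)
nuevo_nombre = 'reporte.csv'       # local file name to use under /tmp (placeholder)

# Bucket.download_file takes the object key and the local destination path
bucket.download_file(Key=nombre_archivo, Filename='/tmp/{}'.format(nuevo_nombre))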

I noticed that when I uploaded the Lambda code directly as a zip file I could only write to the /tmp folder, but when the code was deployed from S3 I could also write to the project root folder.

This also works perfectly in C#:

using (var fileStream = File.Create("/tmp/" + fName))
{
    str.Seek(0, SeekOrigin.Begin);
    str.CopyTo(fileStream);
}