I am trying to read only two columns from a .parquet file stored in S3, using an AWS Lambda.
The process_object function in the Lambda tries to read huge Parquet files from the R50 flow. Each partition in S3 contains 10 Parquet files, each of which is large (many MB). See the attachment.
The Lambda has memory limitations, so it cannot handle huge files.
I tried to optimize the Parquet read with the PyArrow library, so that only the needed columns are read:
# Assumes module-level imports:
#   import pyarrow.dataset as ds
#   import pandas as pd

elif object_name.endswith(".parquet"):
    rows = []
    if flow_name == "R50":
        path = "s3://" + bucket_name + "/" + object_prefix
        dataset = ds.dataset(path, format="parquet")
        # Project only the two needed columns
        table = dataset.to_table(columns=["nom_archive", "nom_fichier"])
        # Keep only rows where both columns are non-null
        filtered_table = table.filter(
            (table["nom_archive"].is_valid()) & (table["nom_fichier"].is_valid())
        )
        filtered_df = filtered_table.to_pandas()
        rows = filtered_df.to_dict(orient="records")
    df = pd.DataFrame(rows)
But it fails with this error message:
[ERROR] ArrowNotImplementedError: Got S3 URI but Arrow compiled without S3 support
How can I deal with that, please? Is there any other method to optimize reading Parquet files in my Lambda? Thank you.
asked Nov 19, 2024 at 9:53 by user24123007

1 Answer

I believe you can use the trick from here:
import awswrangler as wr

df = wr.s3.read_parquet(path=s3_url)
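awswrangler (the AWS SDK for pandas) reads from S3 through boto3 rather than through Arrow's native S3 filesystem, so it sidesteps the "compiled without S3 support" error. To also address the memory limit, the read can be projected to the two columns and chunked. A sketch, reusing the bucket_name/object_prefix variables from the question and assuming the AWS SDK for pandas layer is attached to the Lambda:

import awswrangler as wr
import pandas as pd

# Path built from the question's variables
path = "s3://" + bucket_name + "/" + object_prefix

# columns= projects only the two needed columns; chunked=True yields an
# iterator of DataFrames, so the whole dataset is never in memory at once.
frames = []
for chunk in wr.s3.read_parquet(
    path=path,
    columns=["nom_archive", "nom_fichier"],
    chunked=True,
):
    # Keep only rows where both columns are non-null
    frames.append(chunk.dropna(subset=["nom_archive", "nom_fichier"]))

df = pd.concat(frames, ignore_index=True)
rows = df.to_dict(orient="records")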
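If you prefer to stay with PyArrow, note that the error itself means the PyArrow build deployed in your Lambda was compiled without Arrow's native S3 filesystem. One workaround is to pass an fsspec filesystem (s3fs) explicitly instead of an s3:// URI, since PyArrow wraps fsspec filesystems and then never needs its own S3 support. A minimal sketch, assuming s3fs is packaged with the function and reusing the question's variables:

import pyarrow.dataset as ds
import s3fs

# fsspec-based S3 filesystem; PyArrow wraps it, so Arrow's own
# (missing) S3 support is never used
fs = s3fs.S3FileSystem()

# Note: no "s3://" scheme when an explicit filesystem is passed
dataset = ds.dataset(
    bucket_name + "/" + object_prefix,
    format="parquet",
    filesystem=fs,
)
table = dataset.to_table(columns=["nom_archive", "nom_fichier"])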