Disable Response Buffering on nginx for StreamingHTTPResponse

I have a Django application running on AWS Elastic Beanstalk. Part of this application streams an OpenAI chat completion response to the user in chunks. When running locally with gunicorn, the response is streamed in chunks as expected.

After deploying to AWS Elastic Beanstalk with an nginx server in front, the response is no longer streamed as it is in dev. It is buffered until the entire response is ready and then returned all at once.

I believe I need to disable response buffering in nginx. Below is my current structure.

        def generate_stream():
            ai_response = StringIO()  # accumulate the full reply so it can be saved afterwards
            try:
                response = client.chat.completions.create(
                    model="gpt-4o",
                    messages=context_messages,
                    temperature=0.7,
                    max_tokens=4096,
                    stream=True
                )

                for chunk in response:
                    content = chunk.choices[0].delta.content
                    if content:
                        ai_response.write(content)
                        yield content

                try:
                    ai_response_obj = Message.objects.create(conversation=conversation, content=ai_response.getvalue(), role="assistant")
                    print(f"AI response saved: {ai_response_obj}")
                except Exception as e:
                    print(f"Failed to save AI response: {e}")

            except (openai.APIConnectionError, openai.OpenAIError) as e:
                print(f"API connection error: {e}")
                yield "ERROR: There was an issue processing your request. Please check your internet connection and try again."

            except Exception as e:
                print(f"Unexpected error: {e}")
                yield "Error: An unexpected error occurred."

        response_server = StreamingHttpResponse(generate_stream(), content_type='text/plain')
        response_server['Cache-Control'] = 'no-cache'  # prevent client cache
        response_server["X-Accel-Buffering"] = "no"  # ask nginx not to buffer this response
        response_server.status_code = status.HTTP_201_CREATED
        return response_server
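
To check whether chunks actually arrive incrementally, a minimal client-side sketch like the following can be used (the URL and request payload here are placeholders, not the real endpoint):

    import time

    import requests  # third-party: pip install requests

    # Hypothetical endpoint and payload; substitute the real ones.
    URL = "http://localhost:8000/api/chatbot"

    with requests.post(URL, json={"message": "hello"}, stream=True) as r:
        start = time.monotonic()
        for chunk in r.iter_content(chunk_size=None, decode_unicode=True):
            # With working streaming, chunks print as they arrive;
            # with buffering, everything prints at once at the end.
            print(f"[{time.monotonic() - start:6.2f}s] {chunk!r}")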

I also have a config file located in my application directory at the path .platform/nginx/conf.d/myconf.conf, which looks like this:

server {
    location /api/chatbot {
        proxy_pass http://127.0.0.1:8000;  # assumed local gunicorn address
        proxy_buffering off;
        proxy_cache off;

        proxy_set_header Host $host;     
        proxy_set_header X-Real-IP $remote_addr; 
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
        proxy_set_header Connection "";
        proxy_set_header X-Accel-Buffering "no"; 

        chunked_transfer_encoding on;     
    }
}

When I send a request with this configuration, I never see the X-Accel-Buffering response header being set. I have been dealing with this for a while and would love some insight if anyone has had a similar issue.
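
For completeness, a quick way to dump the response headers the client actually receives (again just a sketch; same placeholder URL as above):

    import requests  # third-party: pip install requests

    # Hypothetical endpoint and payload; substitute the real ones.
    URL = "http://localhost:8000/api/chatbot"

    r = requests.post(URL, json={"message": "hello"}, stream=True)
    print(r.status_code)
    for name, value in r.headers.items():
        # If nginx forwards it, X-Accel-Buffering shows up in this list.
        print(f"{name}: {value}")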
