From 1c5466b9794d96f5cbb5b899587c38bdb0995f01 Mon Sep 17 00:00:00 2001
From: Philip Hazel
Date: Tue, 21 Dec 2004 11:12:13 +0000
Subject: [PATCH] Exim crashed for a huge SMTP error response; increasing
 big_buffer_size for Exiscan made this go away, but I've now also made the
 code more robust.

---
 doc/doc-txt/ChangeLog | 10 +++++++++-
 src/src/deliver.c     | 22 ++++++++++++----------
 2 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/doc/doc-txt/ChangeLog b/doc/doc-txt/ChangeLog
index 6cdc44cc6..a2c706359 100644
--- a/doc/doc-txt/ChangeLog
+++ b/doc/doc-txt/ChangeLog
@@ -1,4 +1,4 @@
-$Cambridge: exim/doc/doc-txt/ChangeLog,v 1.49 2004/12/21 09:40:01 ph10 Exp $
+$Cambridge: exim/doc/doc-txt/ChangeLog,v 1.50 2004/12/21 11:12:13 ph10 Exp $
 
 Change log file for Exim from version 4.21
 -------------------------------------------
@@ -210,6 +210,14 @@ Exim version 4.50
 51. Small patch to Sieve code - explicitly set From: when generating an
     autoreply.
 
+52. Exim crashed if a remote delivery caused a very long error message to be
+    recorded - for instance if somebody sent an entire SpamAssassin report back
+    as a large number of 550 error lines. This bug was coincidentally fixed by
+    increasing the size of one of Exim's internal buffers (big_buffer) that
+    happened as part of the Exiscan merge. However, to be on the safe side, I
+    have made the code more robust (and fixed the comments that describe what
+    is going on).
+
 
 Exim version 4.43
 -----------------
diff --git a/src/src/deliver.c b/src/src/deliver.c
index 3dffe78fe..519e7c61c 100644
--- a/src/src/deliver.c
+++ b/src/src/deliver.c
@@ -1,4 +1,4 @@
-/* $Cambridge: exim/src/src/deliver.c,v 1.4 2004/12/16 15:11:47 tom Exp $ */
+/* $Cambridge: exim/src/src/deliver.c,v 1.5 2004/12/21 11:12:13 ph10 Exp $ */
 
 /*************************************************
 *     Exim - an Internet mail transport agent    *
@@ -2556,13 +2556,13 @@ also by optional retry data.
 
 Read in large chunks into the big buffer and then scan through, interpreting
 the data therein. In most cases, only a single read will be necessary. No
-individual item will ever be anywhere near 500 bytes in length, so by ensuring
-that we read the next chunk when there is less than 500 bytes left in the
-non-final chunk, we can assume each item is complete in store before handling
-it. Actually, each item is written using a single write(), which is atomic for
-small items (less than PIPE_BUF, which seems to be at least 512 in any Unix) so
-even if we are reading while the subprocess is still going, we should never
-have only a partial item in the buffer.
+individual item will ever be anywhere near 2500 bytes in length, so by ensuring
+that we read the next chunk when there is less than 2500 bytes left in the
+non-final chunk, we can assume each item is complete in the buffer before
+handling it. Each item is written using a single write(), which is atomic for
+small items (less than PIPE_BUF, which seems to be at least 512 in any Unix and
+often bigger) so even if we are reading while the subprocess is still going, we
+should never have only a partial item in the buffer.
 
 Argument:
   poffset     the offset of the parlist item
@@ -2598,7 +2598,9 @@ completed.
 
 Each separate item is written to the pipe in a single write(), and as they are
 all short items, the writes will all be atomic and we should never find
-ourselves in the position of having read an incomplete item. */
+ourselves in the position of having read an incomplete item. "Short" in this
+case can mean up to about 1K in the case when there is a long error message
+associated with an address. */
 
 DEBUG(D_deliver) debug_printf("reading pipe for subprocess %d (%s)\n",
   (int)p->pid, eop? "ended" : "not ended");
@@ -2612,7 +2614,7 @@ while (!done)
   There will be only one read if we get all the available data (i.e. don't
   fill the buffer completely). */
 
-  if (remaining < 500 && unfinished)
+  if (remaining < 2500 && unfinished)
    {
    int len;
    int available = big_buffer_size - remaining;
-- 
2.25.1