Thread: unsafe use of hash_search(... HASH_ENTER ...)

unsafe use of hash_search(... HASH_ENTER ...)

From: "Qingqing Zhou"
--- First part ---

In md.c/RememberFsyncRequest():
    if (hash_search(pendingOpsTable, &entry, HASH_ENTER, NULL) == NULL)
        ereport(FATAL,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("out of memory")));

pendingOpsTable allocates its memory in "MdCxt", so on out-of-memory the
palloc() inside hash_search() raises ERROR first and we never get the chance
to escalate the error level to FATAL. A quick fix is to give pendingOpsTable
a malloc()-based HASH_ALLOC method.
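
For concreteness, a sketch of that quick fix (hash_ctl.alloc and HASH_ALLOC
are the existing dynahash hooks; pendingOpsMalloc is a name made up here for
illustration):

    /* Sketch only: a malloc-based allocator makes a failed allocation
     * surface as a NULL return from hash_search() rather than an
     * elog(ERROR) from inside palloc(). */
    static void *
    pendingOpsMalloc(Size size)
    {
        return malloc(size);
    }

    ...
    HASHCTL hash_ctl;

    MemSet(&hash_ctl, 0, sizeof(hash_ctl));
    hash_ctl.keysize = sizeof(PendingOperationEntry);
    hash_ctl.entrysize = sizeof(PendingOperationEntry);
    hash_ctl.alloc = pendingOpsMalloc;
    pendingOpsTable = hash_create("Pending Ops Table", 100,
                                  &hash_ctl, HASH_ELEM | HASH_ALLOC);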

In general, code snippets like this:

    if (hash_search(..., HASH_ENTER, ...) == NULL)
        action_except_elog__ERROR__;

are unsafe if: (1) the allocation method of the target hash table could
elog(ERROR) itself, and (2) the reaction to a hash_search() failure is
anything other than elog(ERROR).
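
For contrast, the same call shape is harmless when the reaction is itself
elog(ERROR), since then it makes no difference whether the table's allocator
fails first (htab and key here are generic placeholders):

    /* safe: the fallback is no stronger than the error the allocator
     * may already have raised underneath hash_search() */
    if (hash_search(htab, &key, HASH_ENTER, NULL) == NULL)
        elog(ERROR, "out of memory");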

So shared-memory hash tables are safe, because their allocator returns NULL
rather than raising an error, so condition (1) does not hold. I searched the
server code and found the following places that match the pattern:

* RememberFsyncRequest() - solution as above;
* XLogOpenRelation() - not a problem, since it is already inside a critical
section;
* IndexNext() in 8.0.1;

--- Second part ---

Also, per discussion with Neil and Tom, it is possible to simplify code
snippets like this:
    if (hash_search(local_hash, HASH_ENTER, ...) == NULL)
        elog(ERROR, "out of memory");

to:
   hash_search(local_hash, HASH_ENTER, ...);


Comments?

Regards,
Qingqing

Re: unsafe use of hash_search(... HASH_ENTER ...)

From: Tom Lane
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> In md.c/RememberFsyncRequest():

>  if (hash_search(pendingOpsTable, &entry, HASH_ENTER, NULL) == NULL)
>   ereport(FATAL,
>     (errcode(ERRCODE_OUT_OF_MEMORY),
>      errmsg("out of memory")));

> pendingOpsTable allocates its memory in "MdCxt", so on out-of-memory the
> palloc() inside hash_search() raises ERROR first and we never get the chance
> to escalate the error level to FATAL. A quick fix is to give pendingOpsTable
> a malloc()-based HASH_ALLOC method.

"Unsafe" is a bit of an overstatement, when you evidently haven't
analyzed the consequences of either choice of error level.  That is,
why is this a bug?
        regards, tom lane


Re: unsafe use of hash_search(... HASH_ENTER ...)

From: "Qingqing Zhou"
"Tom Lane" <tgl@sss.pgh.pa.us> writes
> "Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
>
> "Unsafe" is a bit of an overstatement, when you evidently haven't
> analyzed the consequences of either choice of error level.  That is,
> why is this a bug?
>

Consider a scenario like this:

Backends register some dirty segments in BgWriterShmem->requests; the bgwriter
will AbsorbFsyncRequests() asynchronously but may fail to record one of them
in pendingOpsTable due to an "out of memory" error. All dirty segments
remembered in "requests" after that one will have no chance to be absorbed by
the bgwriter.

Recall that we have already discarded those requests by:
    BgWriterShmem->num_requests = 0;

So we will have no chance to pick them up again. That is, we will never fsync
some dirty segments (mdwrite() will not fsync those files itself either,
because ForwardFsyncRequest() already succeeded).
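
A condensed sketch of the flow (paraphrased, not the verbatim bgwriter.c
source) may make the loss window clearer:

    /* AbsorbFsyncRequests(), heavily condensed */
    n = BgWriterShmem->num_requests;
    memcpy(requests, BgWriterShmem->requests,
           n * sizeof(BgWriterRequest));
    BgWriterShmem->num_requests = 0;    /* shmem copies are now gone */

    for (i = 0; i < n; i++)
        RememberFsyncRequest(requests[i].rnode, requests[i].segno);
        /* an elog(ERROR) here abandons requests[i..n-1] for good */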

Regards,
Qingqing

Re: unsafe use of hash_search(... HASH_ENTER ...)

From: Tom Lane
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> Consider a scenario like this:

> Backends register some dirty segments in BgWriterShmem->requests; the bgwriter
> will AbsorbFsyncRequests() asynchronously but may fail to record one of them
> in pendingOpsTable due to an "out of memory" error. All dirty segments
> remembered in "requests" after that one will have no chance to be absorbed by
> the bgwriter.

So really we have to PANIC if we fail to record a dirty segment.  That's
a bit nasty, but since the hashtable is so small (only 16 bytes per
gigabyte-sized dirty segment) it seems unlikely that the situation will
ever occur in practice.

I'll put a critical section around it --- seems the easiest way to
ensure a panic ...
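
Roughly this shape (a sketch, not the committed patch):

    /* inside a critical section, any elog(ERROR) raised by palloc()
     * underneath hash_search() is automatically promoted to PANIC */
    START_CRIT_SECTION();
    (void) hash_search(pendingOpsTable, &entry, HASH_ENTER, NULL);
    END_CRIT_SECTION();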
        regards, tom lane


Re: unsafe use of hash_search(... HASH_ENTER ...)

From: Tom Lane
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> In general, code snippets like this:

> if (hash_search(..., HASH_ENTER, ...) == NULL)
>     action_except_elog__ERROR__;

> are unsafe if: (1) the allocation method of the target hash table could
> elog(ERROR) itself, and (2) the reaction to a hash_search() failure is
> anything other than elog(ERROR).

I've made some changes to hopefully prevent this type of thinko again.
Thanks for spotting it.
        regards, tom lane


Re: unsafe use of hash_search(... HASH_ENTER ...)

From: "Qingqing Zhou"
"Tom Lane" <tgl@sss.pgh.pa.us> writes
> "Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> > In general, code snippets like this:
>
> > if (hash_search(..., HASH_ENTER, ...) == NULL)
> >     action_except_elog__ERROR__;
>
> > are unsafe if: (1) the allocation method of the target hash table could
> > elog(ERROR) itself, and (2) the reaction to a hash_search() failure is
> > anything other than elog(ERROR).
>
> I've made some changes to hopefully prevent this type of thinko again.
> Thanks for spotting it.
>

I am afraid the problem is not limited to hash_search(). Any code snippet
that is not protected by a critical section and looks like this:

    Assert(CritSectionCount == 0);
    ret = do_something_might_elog_error();
    if (is_not_expected(ret))
        action_raise_error_higher_than_ERROR;

needs to be reconsidered. For example:

---
file = AllocateFile(full_path, "r");
if (!file)
{
    if (errno == ENOENT)
        ereport(FATAL,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("\"%s\" is not a valid data directory", path),
                 errdetail("File \"%s\" is missing.", full_path)));
    else
        ereport(FATAL,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\": %m", full_path)));
}
---

AllocateFile() itself could raise an ERROR, in which case we never get the
chance to escalate the error level to FATAL.


Regards,
Qingqing

Re: unsafe use of hash_search(... HASH_ENTER ...)

From: Tom Lane
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> I am afraid the problem is not limited to hash_search(). Any code snippet
> that is not protected by a critical section and looks like this:

This is only an issue if the system might actually try to recover, which is
not the case in the postmaster snippet you mention. elog(ERROR) in the
postmaster is fatal, so the use of FATAL rather than ERROR in this bit of
code is merely documentation.
        regards, tom lane


Re: unsafe use of hash_search(... HASH_ENTER ...)

From: "Qingqing Zhou"
"Tom Lane" <tgl@sss.pgh.pa.us> writes
>
> This is not an issue except if the system might actually try to recover;
> which is not the case in the postmaster snippet you mention.
>

Yeah, you are right. I searched for elog/ereport(FATAL/PANIC) calls and
found only this one that might be a suspect:

In _hash_expandtable():

    if (!_hash_try_getlock(rel, start_nblkno, HASH_EXCLUSIVE))
        elog(PANIC, "could not get lock on supposedly new bucket");

Or maybe elog(PANIC) is a false alarm here?

Regards,
Qingqing

Re: unsafe use of hash_search(... HASH_ENTER ...)

From: Tom Lane
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> Yeah, you are right. I searched for elog/ereport(FATAL/PANIC) calls and
> found only this one that might be a suspect:

>  In _hash_expandtable():

>  if (!_hash_try_getlock(rel, start_nblkno, HASH_EXCLUSIVE))
>   elog(PANIC, "could not get lock on supposedly new bucket");

> Or maybe elog(PANIC) is a false alarm here?

[ eyes code... ]  I think the reason it wants to PANIC is because it's
already hacked up the hash metapage in shared buffers, and it needs
to prevent that update from getting written out.  A CRIT_SECTION
would probably be a better answer --- thanks for spotting that.
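
Something like this (a sketch, not the actual patch):

    /* with the metapage update inside a critical section, a plain
     * elog(ERROR) is automatically escalated to PANIC, so no error
     * path survives to write out the half-updated metapage */
    START_CRIT_SECTION();
    /* ... hack up the hash metapage in shared buffers ... */
    if (!_hash_try_getlock(rel, start_nblkno, HASH_EXCLUSIVE))
        elog(ERROR, "could not get lock on supposedly new bucket");
    /* ... complete the bucket split ... */
    END_CRIT_SECTION();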
        regards, tom lane