If we hit out-of-memory between creating the PGconn and inserting it into dblink's hashtable, we'd lose track of the PGconn, which is quite bad since it represents a live connection to a remote DB. Fix by rearranging things so that we create the hashtable entry first.

Also reduce the number of states we have to deal with by getting rid of the separately-allocated remoteConn object, instead allocating it in-line in the hashtable entries. (That incidentally removes a session-lifespan memory leak observed in the regression tests.)

There is an apparently-irreducible remaining OOM hazard, which is that if the connection fails at the libpq level (ie it's CONNECTION_BAD) then we have to pstrdup the PGconn's error message before we can release it, and theoretically that could fail. However, in such cases we're only leaking memory, not a live remote connection, so I'm not convinced that it's worth sweating over.

This is a pretty low-probability failure mode of course, but losing a live connection seems bad enough to justify back-patching.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/1346940.1748381911@sss.pgh.pa.us
Backpatch-through: 13
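
As a loose illustration of the ordering the commit describes (not the actual dblink code), here is a minimal standalone libpq sketch. A fixed-size array stands in for dblink's connection hashtable, the tracking entry is created before any PGconn exists, and the connection name is stored in-line in the entry rather than in a separate allocation. The names conn_slot and lookup_or_create_slot are hypothetical; only the libpq calls (PQconnectdb, PQstatus, PQerrorMessage, PQfinish) are real.

    /*
     * Sketch of "reserve the tracking entry first, then connect":
     * if the entry cannot be created, no PGconn exists yet, so nothing
     * is lost; once the PGconn exists, it is already reachable from the
     * entry and cannot be orphaned by a later allocation failure.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    typedef struct conn_slot
    {
        char    name[64];       /* connection name, stored in-line */
        PGconn *conn;           /* NULL until the connection is made */
        int     used;
    } conn_slot;

    static conn_slot slots[8];  /* stand-in for dblink's hashtable */

    /* Find or create a slot; failure here happens before any PGconn. */
    static conn_slot *
    lookup_or_create_slot(const char *name)
    {
        for (int i = 0; i < 8; i++)
        {
            if (slots[i].used && strcmp(slots[i].name, name) == 0)
                return &slots[i];
        }
        for (int i = 0; i < 8; i++)
        {
            if (!slots[i].used)
            {
                slots[i].used = 1;
                snprintf(slots[i].name, sizeof(slots[i].name), "%s", name);
                slots[i].conn = NULL;
                return &slots[i];
            }
        }
        return NULL;            /* "out of memory": nothing to clean up */
    }

    int
    main(void)
    {
        /* Step 1: reserve the tracking entry before touching libpq. */
        conn_slot *slot = lookup_or_create_slot("myconn");

        if (slot == NULL)
        {
            fprintf(stderr, "could not create connection entry\n");
            return 1;
        }

        /* Step 2: only now create the PGconn; the slot already owns it. */
        slot->conn = PQconnectdb("dbname=postgres");

        if (slot->conn == NULL || PQstatus(slot->conn) == CONNECTION_BAD)
        {
            /* Copy the message before releasing the conn (cf. the pstrdup
             * hazard the commit mentions), then drop the entry. */
            char *msg = slot->conn ? strdup(PQerrorMessage(slot->conn)) : NULL;

            if (slot->conn)
                PQfinish(slot->conn);
            slot->conn = NULL;
            slot->used = 0;
            fprintf(stderr, "connection failed: %s", msg ? msg : "out of memory\n");
            free(msg);
            return 1;
        }

        printf("connected as \"%s\"\n", slot->name);
        PQfinish(slot->conn);
        return 0;
    }

The point is the ordering, not the data structure: creating the entry can fail only while there is nothing to leak, and by the time a live PGconn exists it is already tracked.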