<div dir="ltr"><div>I should add that there have been some pretty massive improvements to postgres since 8.4, and it wouldn't be at all surprising if your problem just disappeared after an upgrade.<br><br></div>QH<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Dec 5, 2016 at 1:48 PM, Quentin Hartman <span dir="ltr"><<a href="mailto:qhartman@gmail.com" target="_blank">qhartman@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Oof, 8.4, that's tough. It's been long enough since I touched an 8.x box that I'm not sure offhand. It should be in the main conf though. Here's what should be the relevant section in the official docs: <a href="https://www.postgresql.org/docs/8.4/static/runtime-config-logging.html" target="_blank">https://www.postgresql.org/<wbr>docs/8.4/static/runtime-<wbr>config-logging.html</a><br><br></div>As an aside, that's not a supported version anymore. 
Unless you have a compelling reason not to, upgrading to 9.2+ (9.6 is current) should be at or near the top of your to-do list...<span class="HOEnZb"><font color="#888888"><br><br></font></span></div><span class="HOEnZb"><font color="#888888">QH <br></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Dec 5, 2016 at 1:41 PM, Mike Bean <span dir="ltr"><<a href="mailto:beandaemon@gmail.com" target="_blank">beandaemon@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">It's 8.4 - unfortunately we're having some problems with autovacuum: it appears to be configured correctly, but it isn't actually running when it should<div><br></div><div>POSTGRES_LAST_AUTOVACUUM CRITICAL: DB "xe" (host:localhost) (port=1935) DB: xe TABLE: pg_catalog.pg_class: 17:11 November 18, 2016 (16 days 20 hours 23 minutes)<br></div><div><br></div><div>The Thursday log is largely full of checkpoint_segments spam, and ONE deadlock condition:</div><div><br></div><div><div>DETAIL: Process 9758 waits for ShareLock on transaction 40741339; blocked by process 9794.</div><div> Process 9794 waits for ShareLock on transaction 40740891; blocked by process 9758.</div><div> Process 9758: select * from rhn_channel.update_needed_cach<wbr>e($1) as result</div><div> Process 9794:</div></div><div><br></div><div>There hasn't been much in the way of logging since the deadlock, just more checkpoint_segments spam:</div><div><div>HINT: Consider increasing the configuration parameter "checkpoint_segments".</div><div>LOG: checkpoints are occurring too frequently (16 seconds apart)</div></div><div><br></div><div>Since, as near as I can tell, the only meaningful logging over the last 4 days is the deadlock, I'm inclined to think the deadlock has some relationship with the DB growing 40GB over the weekend. 
Quentin's probably right; I need to turn up the logging levels. Do we know where those are? In the .conf file?</div></div><div class="m_-8362780188321382994HOEnZb"><div class="m_-8362780188321382994h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Dec 5, 2016 at 11:06 AM, Grant Johnson <span dir="ltr"><<a href="mailto:grant@amadensor.com" target="_blank">grant@amadensor.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">There is a good chance that the Thursday log holds some clues about what<br>
the activity was. It is likely repeated often, and will be easy to<br>
find, since it was happening a lot. If the volume was very high, you will also<br>
see messages about checkpoints, because changes were outpacing the<br>
normal background DB writes. If these were happening only during this<br>
anomalous time, I would not worry about it, but if it happens during<br>
normal operations, some tuning changes may be needed.<br>
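[The checkpoint warnings quoted elsewhere in this thread ("checkpoints are occurring too frequently") are usually addressed with a few postgresql.conf settings. A minimal sketch for an 8.4-era server; the values are illustrative starting points, not tuned recommendations:]

```conf
# postgresql.conf -- illustrative values only; adjust for your workload
checkpoint_segments = 16            # default is 3; raise so heavy writes don't force
                                    # a checkpoint every few seconds
checkpoint_completion_target = 0.7  # spread checkpoint I/O over more of the interval
checkpoint_warning = 30s            # keep warning when checkpoints come too fast
log_checkpoints = on                # log each checkpoint so the cadence is visible
```

[A reload (`pg_ctl reload`) picks up `checkpoint_warning` and `log_checkpoints`; `checkpoint_segments` changes take effect on reload as well in this era of PostgreSQL.]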
<br>
If your updates exceed your free space map, a normal<br>
vacuum will not release the space; instead you will need to run a<br>
vacuum full, which has the unfortunate side effect of locking tables<br>
while it runs, so it impedes write operations.<br>
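[One way to see how much dead space a normal vacuum is leaving behind is the statistics views; a hedged sketch using catalogs that exist in 8.4 (table names here are generic, not from this thread):]

```sql
-- Show the tables with the most dead tuples, alongside when they
-- were last vacuumed, to see whether autovacuum is keeping up.
SELECT relname,
       n_dead_tup,
       last_vacuum,
       last_autovacuum,
       pg_size_pretty(pg_relation_size(relid)) AS table_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

[If `last_autovacuum` is NULL or weeks old on heavily updated tables, that supports the "autovacuum isn't actually running" theory mentioned earlier in the thread.]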
<br>
So, I would look at the log, see if anything starts making sense, and<br>
then clean up with a full vacuum during a low-activity window, and perhaps<br>
a reindex of everything. I have found that doing a full clean-up every year or so<br>
helps, but generally, leaving the free space in the files with normal<br>
vacuum/autovacuum helps performance.<br>
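[A minimal sketch of the clean-up Grant describes, to be run while connected to the affected database ("xe" in this thread) during a low-activity window. VACUUM FULL takes exclusive locks, so expect writes to block while it runs:]

```sql
VACUUM FULL VERBOSE;   -- rewrite tables compactly and return free space to the OS
REINDEX DATABASE xe;   -- rebuild all indexes; must match the database you are connected to
ANALYZE;               -- refresh planner statistics after the rewrite
```

[On 8.4, the command-line equivalents `vacuumdb --full` and `reindexdb` do the same work from a shell, which makes them easier to schedule from cron.]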
<div class="m_-8362780188321382994m_6392774321791334606HOEnZb"><div class="m_-8362780188321382994m_6392774321791334606h5"><br>
<br>
On Mon, 2016-12-05 at 07:47 -0700, Mike Bean wrote:<br>
> Of particular interest is<br>
><br>
><br>
> -rw------- 1 postgres postgres 94479 Dec 2 21:29 postgresql-Fri.log<br>
> -rw------- 1 postgres postgres 0 Dec 5 00:00 postgresql-Mon.log<br>
> -rw------- 1 postgres postgres 0 Dec 3 00:00 postgresql-Sat.log<br>
> -rw------- 1 postgres postgres 0 Dec 4 00:00 postgresql-Sun.log<br>
> -rw------- 1 postgres postgres 208184 Dec 1 23:59 postgresql-Thu.log<br>
> -rw------- 1 postgres postgres 86154 Nov 29 21:57 postgresql-Tue.log<br>
> -rw------- 1 postgres postgres 89399 Nov 30 21:48 postgresql-Wed.log<br>
><br>
><br>
> Huh - want to bet the writes were actually on Thursday? The plot<br>
> thickens!<br>
><br>
> On Mon, Dec 5, 2016 at 7:43 AM, Mike Bean <<a href="mailto:beandaemon@gmail.com" target="_blank">beandaemon@gmail.com</a>><br>
> wrote:<br>
><br>
> Do we have members who are comfortable/skilled with postgres?<br>
> Long story short, we have a spacewalk server running<br>
> postgres and the DB is showing really erratic behavior. We<br>
> used to do weekly vacuums to keep it from running out of<br>
> space, and I figured out that doing a reindex bumped the free<br>
> space from 20 GB to 80 GB. Over the weekend it dropped to roughly<br>
> 60 GB; we went one week without doing a vacuum/reindex, and it's<br>
> back down to 20.<br>
><br>
><br>
> I know for a fact there's no one else in the organization<br>
> making major changes on this DB, and certainly nothing that<br>
> could justify 40GB of writes in 3 days. Not really being a<br>
> DBA, I'm running low on my bag of tricks, so I thought I'd see if<br>
> CLUE had any advice. Running vacuums once a week to keep the DB<br>
> from filling the disk feels like taking aspirin to get over a<br>
> gunshot wound. Sure, it'll keep the server running, but<br>
> ultimately what I really want to know is why a largely static<br>
> spacewalk DB is writing 60GB to the disk in a 2-week interval.<br>
><br>
><br>
><br>
><br>
</div></div><div class="m_-8362780188321382994m_6392774321791334606HOEnZb"><div class="m_-8362780188321382994m_6392774321791334606h5">> ______________________________<wbr>_________________<br>
> clue mailing list: <a href="mailto:clue@cluedenver.org" target="_blank">clue@cluedenver.org</a><br>
> For information, account preferences, or to unsubscribe see:<br>
> <a href="http://cluedenver.org/mailman/listinfo/clue" rel="noreferrer" target="_blank">http://cluedenver.org/mailman/<wbr>listinfo/clue</a><br>
<br>
<br>
</div></div></blockquote></div><br></div>
</div></div><br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>