Wednesday, November 9, 2011

Understanding WAL nomenclature

Some PostgreSQL functions report information such as 68A/16E1DA8. But what does it mean? The documentation states that it is a transaction log location. Let me try to explain it in detail.

What is WAL? WAL is an abbreviation of Write-Ahead Log. That is what PostgreSQL calls the transaction log (also known as the REDO log). Every modern database system has one. It is basically a history of the operations executed by the DBMS. It improves performance (you only need to flush one file per transaction) and guarantees ACID properties (if the DBMS was left in an inconsistent state -- because of a software crash, for example -- it reapplies all transactions that were committed but whose changes had not yet reached the database files). PostgreSQL introduced the transaction log in version 7.1 (2001). It opted to implement only a REDO log (there is no UNDO log in PostgreSQL).

The transaction log files are stored in the $PGDATA/pg_xlog directory. They are named like 000000020000070A0000008E, using only hexadecimal digits (0123456789ABCDEF). The first 8 digits identify the timeline, the next 8 digits identify the (logical) xlog file, and the last 8 identify the (physical) xlog file (aka segment). The physical files in the pg_xlog directory are not actually the xlog files; PostgreSQL calls them segments. By default, a logical xlog file has 255 segments (16 MB each, 4080 MB in total). So in our example, 0000070A and 0000008E are the logical xlog file and the segment, respectively. When segment 000000FE is filled up, the logical xlog file number is incremented by 1 (i.e. 0000070B) and the segment sequence starts over at 00000000 -- 000000020000070B00000000.

What does 68A/16E1DA8 mean? 68A is the (logical) xlog file and 16E1DA8 is the offset inside that logical xlog file. The offset therefore goes from 00000000 to FF000000 (4278190080 bytes = 4080 MB). When it reaches FF000000, we are in the last segment (aka physical xlog file); the xlog file number is then incremented by 1 (i.e. 68B) and the offset restarts at 00000000.
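
To make the arithmetic concrete, here is a minimal sketch in plain SQL that splits the location 68A/16E1DA8 into its parts by hand. It assumes your server accepts the bit-string-to-bigint cast (x'...'::bigint); 1000000 (hex) is the 16 MB segment size.

-- decomposing 68A/16E1DA8; expected values are in the comments
SELECT x'68A'::bigint AS xlog_file,                               -- logical xlog file (68A in hex)
       x'16E1DA8'::bigint / x'1000000'::bigint AS segment,        -- segment inside it: 1
       x'16E1DA8'::bigint % x'1000000'::bigint AS file_offset;    -- offset inside that segment: 7216552

The segment number (1) and the offset (7216552) match what the functions below report for the same location.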

PostgreSQL provides a set of functions [1] [2] that deal with transaction log locations. For example, converting a transaction log location to a segment file name:

postgres=# select pg_xlogfile_name('68A/16E1DA8');
     pg_xlogfile_name  
--------------------------
 000000020000068A00000001
(1 row)

Converting a transaction log location to a segment file name and the offset inside that segment:

postgres=# select * from pg_xlogfile_name_offset('68A/16E1DA8');
        file_name         | file_offset
--------------------------+-------------
 000000020000068A00000001 |     7216552
(1 row)


If I want to know which segment file the PostgreSQL server is writing the transaction log to right now:

postgres=# select pg_xlogfile_name(pg_current_xlog_location());
     pg_xlogfile_name  
--------------------------
 000000020000074B000000E4
(1 row)


If you have a replication scenario, you can obtain the receive and replay lags. It is simple hexadecimal math: the master's transaction log location minus the replica's transaction log location.

In the replica:

postgres=# select pg_last_xlog_receive_location();
 pg_last_xlog_receive_location
-------------------------------
 74B/E4D1C628
(1 row)

postgres=# select pg_last_xlog_replay_location();
 pg_last_xlog_replay_location
------------------------------
 74B/E4D1C628
(1 row)

In the master:

postgres=# select pg_current_xlog_location();
 pg_current_xlog_location
--------------------------
 74B/E4D3B070
(1 row)

If they report the same xlog file (74B), it is just a matter of E4D3B070 - E4D1C628 = 1EA48 (125512 bytes). It means that the replica is roughly 123 kB behind the master. Receive and replay locations aren't the same all the time; if the replica can't replay as fast as it receives, the receive location will be greater than the replay location.
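
The same subtraction can be done in SQL as a sanity check -- a minimal sketch, again assuming the x'...'::bigint cast is available:

SELECT x'E4D3B070'::bigint - x'E4D1C628'::bigint AS lag_bytes;    -- 125512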

What about different xlog files? Some tools use the following formula:

(FFFFFFFF * xlog_master + offset_master) - (FFFFFFFF * xlog_replica + offset_replica)

It's wrong ¹. Why? Because the offset only goes up to FF000000 (and not FFFFFFFF). That means the formula above overestimates the lag by roughly 16 MB times the difference between the xlog numbers whenever they aren't the same. The correct formula is:

(FF000000 * xlog_master + offset_master) - (FF000000 * xlog_replica + offset_replica)

Let's suppose the master and replica transaction log locations are 67E/AFE198 and 67D/FECFA308, respectively. The correct lag is 14696080 bytes and not 31473295 bytes (the difference between the two results is almost 16 MB!).
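
Here is a minimal sketch of both formulas for those two locations, assuming the x'...'::bigint cast; only the multiplier differs:

-- correct: each logical xlog file holds FF000000 bytes
SELECT (x'67E'::bigint * x'FF000000'::bigint + x'AFE198'::bigint)
     - (x'67D'::bigint * x'FF000000'::bigint + x'FECFA308'::bigint) AS correct_lag;   -- 14696080

-- wrong: FFFFFFFF adds almost 16 MB per xlog file of difference
SELECT (x'67E'::bigint * x'FFFFFFFF'::bigint + x'AFE198'::bigint)
     - (x'67D'::bigint * x'FFFFFFFF'::bigint + x'FECFA308'::bigint) AS wrong_lag;     -- 31473295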


¹ I will send patches to fix check_postgres and Munin. If you know of another tool that is using the wrong formula, let me know or, even better, tell the tool's author. FYI, both tools were fixed within a few hours. Thanks, Greg and Magnus!

Tuesday, November 8, 2011

PGBR 2011 is over

PGBR 2011 is over. It was a challenge to organize an event after 2 years and with a budget 50% higher than the last conference's. Yes, we did it! It was an event organized by the community for the community. Thanks to our Kahunas (Fabio Telles and Flavio Gurgel) and our sponsors.

This year we had participants from other countries (Argentina, Peru, and the USA). We are planning better website and registration support for them next year (sorry, Joe). We also had some foreign speakers: Alvaro (Chile), Bruce Momjian (USA), Dave Page (UK), Greg Smith (USA), Jaime Casanova (Ecuador), and Koichi Suzuki (Japan).

I don't have the definitive numbers yet, but we had roughly 250 participants from the 5 regions of Brazil. I talked to a lot of participants who brought me interesting success stories from their companies. One of the many interesting success cases I heard about was from Locaweb (a hosting company), which migrated its DNS infrastructure from a M$ database to PostgreSQL.

We had three rooms, 29 sessions (4 sessions from Timbira), and a lot of new speakers. On the first day I was in room 1 because I was coordinating its sessions throughout the day. I had the opportunity to listen to Alvaro and Greg talking about locks and benchmarks, respectively. I was also in the round table about PostgreSQL service opportunities in Brazil. On the second day, I attended Jaime's talk (repmgr) and Dickson's (extensions). This year, I decided to submit a newbie tutorial that answers the frequently asked questions from the PostgreSQL mailing lists. The title is Everything you want to know about PostgreSQL but were afraid to ask. It was a challenge to keep the audience awake (an hour and a half) right after lunch, but I think I pulled it off. Oh, I missed the Dojo (because it conflicted with my session). :( The last session I attended was Lucio's. After that came the awards. For the first time, the Brazilian community decided to reward prominent members of the community. We had 5 categories (I was nominated in 4) and got one. Thanks for the recognition! The final session was the always-funny one: lightning talks. 8 LTs with speak^H^H^H^H^Hstand-up comedians who entertained the participants (@lhcezar will stop working with databases and start a career as a singer).

Thumbs up for: the venue (hotel), the coffee breaks, and the beer (200 liters).

Hope to see you again next year!

Wednesday, September 14, 2011

Time to consider version 9.1?

After reading Greg's post, I decided to give my opinion on $TITULO. It is good to be cautious when we are dealing with our data. But how long should we wait before considering an upgrade to a newer version?

When we deal with old versions of a technology, we always run into application developers who are trying to use the latest features the software offers. In the PostgreSQL universe this is no different; there is always a new function, a different syntax, an optimization for some kind of query, a new index type, and so on.

Besides that, DBAs face, day to day, the limitations of a version that becomes obsolete over the years (I have clients running version 8.1 who still suffer from write spikes during checkpoints). DBAs are the most cautious professionals among those who administer services; even so, they consider upgrading between versions to (i) reduce the headaches caused by pre-existing limitations and (ii) benefit from new optimizations and (tuning and monitoring) opportunities.

Coming back to the version question, PostgreSQL always considers as new versions those that change the X and/or Y in X.Y.Z (the Z is reserved for bug fixes). Thus, the first releases of the new versions were 7.4.0, 8.0.0, 8.1.0, 8.2.0, 8.3.0, 8.4.0, 9.0.0, 9.1.0 (they prefer to omit the .0 because, after all, once you are on version X.Y the binaries can be upgraded to Z+n without a migration between versions -- backup -> restore or pg_upgrade). Among the releases mentioned there is a difference: those that change the X and those that don't. Is it a difference in quality? No. The X changes when a new version has a large number of high-impact features compared to the previous ones. That was the case from 7.4 -> 8.0 (Windows support, tablespaces, PITR, savepoints) and from 8.4 -> 9.0 (built-in replication, Hot Standby, anonymous code blocks aka DO).
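
If you want to check exactly which X.Y.Z a server is running before planning an upgrade, psql can tell you. A quick illustration (server_version_num only exists from 8.2 on; the values shown are just examples):

SHOW server_version;       -- e.g. 9.1.1 (X.Y.Z)
SHOW server_version_num;   -- e.g. 90101; not available on 8.1 and older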

Looking at it from a software evolution point of view, more code changes produce more bugs! So for the more cautious DBAs, I do not advise migrating quickly when the X changes. I usually advise waiting for a few bug-fix releases (maybe 3 or 4) before thinking about adopting them. When it is the Y that changes, the advice is to wait for 1 or 2 bug-fix releases. Generally, right after a release (as was recently the case with 9.1), the development group publishes corrective releases shortly afterwards to fix the bugs that went unnoticed during the long testing cycle (beta and RC -- 6 months).

Since we are at 9.1, if you are considering a version upgrade, plan it with 9.1 instead of 9.0 (for example). Even though it is a brand-new version, by the time your applications are done being tested, the development group will have released 1 or 2 corrective versions.

Be happy with 9.1 until 9.2 or 9.3 comes out of the oven and you consider upgrading again. ;)

Wednesday, September 7, 2011

Talks for PGBR 2011

This week is the deadline for submitting talks to PGBR 2011, which will take place on November 3rd and 4th in São Paulo, SP. Don't forget to send in your talk proposal!

Talks can be basic or advanced, short (talk) or long (tutorial), and in Portuguese, Spanish, or English. There is only one prerequisite: it must be related to PostgreSQL.

All speakers will get free admission.

Go to the submissions page and send us your proposals! You can submit up to 5 interesting ideas. Please spread this note to other related communities; maybe someone there has an interesting idea.

See you at PGBR 2011.

Tuesday, May 3, 2011

9.1 beta 1 has been released

The PostgreSQL Global Development Group has released the first beta version of the 9.1 series. This (beta) version is a preview of what will be released, probably in one or two months. The more people test the beta versions, the more stable the first release of the 9.1 series will be. The benefits of testing 9.1 do not stop there. You get to try the new features first-hand and check whether there is any regression in existing features (used by your applications).

It is worth remembering that:

  • like some commercial products, PostgreSQL releases a new version every year, as we have done for the last seven years;
  • version 9.1 will have more important features than the last three versions. Meanwhile, the other DBMSs have been releasing only consolidation of already existing features;
  • version 9.1 will have several features for which PostgreSQL will be the first DBMS (open source or proprietary) to support them.

So it is worth remembering that it is not only the NoSQL DBMSs that can innovate and/or move fast. PostgreSQL is stable and is capable of innovating too. The theme of this new version reflects exactly what this (roughly one-year) cycle has been: Features, Innovation, and Extensibility.

A preview of the main features is below. For the complete list, check the manual.

Features

  • synchronous replication: create synchronous standby databases to prevent data loss. It includes synchronous commit (COMMIT) controlled per transaction;
  • per-column collation: multilingual databases can be created, where each column can use a different language;
  • unlogged tables: a high write rate for ephemeral data such as session tables (see the SQL sketch after this list).
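
A minimal SQL sketch of these three features on 9.1 (the table names are just illustrative, and the "C" collation is used because collation names depend on the operating system):

-- unlogged table for ephemeral data: not written to the WAL and truncated after a crash
CREATE UNLOGGED TABLE sessions (id serial PRIMARY KEY, data text);

-- synchronous commit controlled per transaction: trade durability for speed on non-critical writes
BEGIN;
SET LOCAL synchronous_commit TO off;
INSERT INTO sessions (data) VALUES ('ephemeral data');
COMMIT;

-- per-column collation
CREATE TABLE people (name text COLLATE "C");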

Innovations

  • k-nearest-neighbor search: an indexing mechanism to speed up searches such as "which are the k nearest pizza places". This will help spatial applications a lot (see the SQL sketch after this list);
  • serializable snapshot isolation: serializable support for complex, concurrent transactions, preventing race conditions and deadlocks;
  • writeable common table expressions: recursive execution of updates and other data changes in a single SQL statement;
  • SE-Postgres: the only DBMS that will offer full integrity using the SELinux framework.
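
A minimal sketch of the k-NN ordering and of a writeable CTE (all table names here are hypothetical):

-- k-nearest-neighbor: a GiST index on a point column can answer ORDER BY distance queries
CREATE TABLE pizza_places (name text, location point);
CREATE INDEX pizza_places_location_idx ON pizza_places USING gist (location);
SELECT name FROM pizza_places ORDER BY location <-> point '(0,0)' LIMIT 5;

-- writeable CTE: move expired rows from one table to another in a single statement
CREATE TABLE active_sessions (id int, expired boolean);
CREATE TABLE archived_sessions (id int, expired boolean);
WITH moved AS (
    DELETE FROM active_sessions WHERE expired RETURNING *
)
INSERT INTO archived_sessions SELECT * FROM moved;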

Extensibility

  • extensions: handle PostgreSQL extensions (add-on modules) easily, using commands. An online extension network (PGXN) will make it easier to manage and obtain extensions;
  • SQL/MED: attach other data sources (files, other DBMSs, streams) to PostgreSQL and use them as tables (see the SQL sketch after this list).
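
A minimal sketch of both (hstore and file_fdw must be installed from contrib, and the CSV path is just an example):

-- extensions: install an add-on module with a single command
CREATE EXTENSION hstore;

-- SQL/MED with the file_fdw contrib module: expose a CSV file as a table
CREATE EXTENSION file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE price_list (item text, price numeric)
    SERVER csv_files
    OPTIONS (filename '/path/to/prices.csv', format 'csv');
SELECT * FROM price_list;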

For the 9.1 release to happen quickly, we need many people testing it and attesting that it works properly. If you can test it, read the HOWTO on how to test and how to report your experience with version 9.1. Report the successes and also the failures.

Binaries are available, and so is the source code.

Monday, March 28, 2011

PGBR 2011: Brazilian PostgreSQL Conference

After a lot of discussion about the next national PostgreSQL event, we have finally been able to bang the gavel. This year the conference will take place in São Paulo, Brazil, on November 3rd and 4th (November 2nd is a holiday in Brazil). It consists of two days of technical talks (in English and Portuguese), from beginner to master.

This is the first year that the conference will be held in a hotel. (The last two editions were held at a university.) So if you don't live in São Paulo, you can stay at the conference venue.

There will be a new (academic) track and more slots for case studies and advanced talks. There will be three simultaneous rooms.

Two community members have already confirmed their presence: Bruce and Álvaro. Of course, I will be there too. ;)

Stay tuned: website or twitter (#pgbr2011).

See you in São Paulo.

Monday, November 29, 2010

Comment on catalog tables

While building some catalog queries I always forget what a given column means. At that point I need to access the documentation or (if I don't have Internet access) build the documentation on my laptop and check it there. Sometimes it is an annoying task. Thinking about this scenario, I wondered: why don't we have comments on the catalogs? I started to research an easy, lazy way to achieve this. I checked the source (catalogs.sgml) to investigate how I could extract such information. I noticed that each section (starting with catalog-pg-) documents a catalog and the descriptions are stored in a table (the first one). I decided to use the same infrastructure PostgreSQL uses to build the docs (DSSSL or XSL). Easy task? Not that easy.

DSSSL vs. XSL. I had never done any formatting in DSSSL; it is Greek to me, so I headed to XSL. I had already played with XSL some time ago; it wouldn't be that hard to extract those descriptions, would it?

The first step was converting the SGML file to XML (XSL only works with XML). I used osx (which is part of openjade). The catalogs.sgml file doesn't have a header (because it is part of the PostgreSQL documentation, whose header is provided in a separate file, postgres.sgml); I had to add the following 3 lines as a header.

$ cat /path/to/header.sgml
<!doctype chapter PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [
<!entity % standalone-ignore "IGNORE">
<!entity % standalone-include "INCLUDE"> ]>

The next step was running the osx tool to obtain the XML output. The verbose messages generated are just broken links to other parts of the documentation; it was safe to ignore them.

$ cat /path/to/header.sgml /path/to/catalogs.sgml | osx -x lower > /path/to/catalogs.tmp

At this point, I had the catalogs file in XML format. Looking at the generated file (catalogs.tmp), I noticed that some special characters (i.e. &, >, <) were not preserved. Fortunately, looking at the doc/src/sgml/Makefile I found this command:

$ perl -p -e 's/\[(amp|copy|egrave|gt|lt|mdash|nbsp|ouml|pi|quot|uuml) *\]/\&\1;/g;' \
> -e '$_ .= qq{<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">\n} if $. == 1;' \
> < /path/to/catalogs.tmp > /path/to/catalogs.xml

It converts some entities like [foo  ] to &foo; and adds the DOCTYPE header as the second line.

Once I had the file in XML format, I started to investigate how to extract the descriptions for COMMENT ON TABLE. I noticed that the first section (sect1 id="catalogs-overview") has a table with the information I was looking for. The following XSL script is capable of extracting the catalog table descriptions. (The only difficulty in this script was building the template escape-string-sql to escape SQL strings. This was necessary because the replace function is only available in XSL version 2.0; I was using version 1.0.)

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0"
                xmlns="http://www.w3.org/TR/xhtml1/transitional"
                exclude-result-prefixes="#default">

<xsl:output method="text" encoding="utf-8"/>

<!-- escape SQL string -->
<xsl:template name="escape-string-sql">
 <xsl:param name="mystr"/>
 <xsl:choose>
  <!-- &quot;&apos;&quot; = "'" -->
  <xsl:when test="contains($mystr, &quot;&apos;&quot;)">
   <xsl:value-of select="substring-before($mystr, &quot;&apos;&quot;)"/>
   <!-- replacing ' (&apos;) with '' -->
   <xsl:text>&apos;&apos;</xsl:text>
   <xsl:call-template name="escape-string-sql">
    <xsl:with-param name="mystr" select="substring-after($mystr, &quot;&apos;&quot;)"/>
   </xsl:call-template>
  </xsl:when>
  <xsl:otherwise>
   <xsl:value-of select="$mystr"/>
  </xsl:otherwise>
 </xsl:choose>
</xsl:template>

<xsl:template match="/chapter[@id='catalogs']">

 <xsl:text>BEGIN;&#xa;</xsl:text>

 <xsl:text>&#xa;-- comment on catalog tables</xsl:text>

 <xsl:for-each select="sect1">

  <xsl:choose>
  <!-- comment on catalog tables -->
  <xsl:when test="self::sect1[@id='catalogs-overview']">
   <xsl:for-each select="table[@id='catalog-table']/tgroup/tbody/row">
COMMENT ON TABLE pg_catalog.<xsl:value-of select="entry[1]"/> IS '<xsl:call-template name="escape-string-sql"><xsl:with-param name="mystr" select="normalize-space(entry[2])"/></xsl:call-template>';</xsl:for-each>
   <xsl:text>&#xa;</xsl:text>
  </xsl:when>
  </xsl:choose>

 </xsl:for-each>

 <xsl:text>&#xa;END;&#xa;</xsl:text>

</xsl:template>

</xsl:stylesheet>

Let's check the output:

euler@harman:~$ xsltproc /path/to/only-tables.xsl /path/to/catalogs.xml
BEGIN;

-- comment on catalog tables
COMMENT ON TABLE pg_catalog.pg_aggregate IS 'aggregate functions';
COMMENT ON TABLE pg_catalog.pg_am IS 'index access methods';
COMMENT ON TABLE pg_catalog.pg_amop IS 'access method operators';
COMMENT ON TABLE pg_catalog.pg_amproc IS 'access method support procedures';
.
.
.
COMMENT ON TABLE pg_catalog.pg_ts_parser IS 'text search parsers';
COMMENT ON TABLE pg_catalog.pg_ts_template IS 'text search templates';
COMMENT ON TABLE pg_catalog.pg_type IS 'data types';
COMMENT ON TABLE pg_catalog.pg_user_mapping IS 'mappings of users to foreign servers';

END;

The next step was extracting the descriptions of the catalog columns (COMMENT ON COLUMN). I observed that each section documents a catalog and is identified as catalog-pg-*. Inside each section, the first table contains the information we want to extract. Those tables can have three or four columns. The following XSL script is capable of extracting the catalog column descriptions. It uses the template escape-string-sql to escape SQL strings too. Also, the template build-comment-col centralizes the COMMENT ON COLUMN output.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0"
                xmlns="http://www.w3.org/TR/xhtml1/transitional"
                exclude-result-prefixes="#default">

<xsl:output method="text" encoding="utf-8"/>

<!-- escape SQL string -->
<xsl:template name="escape-string-sql">
 <xsl:param name="mystr"/>
 <xsl:choose>
  <!-- &quot;&apos;&quot; = "'" -->
  <xsl:when test="contains($mystr, &quot;&apos;&quot;)">
   <xsl:value-of select="substring-before($mystr, &quot;&apos;&quot;)"/>
   <!-- replacing ' (&apos;) with '' -->
   <xsl:text>&apos;&apos;</xsl:text>
   <xsl:call-template name="escape-string-sql">
    <xsl:with-param name="mystr" select="substring-after($mystr, &quot;&apos;&quot;)"/>
   </xsl:call-template>
  </xsl:when>
  <xsl:otherwise>
   <xsl:value-of select="$mystr"/>
  </xsl:otherwise>
 </xsl:choose>
</xsl:template>

<!-- build catalog column comment -->
<xsl:template name="build-comment-col">
 <xsl:param name="mytab"/>
 <xsl:param name="mycol"/>
 <xsl:param name="mystr"/>
COMMENT ON COLUMN pg_catalog.<xsl:value-of select="$mytab"/>.<xsl:value-of select="$mycol"/> IS '<xsl:call-template name="escape-string-sql"><xsl:with-param name="mystr" select="normalize-space($mystr)"/></xsl:call-template>';
</xsl:template>

<xsl:template match="/chapter[@id='catalogs']">

 <xsl:text>BEGIN;&#xa;</xsl:text>

 <xsl:for-each select="sect1">

  <xsl:choose>
  <!-- comment on catalog columns -->
  <xsl:when test="contains(self::sect1/@id,'catalog-pg-')">
   <xsl:text>&#xa;-- comment on </xsl:text><xsl:value-of select="table/title/structname"/><xsl:text> columns</xsl:text>

   <xsl:variable name="tab" select="table/title/structname"/>
   <xsl:variable name="numcol" select="table/tgroup/@cols"/>

   <!-- consider only the first table of each section -->
   <xsl:for-each select="table[1]/tgroup/tbody/row">
    <xsl:choose>
    <!-- there are tables with 3 or 4 columns -->
    <xsl:when test="$numcol = 3">
     <xsl:call-template name="build-comment-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[3]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="$numcol = 4">
     <xsl:call-template name="build-comment-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    </xsl:choose>
   </xsl:for-each>
  </xsl:when>
  </xsl:choose>

 </xsl:for-each>

 <xsl:text>&#xa;END;</xsl:text>

</xsl:template>

</xsl:stylesheet>

Let's check the output:

euler@harman:~$ xsltproc /path/to/only-columns.xsl /path/to/catalogs.xml
BEGIN;

-- comment on catalog tables
-- comment on pg_aggregate columns
COMMENT ON COLUMN pg_catalog.pg_aggregate.aggfnoid IS 'pg_proc OID of the aggregate function';
COMMENT ON COLUMN pg_catalog.pg_aggregate.aggtransfn IS 'Transition function';
COMMENT ON COLUMN pg_catalog.pg_aggregate.aggfinalfn IS 'Final function (zero if none)';
COMMENT ON COLUMN pg_catalog.pg_aggregate.aggsortop IS 'Associated sort operator (zero if none)';
COMMENT ON COLUMN pg_catalog.pg_aggregate.aggtranstype IS 'Data type of the aggregate function''s internal transition (state) data';
COMMENT ON COLUMN pg_catalog.pg_aggregate.agginitval IS 'The initial value of the transition state. This is a text field containing the initial value in its external string representation. If this field is null, the transition state value starts out null.';
.
.
.
COMMENT ON COLUMN pg_catalog.pg_statistic.stawidth IS 'The average stored width, in bytes, of nonnull entries';

COMMENT ON COLUMN pg_catalog.pg_statistic.stadistinct IS 'The number of distinct nonnull data values in the column. A value greater than zero is the actual number of distinct values. A value less than zero is the negative of a multiplier for the number of rows in the table; for example, a column in which values appear about twice on the average could be represented by stadistinct = -0.5. A zero value means the number of distinct values is unknown.';

COMMENT ON COLUMN pg_catalog.pg_statistic.stakindN IS 'A code number indicating the kind of statistics stored in the Nth slot of the pg_statistic row.';

COMMENT ON COLUMN pg_catalog.pg_statistic.staopN IS 'An operator used to derive the statistics stored in the Nth slot. For example, a histogram slot would show the < operator that defines the sort order of the data.';
.
.
.
-- comment on pg_user_mapping columns
COMMENT ON COLUMN pg_catalog.pg_user_mapping.umuser IS 'OID of the local role being mapped, 0 if the user mapping is public';
COMMENT ON COLUMN pg_catalog.pg_user_mapping.umserver IS 'The OID of the foreign server that contains this mapping';
COMMENT ON COLUMN pg_catalog.pg_user_mapping.umoptions IS 'User mapping specific options, as keyword=value strings';

END;

Checking the generated file, everything seemed to be OK except for a small detail: some pg_statistic columns are documented in the form fooN, while there are actually 4 columns for each (foo1, foo2, foo3, and foo4). This is true for 4 column families of pg_statistic: stakind, staop, stanumbers, and stavalues. A hack transforms each "virtual" comment into 4 comments.

Two templates were designed to expand the statistic columns (expand-stat-col and build-comment-stat-col) and another one to convert the string "Nth" to "1st", "2nd", "3rd", or "4th" (convert-ord-number). Also, some tests for those special statistic columns were added in the main template.

The new version, only-columns-2.xsl, is below.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0"
                xmlns="http://www.w3.org/TR/xhtml1/transitional"
                exclude-result-prefixes="#default">

<xsl:output method="text" encoding="utf-8"/>

<!-- escape SQL string -->
<xsl:template name="escape-string-sql">
 <xsl:param name="mystr"/>
 <xsl:choose>
  <!-- &quot;&apos;&quot; = "'" -->
  <xsl:when test="contains($mystr, &quot;&apos;&quot;)">
   <xsl:value-of select="substring-before($mystr, &quot;&apos;&quot;)"/>
   <!-- replacing ' (&apos;) with '' -->
   <xsl:text>&apos;&apos;</xsl:text>
   <xsl:call-template name="escape-string-sql">
    <xsl:with-param name="mystr" select="substring-after($mystr, &quot;&apos;&quot;)"/>
   </xsl:call-template>
  </xsl:when>
  <xsl:otherwise>
   <xsl:value-of select="$mystr"/>
  </xsl:otherwise>
 </xsl:choose>
</xsl:template>

<!-- expand fooN to foo1, foo2, foo3, and foo4 (pg_statistic has columns like fooN) -->
<xsl:template name="expand-stat-col">
 <xsl:param name="mycol"/>
 <xsl:param name="mynum"/>
 <xsl:value-of select="substring-before($mycol, 'N')"/>
 <xsl:value-of select="$mynum"/>
</xsl:template>

<!-- convert Nth to 1st, 2nd, 3rd, or 4th -->
<xsl:template name="convert-ord-number">
 <xsl:param name="mystr"/>
 <xsl:param name="myord"/>
 <xsl:choose>
  <xsl:when test="contains($mystr, 'Nth')">
   <xsl:value-of select="substring-before($mystr, 'Nth')"/>
   <xsl:choose>
    <xsl:when test="$myord = 1">1st</xsl:when>
    <xsl:when test="$myord = 2">2nd</xsl:when>
    <xsl:when test="$myord = 3">3rd</xsl:when>
    <xsl:when test="$myord = 4">4th</xsl:when>
   </xsl:choose>
   <xsl:value-of select="substring-after($mystr, 'Nth')"/>
  </xsl:when>
 </xsl:choose>
</xsl:template>

<!-- build catalog column comment -->
<xsl:template name="build-comment-col">
 <xsl:param name="mytab"/>
 <xsl:param name="mycol"/>
 <xsl:param name="mystr"/>
COMMENT ON COLUMN pg_catalog.<xsl:value-of select="$mytab"/>.<xsl:value-of select="$mycol"/> IS '<xsl:call-template name="escape-string-sql"><xsl:with-param name="mystr" select="normalize-space($mystr)"/></xsl:call-template>';
</xsl:template>

<!-- build special catalog stat column comment -->
<xsl:template name="build-comment-stat-col">
 <xsl:param name="mytab"/>
 <xsl:param name="mycol"/>
 <xsl:param name="mystr"/>
 <!-- first column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>1</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">1</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
 <!-- second column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>2</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">2</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
 <!-- third column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>3</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">3</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
 <!-- fourth column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>4</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">4</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
</xsl:template>

<xsl:template match="/chapter[@id='catalogs']">

 <xsl:text>BEGIN;&#xa;</xsl:text>

 <xsl:for-each select="sect1">

  <xsl:choose>
  <!-- comment on catalog columns -->
  <xsl:when test="contains(self::sect1/@id,'catalog-pg-')">
   <xsl:text>&#xa;-- comment on </xsl:text><xsl:value-of select="table/title/structname"/><xsl:text> columns</xsl:text>

   <xsl:variable name="tab" select="table/title/structname"/>
   <xsl:variable name="numcol" select="table/tgroup/@cols"/>

   <!-- consider only the first table of each section -->
   <xsl:for-each select="table[1]/tgroup/tbody/row">
    <xsl:choose>
    <!-- comment on special stat columns -->
    <xsl:when test="contains(entry[1]/structfield, 'stakindN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains(entry[1]/structfield, 'staopN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains(entry[1]/structfield, 'stanumbersN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains(entry[1]/structfield, 'stavaluesN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <!-- there are tables with 3 or 4 columns -->
    <xsl:when test="$numcol = 3">
     <xsl:call-template name="build-comment-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[3]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="$numcol = 4">
     <xsl:call-template name="build-comment-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    </xsl:choose>
   </xsl:for-each>
  </xsl:when>
  </xsl:choose>

 </xsl:for-each>

 <xsl:text>&#xa;END;</xsl:text>

</xsl:template>

</xsl:stylesheet>


Let's check the output:

euler@harman:~$ xsltproc /path/to/only-columns-2.xsl /path/to/catalogs.xml
BEGIN;
.
.
.
COMMENT ON COLUMN pg_catalog.pg_statistic.stadistinct IS 'The number of distinct nonnull data values in the column. A value greater than zero is the actual number of distinct values. A value less than zero is the negative of a multiplier for the number of rows in the table; for example, a column in which values appear about twice on the average could be represented by stadistinct = -0.5. A zero value means the number of distinct values is unknown.';
COMMENT ON COLUMN pg_catalog.pg_statistic.stakind1 IS 'A code number indicating the kind of statistics stored in the 1st slot of the pg_statistic row.';
COMMENT ON COLUMN pg_catalog.pg_statistic.stakind2 IS 'A code number indicating the kind of statistics stored in the 2nd slot of the pg_statistic row.';
COMMENT ON COLUMN pg_catalog.pg_statistic.stakind3 IS 'A code number indicating the kind of statistics stored in the 3rd slot of the pg_statistic row.';
COMMENT ON COLUMN pg_catalog.pg_statistic.stakind4 IS 'A code number indicating the kind of statistics stored in the 4th slot of the pg_statistic row.';
.
.
.
END;

The next step was integrating only-tables.xsl and only-columns-2.xsl. The escape-string-sql template is used in both scripts, and the paths used to extract the descriptions are the same. The resulting script, extract-catalog.xsl, is below.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0"
                xmlns="http://www.w3.org/TR/xhtml1/transitional"
                exclude-result-prefixes="#default">

<xsl:output method="text" encoding="utf-8"/>

<!-- escape SQL string -->
<xsl:template name="escape-string-sql">
 <xsl:param name="mystr"/>
 <xsl:choose>
  <!-- &quot;&apos;&quot; = "'" -->
  <xsl:when test="contains($mystr, &quot;&apos;&quot;)">
   <xsl:value-of select="substring-before($mystr, &quot;&apos;&quot;)"/>
   <!-- replacing ' (&apos;) with '' -->
   <xsl:text>&apos;&apos;</xsl:text>
   <xsl:call-template name="escape-string-sql">
    <xsl:with-param name="mystr" select="substring-after($mystr, &quot;&apos;&quot;)"/>
   </xsl:call-template>
  </xsl:when>
  <xsl:otherwise>
   <xsl:value-of select="$mystr"/>
  </xsl:otherwise>
 </xsl:choose>
</xsl:template>

<!-- expand fooN to foo1, foo2, foo3, and foo4 (pg_statistic has columns like fooN) -->
<xsl:template name="expand-stat-col">
 <xsl:param name="mycol"/>
 <xsl:param name="mynum"/>
 <xsl:value-of select="substring-before($mycol, 'N')"/>
 <xsl:value-of select="$mynum"/>
</xsl:template>

<!-- convert Nth to 1st, 2nd, 3rd, or 4th -->
<xsl:template name="convert-ord-number">
 <xsl:param name="mystr"/>
 <xsl:param name="myord"/>
 <xsl:choose>
  <xsl:when test="contains($mystr, 'Nth')">
   <xsl:value-of select="substring-before($mystr, 'Nth')"/>
   <xsl:choose>
    <xsl:when test="$myord = 1">1st</xsl:when>
    <xsl:when test="$myord = 2">2nd</xsl:when>
    <xsl:when test="$myord = 3">3rd</xsl:when>
    <xsl:when test="$myord = 4">4th</xsl:when>
   </xsl:choose>
   <xsl:value-of select="substring-after($mystr, 'Nth')"/>
  </xsl:when>
 </xsl:choose>
</xsl:template>

<!-- build catalog column comment -->
<xsl:template name="build-comment-col">
 <xsl:param name="mytab"/>
 <xsl:param name="mycol"/>
 <xsl:param name="mystr"/>
COMMENT ON COLUMN pg_catalog.<xsl:value-of select="$mytab"/>.<xsl:value-of select="$mycol"/> IS '<xsl:call-template name="escape-string-sql"><xsl:with-param name="mystr" select="normalize-space($mystr)"/></xsl:call-template>';
</xsl:template>

<!-- build special catalog stat column comment -->
<xsl:template name="build-comment-stat-col">
 <xsl:param name="mytab"/>
 <xsl:param name="mycol"/>
 <xsl:param name="mystr"/>
 <!-- first column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>1</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">1</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
 <!-- second column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>2</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">2</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
 <!-- third column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>3</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">3</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
 <!-- fourth column -->
 <xsl:call-template name="build-comment-col">
  <xsl:with-param name="mytab" select="$mytab"/>
  <xsl:with-param name="mycol">
   <xsl:value-of select="substring-before($mycol, 'N')"/>
   <xsl:text>4</xsl:text>
  </xsl:with-param>
  <xsl:with-param name="mystr">
   <xsl:call-template name="convert-ord-number">
    <xsl:with-param name="mystr" select="$mystr"/>
    <xsl:with-param name="myord">4</xsl:with-param>
   </xsl:call-template>
  </xsl:with-param>
 </xsl:call-template>
</xsl:template>

<xsl:template match="/chapter[@id='catalogs']">

 <xsl:text>BEGIN;&#xa;</xsl:text>

 <xsl:text>&#xa;-- comment on catalog tables</xsl:text>

 <xsl:for-each select="sect1">

  <xsl:choose>
  <!-- comment on catalog tables -->
  <xsl:when test="self::sect1[@id='catalogs-overview']">
   <xsl:for-each select="table[@id='catalog-table']/tgroup/tbody/row">
COMMENT ON TABLE pg_catalog.<xsl:value-of select="entry[1]"/> IS '<xsl:call-template name="escape-string-sql"><xsl:with-param name="mystr" select="normalize-space(entry[2])"/></xsl:call-template>';</xsl:for-each>
   <xsl:text>&#xa;</xsl:text>
  </xsl:when>
  <!-- comment on catalog columns -->
  <xsl:when test="contains(self::sect1/@id,'catalog-pg-')">
   <xsl:text>&#xa;-- comment on </xsl:text><xsl:value-of select="table/title/structname"/><xsl:text> columns</xsl:text>

   <xsl:variable name="tab" select="table/title/structname"/>
   <xsl:variable name="numcol" select="table/tgroup/@cols"/>

   <!-- consider only the first table of each section -->
   <xsl:for-each select="table[1]/tgroup/tbody/row">
    <xsl:choose>
    <!-- comment on special stat columns -->
    <xsl:when test="contains(entry[1]/structfield, 'stakindN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains(entry[1]/structfield, 'staopN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains(entry[1]/structfield, 'stanumbersN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains(entry[1]/structfield, 'stavaluesN')">
     <xsl:call-template name="build-comment-stat-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>

    <!-- there are tables with 3 or 4 columns -->
    <xsl:when test="$numcol = 3">
     <xsl:call-template name="build-comment-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[3]"/>
     </xsl:call-template>
    </xsl:when>
    <xsl:when test="$numcol = 4">
     <xsl:call-template name="build-comment-col">
      <xsl:with-param name="mytab" select="$tab"/>
      <xsl:with-param name="mycol" select="entry[1]/structfield"/>
      <xsl:with-param name="mystr" select="entry[4]"/>
     </xsl:call-template>
    </xsl:when>
    </xsl:choose>
   </xsl:for-each>
  </xsl:when>
  </xsl:choose>

 </xsl:for-each>

 <xsl:text>&#xa;END;</xsl:text>

</xsl:template>

</xsl:stylesheet>

The SQL file can be built with the following command:

euler@harman:~$ xsltproc /path/to/extract-catalog.xsl /path/to/catalogs.xml > /path/to/catalogs.sql

The final step was to load the SQL file into PostgreSQL. Keep in mind that comments are not shared across databases; if you load the script into database foo, those comments will only be visible to users connected to database foo. You can load it into your template database (template1, for example), and any new database created from that template will inherit those comments.

euler@harman:~$ psql -f /path/to/catalogs.sql template1
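
After loading, the comments can be checked with psql's \d+ or with the obj_description/col_description functions. A quick check, run in a database that received the comments:

SELECT obj_description('pg_catalog.pg_aggregate'::regclass, 'pg_class');   -- 'aggregate functions'
SELECT col_description('pg_catalog.pg_aggregate'::regclass, 1);            -- comment on its first column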