Monday, December 27, 2010
PostgreSQL 9.0 High Performance
I've just finished Greg Smith's "PostgreSQL 9.0 High Performance". It's an excellent book for any DBA: it provides detailed information on PostgreSQL-related tuning, from server hardware and OS settings to query tuning and DBMS monitoring. We are going to upgrade our DBMS server next year, so I'll have an opportunity to apply these recommendations in practice. The only drawback is that the book is Linux-oriented, while our DBMS server runs FreeBSD; still, the basic settings for our OS are also covered. Now I wonder: should I use ZFS for PostgreSQL on FreeBSD? At the very least I should run some benchmarks...
Saturday, December 25, 2010
Social network
When a book is over and there is a mess in my head, a movie helps... I've just watched "The Social Network". At least it was interesting. The eternal opposition between men who just do something and believe in it, and men who just want to earn some money and clear off.
Friday, December 24, 2010
Paradise Lost
I've just finished reading Milton's "Paradise Lost". Mixed impressions... I had tried to start this book three times before. It is sometimes quite boring, but it has very interesting details. Disliking the Bible and religion in general, I can still appreciate Milton's interpretation of the banishment from paradise. Also, the author draws the image of Lucifer not without sympathy: a figure defying an omnipotent God is worthy of respect...
Monday, December 20, 2010
Just remember: never use Ctrl+Ins in a server console
I'm quite disturbed. I had found a file to delete, copied its name, typed "rm", and pressed Ctrl+Ins... which effectively pasted in the contents of some script, so I got "rm #!/bin/sh". Lucky me that it wasn't "rm /bin/sh"... I hate having several copy-paste buffers: one general buffer in KDE and a separate one in Konsole... Uuuuuu..........
Friday, December 10, 2010
One more awful day
Just one more day when everything goes wrong: you are so exhausted that you can't even read, you are eager to sleep but you can't, and so on, and so on. And there's no one to blame for this except yourself.
The only wish is to drink a glass of vodka and go to sleep...
Sounds like shit, feels much worse...
Wednesday, December 8, 2010
Vim-compatible web-based text editor
It is not a joke: Google has created a vim-compatible web-based text editor :) Now I don't feel like such a geek. Vim is the best editor on the command line, but why would anyone use it in a GUI environment, especially in a user-oriented OS?
Is the Russian Culture Department a band of thieves?
I do like our government!!! "Fuck them and their law" (c) Prodigy. The Russian Culture Department has recently come up with a perfect new idea: let's take money from sellers of computers and e-book readers (1% of sales). Quite an elegant way of feeding an army of bureaucrats while the Russian economy is on its knees...
Monday, December 6, 2010
This is why Russian science will die, or something about philanthropy
I am finishing teaching a DBMS-related course at Southern Federal University. I've given 34 hours of lectures (once per week from September to December) and received about $100 for this work. Our universities are the most philanthropic organizations in the world!!!
I'm not complaining (I have other sources of income), but it clearly shows why I'm not motivated to give lectures :) Preparing one lecture takes me about 6 hours, plus 2 hours of lecturing, so that's 8 hours per week, or about 128 hours overall. That's about half of a full working month, which means I'm supposed to work for $200 per month... Cool! The Russian higher education system has always shocked me. Its main miracle is that it still exists.
And we keep hearing all kinds of shit from our beloved leaders about High Technologies and Science. It's quite funny.
Sunday, December 5, 2010
XID wraparound and PostgreSQL single-user mode
I've just found an interesting case on the pgsql-general mailing list. It supports the idea that manuals should be read from cover to cover. Of course, I've never bothered with that :)
The case was the following: the DBMS refused to start with this message:
2010-12-01 08:36:42 EST LOG: database system was shut down at 2010-11-29 17:38:17 EST
2010-12-01 08:36:42 EST FATAL: invalid cache id: 19
2010-12-01 08:36:43 EST FATAL: the database system is starting up
2010-12-01 08:36:43 EST LOG: startup process (PID 640) exited with exit code 1
2010-12-01 08:36:43 EST LOG: aborting startup due to startup process failure
The previous messages in the log were:
2010-11-29 12:39:17 EST ERROR: database is not accepting commands to avoid wraparound data loss in database "fps_data"
2010-11-29 12:39:17 EST HINT: Stop the postmaster and use a standalone backend to vacuum that database.
I'm glad it was not my DBMS. As we can see from the log, PostgreSQL shut itself down to avoid XID wraparound. This is covered in detail in the manual. The main idea is that every row in a table is marked with the XID of the last transaction that modified it. When a row has not been modified for a long time, vacuum (or autovacuum) freezes its XID so that it can never wrap (to compare XIDs, PostgreSQL uses modulo-2^31 arithmetic, so that "for every normal XID, there are two billion XIDs that are "older" and two billion that are "newer"; another way to say it is that the normal XID space is circular with no endpoint" (c) PostgreSQL 9.0 documentation).
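The modulo-2^31 comparison quoted from the documentation can be sketched in a few lines. This is a toy illustration in Python, not PostgreSQL's actual code (the real logic lives in TransactionIdPrecedes() in the backend):

```python
# Toy sketch of circular XID comparison (not PostgreSQL's actual code).
# XID a is "older" than b when the 32-bit difference a - b, taken as a
# signed value, is negative, i.e. when the unsigned difference has the
# high bit set.

def xid_precedes(a: int, b: int) -> bool:
    """True if XID a is 'older' than XID b in the circular XID space."""
    diff = (a - b) & 0xFFFFFFFF   # wraparound 32-bit difference
    return diff >= 2**31          # high bit set => signed diff < 0

print(xid_precedes(100, 101))            # True: 100 is older than 101
print(xid_precedes(3_000_000_000, 100))  # True: the space is circular,
                                         # so a numerically larger XID
                                         # can still be "older"
```

This is why the server refuses commands before the oldest unfrozen XID gets anywhere near 2^31 transactions behind the current one: at that distance, old rows would suddenly look like they were created in the future.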
In this situation, the server saw that we were close to the maximum difference of 2 billion between the current XID and the oldest one in the database, and shut itself down. So now we have to run PostgreSQL in single-user mode and vacuum the affected database. Something like this:
# su - pgsql -c '/usr/local/bin/postgres --single -D /pgdata dbname'
PostgreSQL stand-alone backend 9.0.0
backend> vacuum;
backend> ^D
# /usr/local/etc/rc.d/postgresql start
Friday, December 3, 2010
DBI-Link: why is it so buggy?
I've spent several hours trying to make DBI-Link work: I got errors related to table names and errors related to dbi_link.shadow_trigger_func(). However, I couldn't just give up, as I had been asked to suggest a solution for MySQL <=> PostgreSQL replication, and this module could do the job...
So, I found the following post, which helped me a lot, and modified dbi_link.shadow_trigger_func() to look like this:
CREATE OR REPLACE FUNCTION dbi_link.shadow_trigger_func()
RETURNS TRIGGER
LANGUAGE plperlu
AS $$
require 5.8.3;
######################################################
# #
# Immediately reject anything that is not an INSERT. #
# #
######################################################
if ($_TD->{event} ne 'INSERT') {
return "SKIP";
}
spi_exec_query('SELECT dbi_link.dbi_link_init()');
my $data_source_id = shift;
die "In shadow_trigger_function, data_source_id must be an integer"
unless ($data_source_id =~ /^\d+$/);
my $query = "SELECT dbi_link.cache_connection( $data_source_id )";
warn "In shadow_trigger_function, calling\n $query" if $_SHARED{debug};
warn "In shadow_trigger_function, the trigger payload is\n". Dump(\$_TD) if $_SHARED{debug};
my $rv = spi_exec_query($query);
my $remote_schema = $_SHARED{get_connection_info}->({
data_source_id => $data_source_id
})->{remote_schema};
my $table = $_TD->{relname};
warn "Raw table name is $table" if $_SHARED{debug};
warn "In trigger on $table, action is $_TD->{new}{iud_action}" if $_SHARED{debug};
$table =~ s{
\A # Beginning of string.
(.*) # Actual table name.
_shadow # Strip off shadow.
\z # End of string.
}
{$1}sx;
$table = $remote_schema . "." . $table if defined $remote_schema;
warn "Cooked table name is $table" if $_SHARED{debug};
my $iud = {
I => \&do_insert,
U => \&do_update,
D => \&do_delete,
};
if ($iud->{ $_TD->{new}{iud_action} }) {
$iud->{ $_TD->{new}{iud_action} }->({
payload => $_TD->{new},
tbl => $table,
source_id => $data_source_id
});
}
else {
die "Trigger event was $_TD->{new}{iud_action}<, but should have been one of I, U or D!"
}
return 'SKIP';
sub do_insert {
my ($params) = @_;
die "In do_insert, must pass a payload!"
unless (defined $params->{payload});
die "In do_insert, payload must be a hash reference!"
unless (ref $params->{payload} eq 'HASH');
my (@keys, @values);
foreach my $key (sort keys %{ $params->{payload} } ) {
next unless $key =~ /^.?new_(.*)/;
my $real_key = $1;
push @keys, $real_key;
push @values, $_SHARED{dbh}{ $params->{source_id} }->quote(
$params->{payload}{$key}
);
}
my $sql = <<SQL;
INSERT INTO $params->{tbl} (
@{[
join(
",\n ",
@keys
)
]}
)
VALUES (
@{[
join(
",\n ",
@values
)
]}
)
SQL
warn "SQL is\n$sql" if $_SHARED{debug};
$_SHARED{dbh}{ $params->{source_id} }->do($sql);
}
sub do_update {
my ($params) = @_;
die "In do_update, must pass a payload!"
unless (defined $params->{payload});
die "In do_update, payload must be a hash reference!"
unless (ref $params->{payload} eq 'HASH');
my $sql = <<SQL;
UPDATE $params->{tbl}
SET
@{[ make_pairs({
payload => $params->{payload},
which => 'new',
joiner => ",\n ",
transform_null => 'false',
source_id => "$params->{source_id}"
}) ]}
WHERE
@{[ make_pairs({
payload => $params->{payload},
which => 'old',
joiner => "\nAND ",
transform_null => 'true',
source_id => "$params->{source_id}"
}) ]}
SQL
warn "SQL is\n$sql" if $_SHARED{debug};
$_SHARED{dbh}{ $params->{source_id} }->do($sql);
}
sub do_delete {
my ($params) = @_;
die "In do_delete, must pass a payload!"
unless (defined $params->{payload});
die "In do_delete, payload must be a hash reference!"
unless (ref $params->{payload} eq 'HASH');
my $sql = <<SQL;
DELETE FROM $params->{tbl}
WHERE
@{[ make_pairs({
payload => $params->{payload},
which => 'old',
joiner => "\nAND ",
transform_null => 'true',
source_id => "$params->{source_id}"
}) ]}
SQL
warn "SQL is\n$sql" if $_SHARED{debug};
$_SHARED{dbh}{ $params->{source_id} }->do($sql);
}
sub make_pairs {
my ($params) = @_;
die "In make_pairs, must pass a payload!"
unless (defined $params->{payload});
die "In make_pairs, payload must be a hash reference!"
unless (ref $params->{payload} eq 'HASH');
warn "In make_pairs, parameters are:\n". Dump($params) if $_SHARED{debug};
my @pairs;
foreach my $key (
keys %{ $params->{payload} }
) {
next unless $key =~ m/^(.?)$params->{which}_(.*)/;
my $left = "$1$2";
warn "In make_pairs, raw key is $key, cooked key is $left" if $_SHARED{debug};
warn "In make_pairs, pairs are @pairs" if $_SHARED{debug};
if (
defined $params->{transform_null} && # In a WHERE clause,
!defined $params->{payload}{$key} # turn undef into IS NULL
) {
push @pairs, "$left IS NULL"; # "col = NULL" never matches, so emit IS NULL
}
else {
warn "In make_pairs, params->{source_id} is $params->{source_id}" if $_SHARED{debug};
push @pairs, "$left = " . $_SHARED{dbh}{ $params->{source_id} }->quote(
$params->{payload}{$key}
);
}
}
my $ret =
join (
$params->{joiner},
@pairs,
);
warn "In make_pairs, the pairs are:\n". Dump(\@pairs) if $_SHARED{debug};
return $ret;
}
$$;
And it worked! Now I can simply create after-each-row triggers on a local PostgreSQL table and push the data to MySQL. I like DBI-Link; however, if I were to use it in production, I would examine its code carefully...
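The make_pairs() part of the function above is the interesting bit: it matches payload keys prefixed with new_/old_, strips the prefix, and (in WHERE clauses) renders NULLs as IS NULL. Here is a rough, simplified Python equivalent of that idea; quote() is a naive stand-in for DBI's $dbh->quote(), and all names here are mine, not DBI-Link's (the Perl version also tolerates a one-character quoting prefix before new_/old_, which I omit):

```python
# Rough Python sketch of the make_pairs() idea from the trigger above.

def quote(value):
    """Naive stand-in for DBI's $dbh->quote(); illustration only."""
    if value is None:
        return "NULL"
    return "'" + str(value).replace("'", "''") + "'"

def make_pairs(payload, which, joiner, transform_null=False):
    """Build 'col = value' pairs from keys prefixed with '<which>_'.

    With transform_null=True (for WHERE clauses), a None value becomes
    'col IS NULL' instead of the never-matching 'col = NULL'.
    """
    prefix = which + "_"
    pairs = []
    for key, value in sorted(payload.items()):
        if not key.startswith(prefix):
            continue
        column = key[len(prefix):]   # strip the 'new_' / 'old_' prefix
        if transform_null and value is None:
            pairs.append(f"{column} IS NULL")
        else:
            pairs.append(f"{column} = {quote(value)}")
    return joiner.join(pairs)

payload = {"old_id": 1, "old_name": None, "new_name": "bob"}
print(make_pairs(payload, "old", " AND ", transform_null=True))
# id = '1' AND name IS NULL
print(make_pairs(payload, "new", ", "))
# name = 'bob'
```

The old_ pairs become the WHERE clause of the remote UPDATE/DELETE and the new_ pairs become the SET list, which is exactly how the do_update() and do_delete() subs use them.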