Sequences v4
Many applications require that unique surrogate ids be assigned to database entries.
Often the database SEQUENCE
object is used to produce these. In
PostgreSQL, these can be either:
- A manually created sequence using the CREATE SEQUENCE command and retrieved by calling the nextval() function
- serial and bigserial columns or, alternatively, GENERATED BY DEFAULT AS IDENTITY columns
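For reference, here's a minimal sketch of these standard approaches; the table and sequence names are purely illustrative:

```sql
-- Option 1: an explicitly created sequence, read with nextval()
CREATE SEQUENCE order_id_seq;
CREATE TABLE orders (
    order_id bigint NOT NULL DEFAULT nextval('order_id_seq') PRIMARY KEY,
    note     text
);

-- Option 2: the bigserial shorthand, or an identity column
CREATE TABLE invoices (
    invoice_id bigserial PRIMARY KEY,
    note       text
);
CREATE TABLE payments (
    payment_id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    note       text
);
```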
However, standard sequences in PostgreSQL aren't multi-node aware and
produce values that are unique only on the local node. This is important because
ids generated by such sequences can collide across nodes, causing conflicts and
data loss (by means of discarded INSERT actions) in multi-master replication.
BDR global sequences
For this reason, BDR provides an application-transparent way to generate unique ids using sequences on bigint or bigserial datatypes across the whole BDR group, called global sequences.
BDR global sequences provide an easy way for applications to use the database to generate unique synthetic keys in an asynchronous distributed system that works for most—but not necessarily all—cases.
Using BDR global sequences allows you to avoid the problems with insert
conflicts. If you define a PRIMARY KEY
or UNIQUE
constraint on a column
that's using a global sequence, no node can ever get
the same value as any other node. When BDR synchronizes inserts between the
nodes, they can never conflict.
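As an illustrative sketch, assuming the cluster's bdr.default_sequence_kind (described below) maps bigserial columns to a global sequence kind, a primary key like this never produces the same value on two nodes:

```sql
-- On a BDR node where bigserial columns default to a global sequence kind,
-- concurrent inserts on different nodes never generate the same id.
CREATE TABLE customers (
    customer_id bigserial PRIMARY KEY,
    name        text NOT NULL
);

INSERT INTO customers (name) VALUES ('inserted on any node');
```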
BDR global sequences extend PostgreSQL sequences, so they are crash-safe. To use
them, you must be granted the bdr_application
role.
There are various possible algorithms for global sequences:
- SnowflakeId sequences
- Globally allocated range sequences
SnowflakeId sequences generate values using an algorithm that doesn't require inter-node communication at any point. It's faster and more robust and has the useful property of recording the timestamp at which the values were created.
SnowflakeId sequences have the restriction that they work only for 64-bit BIGINT datatypes and produce values up to 19 digits long, which might be too long for use in some host language datatypes such as JavaScript integer types. Globally allocated sequences allocate a local range of values that can be replenished as needed by inter-node consensus, making them suitable for either BIGINT or INTEGER sequences.
You can create a global sequence using the bdr.alter_sequence_set_kind()
function. This function takes a standard PostgreSQL sequence and marks it as
a BDR global sequence. It can also convert the sequence back to a standard
PostgreSQL sequence.
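A minimal sketch of such a conversion, assuming the two-argument form bdr.alter_sequence_set_kind(regclass, text) and the kind names listed below; check your BDR release for the exact signature:

```sql
-- Convert an existing local sequence into a SnowflakeId global sequence
SELECT bdr.alter_sequence_set_kind('order_id_seq'::regclass, 'snowflakeid');

-- Convert it back into a standard (local) PostgreSQL sequence
SELECT bdr.alter_sequence_set_kind('order_id_seq'::regclass, 'local');
```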
BDR also provides the configuration variable bdr.default_sequence_kind, which
determines the kind of sequence to create when the CREATE SEQUENCE command is
executed or when a serial, bigserial, or GENERATED BY DEFAULT AS IDENTITY
column is created. Valid settings are:
- local, meaning that newly created sequences are the standard PostgreSQL (local) sequences.
- galloc, which always creates globally allocated range sequences.
- snowflakeid, which creates global sequences for BIGINT sequences that consist of time, nodeid, and counter components. You can't use it with INTEGER sequences (so you can use it for bigserial but not for serial).
- timeshard, which is the older version of the SnowflakeId sequence and is provided for backward compatibility only. SnowflakeId is preferred.
- distributed (the default), which is a special value that you can use only for bdr.default_sequence_kind. It selects snowflakeid for int8 sequences (i.e., bigserial) and galloc sequences for int4 (i.e., serial) and int2 sequences.
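For example, a sketch of changing the default; bdrdb is a placeholder database name, and the variable can also be set in postgresql.conf:

```sql
-- Make newly created bigserial columns and sequences use SnowflakeId by default
ALTER DATABASE bdrdb SET bdr.default_sequence_kind = 'snowflakeid';

-- Or override the kind only for the current session
SET bdr.default_sequence_kind = 'galloc';
```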
The bdr.sequences
view shows information about individual sequence kinds.
currval()
and lastval()
work correctly for all types of global sequence.
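For example (a sketch; the exact columns of bdr.sequences depend on the BDR version):

```sql
-- List sequences and the kind BDR assigns to each
SELECT * FROM bdr.sequences;

-- currval() and lastval() behave as usual after a nextval() call in the session
SELECT nextval('order_id_seq');
SELECT currval('order_id_seq'), lastval();
```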
SnowflakeId sequences
The ids generated by SnowflakeId sequences are loosely time ordered so you can use them to get the approximate order of data insertion, like standard PostgreSQL sequences. Values generated within the same millisecond might be out of order, even on one node. The property of loose time ordering means they are suitable for use as range partition keys.
SnowflakeId sequences work on one or more nodes and don't require any inter-node communication after the node join process completes. So you can continue to use them even if there's the risk of extended network partitions. They aren't affected by replication lag or inter-node latency.
SnowflakeId sequences generate unique ids in a different way from standard sequences. The algorithm uses three components for a sequence number. The first component of the sequence is a timestamp at the time of sequence number generation. The second component of the sequence number is the unique id assigned to each BDR node, which ensures that the ids from different nodes are always different. The third component is the number generated by the local sequence.
While adding a unique node id to the sequence number is enough to ensure there are no conflicts, we also want to keep another useful property of sequences. The ordering of the sequence numbers roughly corresponds to the order in which data was inserted into the table. Putting the timestamp first ensures this.
A few limitations and caveats apply to SnowflakeId sequences.
- SnowflakeId sequences are 64 bits wide and need a bigint or bigserial. Values generated are up to 19 digits long.
- There's no practical 32-bit integer version, so you can't use SnowflakeId with serial sequences. Use globally allocated range sequences instead.
- For SnowflakeId there's a limit of 4096 sequence values generated per millisecond on any given node (about 4 million sequence values per second). If sequence value generation wraps around within a given millisecond, the SnowflakeId sequence waits until the next millisecond and gets a fresh value for that millisecond.
- Since SnowflakeId sequences encode timestamps into the sequence value, you can generate new sequence values only within the given time frame (depending on the system clock). The oldest timestamp that you can use is 2016-10-07, which is the epoch time for SnowflakeId. The values wrap to negative values in the year 2086 and completely run out of numbers by 2156.
- Since the timestamp is an important part of a SnowflakeId sequence, there's additional protection from generating sequences with a timestamp older than the latest one used in the lifetime of a postgres process (but not between postgres restarts).
The INCREMENT
option on a sequence used as input for SnowflakeId sequences is
effectively ignored. This might be relevant for applications that do sequence
ID caching, like many object-relational mapper (ORM) tools, notably Hibernate.
Because the sequence is time based, this has little practical effect since the
sequence advances to a new noncolliding value by the time the
application can do anything with the cached values.
Similarly, you might change the START
, MINVALUE
, MAXVALUE
, and CACHE
settings
on the underlying sequence, but there's no benefit to doing
so. The sequence's low 14 bits are used and the rest is discarded, so
the value range limits don't affect the function's result. For the same
reason, setval()
isn't useful for SnowflakeId sequences.
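As a sketch of why this is harmless, assuming order_id_seq was converted to a SnowflakeId sequence as above:

```sql
-- An ORM-style increment setting on the underlying sequence is effectively ignored
ALTER SEQUENCE order_id_seq INCREMENT BY 50;

-- nextval() still returns time-based, noncolliding SnowflakeId values;
-- the increment, like setval(), has no practical effect on future values
SELECT nextval('order_id_seq');
```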
Timeshard sequences
Timeshard sequences are provided for backward compatibility with existing installations but aren't recommended for new application use. We recommend using the SnowflakeId sequence instead.
Timeshard is very similar to SnowflakeId but has different limits, fewer protections, and slower performance.
The differences between timeshard and SnowflakeId are as follows:
- Timeshard can generate up to 16384 sequence values per millisecond (about 16 million per second), which is more than SnowflakeId. However, there's no protection against wraparound within a given millisecond. Schemas using the timeshard sequence must protect the use of the UNIQUE constraint when using timeshard values for a given column.
- The timestamp component of the timeshard sequence runs out of values in the year 2050 and, if used in combination with bigint, the values wrap to negative numbers in the year 2033. This means that sequences generated after 2033 have negative values. This is a considerably shorter time span than SnowflakeId and is the main reason why SnowflakeId is preferred.
- Timeshard sequences require occasional disk writes (similar to standard local sequences). SnowflakeIds are calculated in memory, so SnowflakeId sequences are in general a little faster than timeshard sequences.
Globally allocated range sequences
The globally allocated range (or galloc
) sequences allocate ranges (chunks)
of values to each node. When the local range is used up, a new range is
allocated globally by consensus amongst the other nodes. This uses the key
space efficiently but requires that the local node be connected to a majority
of the nodes in the cluster for the sequence generator to progress when the
currently assigned local range is used up.
Unlike SnowflakeId sequences, galloc
sequences support all sequence data types
provided by PostgreSQL: smallint
, integer
, and bigint
. This means that
you can use galloc
sequences in environments where 64-bit sequences are
problematic. Examples include using integers in JavaScript, since it supports only
53-bit values, or when the sequence is displayed on output with limited space.
The range assigned in each allocation is currently predetermined based on the datatype the sequence is using:
- smallint — 1 000 numbers
- integer — 1 000 000 numbers
- bigint — 1 000 000 000 numbers
Each node allocates two chunks of seq_chunk_size, one for the current use plus a reserved chunk for future usage, so the values generated from any one node increase monotonically. However, viewed globally, the values generated aren't ordered at all. This might cause a loss of performance due to the effects on b-tree indexes and typically means that generated values aren't useful as range partition keys.
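As an illustration only (galloc_seq is a hypothetical integer sequence, and the actual ranges depend on which node requested a chunk first and on the cluster state):

```sql
-- With the default integer chunk size of 1,000,000, node A might hold
-- 1..1000000 (in use) plus 1000001..2000000 (reserved), while node B holds
-- 2000001..3000000 plus 3000001..4000000.
SELECT nextval('galloc_seq');  -- on node A: 1, then 2, 3, ... (locally monotonic)
SELECT nextval('galloc_seq');  -- on node B: 2000001, 2000002, ... (not globally ordered)
```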
The main downside of the galloc sequences is that once the assigned range is
used up, the sequence generator has to ask for consensus about the next range
for the local node, which requires inter-node communication. This could
lead to delays or operational issues if the majority of the BDR group isn't
accessible. This might be avoided in later releases.
The CACHE
, START
, MINVALUE
, and MAXVALUE
options work correctly
with galloc
sequences. However, you need to set them before transforming
the sequence to the galloc
kind. The INCREMENT BY
option also works
correctly. However, you can't assign an increment value that's equal
to or more than the above ranges assigned for each sequence datatype.
setval()
doesn't reset the global state for galloc
sequences; don't use it.
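For example, a sketch of setting these options before the conversion (the sequence name is illustrative, and the exact bdr.alter_sequence_set_kind() signature can vary by release):

```sql
-- Configure the underlying sequence first ...
CREATE SEQUENCE ticket_id_seq AS integer START 1000 MAXVALUE 2000000000 CACHE 20;

-- ... then transform it into a globally allocated range sequence
SELECT bdr.alter_sequence_set_kind('ticket_id_seq'::regclass, 'galloc');
```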
A few limitations apply to galloc
sequences. BDR tracks galloc
sequences in a
special BDR catalog bdr.sequence_alloc. This
catalog is required to track the currently allocated chunks for the galloc
sequences. The sequence name and namespace are stored in this catalog. Since the
sequence chunk allocation is managed by Raft, whereas any changes to the
sequence name or namespace are managed by the replication stream, BDR currently doesn't
support renaming galloc sequences, moving them to another namespace, or
renaming the namespace that contains a galloc sequence. Be
mindful of this limitation while designing the application schema.
Converting a local sequence to a galloc sequence
Before transforming a local sequence to galloc, you need to take care of several prerequisites.
1. Verify that sequence and column data type match
Check that the sequence's data type matches the data type of the column with
which it will be used. For example, you can create a bigint
sequence
and assign an integer
column's default to the nextval()
returned by that
sequence. With galloc sequences, which for bigint
are allocated in blocks of
1 000 000 000, this quickly results in the values returned by nextval()
exceeding the int4
range if more than two nodes are in use.
The following example shows what can happen:
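Here's a minimal sketch of such a setup (the exact arguments of bdr.alter_sequence_set_kind() can differ between BDR versions):

```sql
CREATE SEQUENCE seq;  -- defaults to bigint

CREATE TABLE seqtest (
    id integer NOT NULL PRIMARY KEY DEFAULT nextval('seq'::regclass)
);

SELECT bdr.alter_sequence_set_kind('seq'::regclass, 'galloc');
```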
After executing INSERT INTO seqtest VALUES(DEFAULT)
on two nodes, the table
contains the following values: