README.md (18 additions, 6 deletions)
Example partition names: `j_myActor_0`, `j_myActor_1`, `j_worker_0`, etc.

Keep in mind that the default maximum length for a table name in Postgres is 63 bytes, so you should avoid any non-ASCII characters in your `persistenceId`s and keep the `prefix` reasonably short.
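For illustration, the relevant settings might look like the following (key names under `partitions` other than `prefix` are assumptions; check the plugin's reference configuration for the actual keys):

```hocon
postgres-journal {
  tables.journal.partitions {
    # Keep the prefix short: Postgres caps table names at 63 bytes.
    prefix = "j"
    # Assumed key name: number of events per partition.
    size = 10000000
  }
}
```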
> :warning: Once any of the partitioning settings under the `postgres-journal.tables.journal.partitions` branch is settled, you should never change it. Otherwise you might end up with PostgresExceptions caused by table name or range conflicts.
## Migration
### Migration from akka-persistence-jdbc 4.0.0
It is possible to migrate existing journals from Akka Persistence JDBC 4.0.0.
Since we decided to extract metadata from the serialized payload and store it in a separate column, it is not possible to migrate an existing journal and snapshot store using plain SQL scripts.
#### How migration works
Each journal event and snapshot has to be read and deserialized; its metadata and tags must be extracted, and then everything is stored in the new table.
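Conceptually, the per-row migration looks like this (pseudocode only; names like `extractMetadata` are placeholders, not the artifact's actual API):

```scala
// Pseudocode sketch of the read -> deserialize -> extract -> store cycle.
oldJournalRows.foreach { row =>
  val event    = serialization.deserialize(row.payload) // read + deserialize
  val metadata = extractMetadata(event)                 // serializer ids, manifests, ...
  val tags     = extractTags(event)
  // Everything is written to the new table layout, metadata in its own column.
  newJournal.insert(row.ordering, row.persistenceId, row.sequenceNumber, event, metadata, tags)
}
```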
We provide an optional artifact, `akka-persistence-postgres-migration`, that brings into your project the classes necessary to automate the above process.
**Important**: Our util classes neither drop nor update any old data. The original tables will still be there, renamed with an `old_` prefix. It's up to you to decide when to drop them.
#### How to use the plugin-provided migrations
##### Add akka-persistence-migration to your project
It's your choice whether to trigger the migration manually or (recommended) to leverage a database version control system of your choice (e.g. Flyway).
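A sketch of triggering the migration manually (hedged: only `Jdbc4SnapshotStoreMigration(config).run()` appears in this diff; `Jdbc4JournalMigration`, the packages, and the constructor signatures are assumptions to be checked against the artifact's API):

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import com.typesafe.config.ConfigFactory

// Hypothetical wiring; class names and signatures may differ in the
// actual akka-persistence-postgres-migration artifact.
val config = ConfigFactory.load()
val migration =
  for {
    _ <- new Jdbc4JournalMigration(config).run()       // assumed journal counterpart
    _ <- new Jdbc4SnapshotStoreMigration(config).run() // shown in this diff
  } yield ()
```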
#### Examples
An example Flyway-based migration can be found in the demo app: https://github.com/mkubala/demo-akka-persistence-postgres/blob/master/src/main/scala/com/github/mkubala/FlywayMigrationExample.scala
### Migration from akka-persistence-postgres 0.4.0 to 0.5.0
New indices need to be created on each partition. To avoid locking production databases for too long, this should be done in two steps:
1. manually create indices CONCURRENTLY,
2. deploy new release with migration scripts.
#### Manually create indices CONCURRENTLY
Execute the DDL statements produced by the [sample migration script](scripts/migratrion-0.5.0/partitioned/1-add-indices-manually.sql), adapting the top-level variables to match your journal configuration before executing.
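The generated statements have this general shape (a sketch only; the actual partition, index, and column names come from your journal configuration, and the partition name below is an example):

```sql
-- Example DDL for one journal partition; CONCURRENTLY avoids taking an
-- exclusive lock on the table while the index is built.
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS j_myactor_0_persistence_sequence_idx
    ON public.j_myactor_0 USING BTREE (persistence_id, sequence_number);
```

Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block, which is why this step is performed manually rather than by a transactional migration tool.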
#### Deploy new release with migration scripts
See the [sample Flyway migration script](scripts/migratrion-0.5.0/partitioned/2-add-indices-flyway.sql) and adapt the top-level variables to match your journal configuration.
## Contributing
We are always looking for contributions and new ideas, so if you'd like to join the project, check out the [open issues](https://github.com/SwissBorg/akka-persistence-postgres/issues) or post your own suggestions!
Changed code in `PartitionedJournalDao`:

```diff
@@ -69,9 +69,13 @@ class PartitionedJournalDao(db: Database, journalConfig: JournalConfig, serializ
   val name = s"${partitionPrefix}_$partitionNumber"
   val minRange = partitionNumber * partitionSize
   val maxRange = minRange + partitionSize
-  withHandledPartitionErrors(logger, s"ordering between $minRange and $maxRange") {
-    sqlu"""CREATE TABLE IF NOT EXISTS #${schema + name} PARTITION OF #${schema + journalTableCfg.tableName} FOR VALUES FROM (#$minRange) TO (#$maxRange)"""
-  }
+  val partitionName = s"${schema + name}"
+  val indexName = s"${name}_persistence_sequence_idx"
+  withHandledPartitionErrors(logger, s"$partitionName (ordering between $minRange and $maxRange)") {
+    sqlu"""CREATE TABLE IF NOT EXISTS #$partitionName PARTITION OF #${schema + journalTableCfg.tableName} FOR VALUES FROM (#$minRange) TO (#$maxRange)"""
+  }.andThen(withHandledIndexErrors(logger, s"$indexName for partition $partitionName") {
+    sqlu"""CREATE UNIQUE INDEX IF NOT EXISTS #$indexName ON #$partitionName USING BTREE (#${journalTableCfg.columnNames.persistenceId}, #${journalTableCfg.columnNames.sequenceNumber});"""
```
Fragment of the added SQL migration script:

```sql
        -- unique btree on (persistence_id, sequence_number)
        v_sql := 'CREATE UNIQUE INDEX IF NOT EXISTS ' || quote_ident(v_rec.child || v_persistence_seq_idx) || ' ON ' || quote_ident(v_schema) || '.' || quote_ident(v_rec.child) || ' USING BTREE (' || quote_ident(v_column_persistence_id) || ',' || quote_ident(v_column_sequence_number) || ');';
        RAISE NOTICE 'Running DDL: %', v_sql;
        EXECUTE v_sql;
    END LOOP;
END;
$$;

-- drop global, non-unique index
DROP INDEX IF EXISTS journal_persistence_id_sequence_number_idx;
```