DB conversion: SQLite to PostgreSQL

I started testing Vikunja and, as already said in other posts: it’s damn f* great! :rocket:

To get a fast start, I used SQLite as the backend database. Now I want to reinstall Vikunja on my production server with Valkey and PostgreSQL.

I can’t export my data from Vikunja because it wants to send the export via e-mail, which is not set up on my old instance. Also, in the working directory of Vikunja, /opt/vikunja/ in my case, there is a files folder with a subfolder called user-export-tmp, but it is empty.

Is there any other way to migrate from SQLite to PostgreSQL, or did I miss something? :thinking:

I haven’t tried it, but it should certainly be possible to migrate.
I once did a 2 TB DB conversion from Oracle to PG, so I can assure you that nothing is impossible after what I’ve experienced.
A quick Google search turned up a trustworthy conversion tool with 5.6K stars called pgloader.

1 Like

OMG, that sounds crazy! :exploding_head:
So you’ve seen everything once you’ve survived Oracle, haven’t you?

Yes, I also found pgloader. Don’t get me wrong, I googled before creating this post, but I thought the tool was only available for Debian systems! So I created this post here; maybe you would have had a better idea anyway. :blush:

There are also RPM packages available, as I found out today. :wink:
So thanks for poking me again in the right direction! :handshake:

Note:
Choose your RPM distribution from the PostgreSQL Common Repository section.

After a long hiccup, it finally worked. I needed to temporarily allow IP connections to my PostgreSQL server. After that it’s easy: you can use the db file, vikunja.db in my case, directly from your working directory.

pgloader vikunja.db pgsql://dev@192.168.200.203/devdb
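In case anyone wonders what “allow IP connections” boils down to, it is roughly two server-side changes; a minimal sketch (the network range and auth method below are examples, not my exact setup):

-- Make PostgreSQL listen on the network (a server restart is required),
-- and add a matching client rule to pg_hba.conf, e.g.:
--   host  devdb  dev  192.168.200.0/24  scram-sha-256
ALTER SYSTEM SET listen_addresses = '*';
SELECT pg_reload_conf();  -- picks up pg_hba.conf edits; listen_addresses only changes after the restart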

So far so good, right? Not exactly…
At first the migration worked, in general, but there are still some issues.

The log output of Vikunja mentions that there is invalid input syntax for bigint. I know there are differences between SQLite and PostgreSQL, but shouldn’t pgloader reconcile or convert those differences? Or did I miss something in the Vikunja configuration or anywhere else? :man_shrugging::thinking:

[root@pro-00000006-vkja logs]# cat standard.log 
2025-02-21T14:55:16Z: INFO	▶ 004 Running migrations…
2025-02-21T14:55:16Z: INFO	▶ 06e Ran all migrations successfully.
2025-02-21T14:55:16Z: INFO	▶ 06f Mailer is disabled, not sending reminders per mail
2025-02-21T14:55:16Z: INFO	▶ 070 Mailer is disabled, not sending overdue per mail
2025-02-21T14:55:16Z: INFO	▶ 071 Vikunja version v0.24.6
2025-02-21T14:55:46Z: ERROR	▶ 0f3 pq: invalid input syntax for type bigint: "false"
2025-02-21T14:55:46Z: ERROR	▶ 100 pq: invalid input syntax for type bigint: "false"
[root@pro-00000006-vkja logs]# 

If I forgot to include some piece of information, don’t hesitate to ask for it; I’ll try to deliver it. :blush:

Thank you for reading and helping. :pray:

  1. Did you change this env variable to postgres? If not, I would advise redoing the conversion from scratch.
  2. Try changing the SQL log level to see which query exactly fails.
    See the code snippet with the needed env variables here (a config.yml equivalent is sketched below):
    Dev: how to log DML SQL queries
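For reference, if you configure via config.yml instead of env variables, the equivalent should be something like this (an assumption on my side that these are the standard log.database / log.databaselevel options):

log:
  # Log SQL queries to stdout at debug level
  # (assumption: option names as in the default Vikunja config)
  database: "stdout"
  databaselevel: "debug"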

P.S. It’s always worth checking whether a tool like pgloader has a Docker image you can run: it makes setup much easier, doesn’t leave clutter on your system, and has many other benefits.
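For example, assuming the dimitri/pgloader image from Docker Hub, a one-off run without installing anything would look roughly like:

docker run --rm -v /opt/vikunja:/data dimitri/pgloader:latest \
    pgloader /data/vikunja.db pgsql://dev@192.168.200.203/devdb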

1 Like

If you mean the database section in the config.yml, yes:

database:
  # Database type to use. Supported values are mysql, postgres and sqlite. Vikunja is able to run with MySQL 8.0+, Mariadb 10.2+, PostgreSQL 12+, and sqlite.
  type: "postgres"
  # Database user which is used to connect to the database.
  user: "dev"
  # Database password
  password: "some_unsecure_password"
  # Database host
  host: "localhost"
  # Database to use
  database: "devdb"
  # When using sqlite, this is the path where to store the data
  path: "./vikunja.db"
  # Sets the max open connections to the database. Only used when using mysql and postgres.
  maxopenconnections: 100
  # Sets the maximum number of idle connections to the db.
  maxidleconnections: 50
  # The maximum lifetime of a single db connection in milliseconds.
  maxconnectionlifetime: 10000

The database log looks as follows:

2025-02-22T10:44:46.667296109Z: INFO    ▶ [DATABASE] 004 [SQL] SELECT tablename FROM pg_tables WHERE schemaname = $1 [public] - 12.439974ms
2025-02-22T10:44:46.682627636Z: INFO    ▶ [DATABASE] 005 [SQL] SELECT column_name, column_default, is_nullable, data_type, character_maximum_length, description,
    CASE WHEN p.contype = 'p' THEN true ELSE false END AS primarykey,
    CASE WHEN p.contype = 'u' THEN true ELSE false END AS uniquekey
FROM pg_attribute f
    JOIN pg_class c ON c.oid = f.attrelid JOIN pg_type t ON t.oid = f.atttypid
    LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
    LEFT JOIN pg_description de ON f.attrelid=de.objoid AND f.attnum=de.objsubid
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
    LEFT JOIN pg_class AS g ON p.confrelid = g.oid
    LEFT JOIN INFORMATION_SCHEMA.COLUMNS s ON s.column_name=f.attname AND c.relname=s.table_name
WHERE n.nspname= s.table_schema AND c.relkind = 'r' AND c.relname = $1 AND s.table_schema = $2 AND f.attnum > 0 ORDER BY f.attnum; [migration public] - 15.129377ms
2025-02-22T10:44:46.685035397Z: INFO    ▶ [DATABASE] 006 [SQL] SELECT indexname, indexdef FROM pg_indexes WHERE tablename=$1 AND schemaname=$2 [migration public] - 2.235786ms
2025-02-22T10:44:46.685107774Z: WARNING   ▶ [DATABASE] 007 Table migration column id db type is TEXT, struct type is VARCHAR(255)
2025-02-22T10:44:46.685128007Z: WARNING   ▶ [DATABASE] 008 Table migration column description db type is TEXT, struct type is VARCHAR(255)
2025-02-22T10:44:46.685781644Z: INFO    ▶ [DATABASE] 009 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [SCHEMA_INIT] - 580.417µs
2025-02-22T10:44:46.686153949Z: INFO    ▶ [DATABASE] 00a [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190324205606] - 312.897µs
2025-02-22T10:44:46.686420181Z: INFO    ▶ [DATABASE] 00b [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190328074430] - 222.499µs
2025-02-22T10:44:46.686653661Z: INFO    ▶ [DATABASE] 00c [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190430111111] - 197.649µs
2025-02-22T10:44:46.68689575Z: INFO     ▶ [DATABASE] 00d [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190511202210] - 202.442µs
2025-02-22T10:44:46.687188882Z: INFO    ▶ [DATABASE] 00e [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190514192749] - 218.493µs
2025-02-22T10:44:46.687408343Z: INFO    ▶ [DATABASE] 00f [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190524205441] - 186.34µs
(...)

I believe it has something to do with the WARNING lines (not to play Captain Obvious, but these are the only two WARNINGs I get in the log file). If needed, I can upload the complete log.

2025-02-22T10:44:46.685107774Z: WARNING   ▶ [DATABASE] 007 Table migration column id db type is TEXT, struct type is VARCHAR(255)
2025-02-22T10:44:46.685128007Z: WARNING   ▶ [DATABASE] 008 Table migration column description db type is TEXT, struct type is VARCHAR(255)

Also, I have noticed that the error message in the Vikunja Desktop app only appears when I click on Overview and Upcoming. Nothing else throws an error message.

I checked for differences between the SQLite and PostgreSQL databases, or rather in the migration table, but did not find any, or at least none I’m aware of. However, I checked through a GUI application called Antares SQL, so maybe the information shown there is limited.

On stdout I also get these errors when I start Vikunja via the CLI:

2025-02-22T11:05:20Z: WEB       ▶ 192.168.200.1  GET 500 /api/v1/labels?page=1 31.932181ms - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) vikunja-desktop/v0.24.3 Chrome/122.0.6261.156 Electron/29.4.6 Safari/537.36
(...)
2025-02-22T11:05:31Z: WEB       ▶ 192.168.200.1  GET 500 /api/v1/labels?page=1 24.129382ms - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) vikunja-desktop/v0.24.3 Chrome/122.0.6261.156 Electron/29.4.6 Safari/537.36
(...)
2025-02-22T11:05:33Z: WEB       ▶ 192.168.200.1  GET 500 /api/v1/tasks/all?sort_by[]=due_date&sort_by[]=id&order_by[]=asc&order_by[]=desc&filter=done+%3D+false&filter_include_nulls=false&s=&filter_timezone=Europe%2FZurich&page=1 4.223774ms - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) vikunja-desktop/v0.24.3 Chrome/122.0.6261.156 Electron/29.4.6 Safari/537.36
(...)
2025-02-22T11:05:33Z: WEB       ▶ 192.168.200.1  GET 500 /api/v1/tasks/all?sort_by[]=due_date&sort_by[]=id&order_by[]=asc&order_by[]=desc&filter=done+%3D+false+%26%26+due_date+%3C+%272025-03-01T11:05:33.679Z%27+%26%26+due_date+%3E+%272025-02-22T11:05:33.679Z%27&filter_include_nulls=false&s=&filter_timezone=Europe%2FZurich&page=1 3.707673ms - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) vikunja-desktop/v0.24.3 Chrome/122.0.6261.156 Electron/29.4.6 Safari/537.36

Yes, I know, but I haven’t learned Docker (or, preferably, Podman) yet; I created a task for it in Vikunja. :wink: Anyway, for such situations I have a development/test VM, so there’s no problem with cluttering up the system.

I hope I have summarized all the important information; otherwise you know what to do. :blush:

It’s worth checking the PG log to get the timestamp of when the error happens, then checking the desktop/SQL logs around that time to identify the likely culprit tables, then setting up an alternative Vikunja environment on PG and comparing the column types of the discovered tables.
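A simple way to dump the column types of a suspect table for such a comparison (a sketch; tasks is just an example table name): run it against the converted DB and against a freshly created Vikunja PG DB, then diff the output.

-- Column names, types, and defaults of one table, in definition order
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'tasks'
ORDER BY ordinal_position;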

I probably found the culprit:

The sequence objects are not the same, as you can see in the reference picture. Keep in mind that devdb is the one where I tested the import, or rather the migration, from SQLite to PostgreSQL, while devdb2 (great name, I know) is the new one created from scratch by Vikunja.

I wanted to export the sequences and re-import them, but that didn’t work. :man_shrugging:
I tried updating the columns and an INSERT SQL script; neither worked. I also tried to delete the sequences completely, but got a dependency error with the following message:

ERROR: cannot drop sequence api_tokens_id_seq because other objects depend on it
  Detail: default value for column id of table api_tokens depends on sequence api_tokens_id_seq
  Hint: Use DROP ... CASCADE to drop the dependent objects too.

Which makes sense, because that’s exactly what sequences are there for, but I didn’t know exactly how to (correctly) recreate the missing sequences and associate them with the correct tables… Why I can create new ones but cannot import them is a mystery to me, but maybe I misunderstood something in general or in DBeaver.
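For the record, recreating a missing per-table id sequence by hand usually looks like this; a sketch using api_tokens from the error above (adjust table and column per case):

-- Recreate the sequence, tie its lifetime to the column,
-- make the column use it, and fast-forward it past the existing ids.
CREATE SEQUENCE api_tokens_id_seq OWNED BY api_tokens.id;
ALTER TABLE api_tokens
    ALTER COLUMN id SET DEFAULT nextval('api_tokens_id_seq');
SELECT setval('api_tokens_id_seq', COALESCE(MAX(id), 1)) FROM api_tokens;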

Anyway, my second thought in this situation is to let Vikunja recreate the database and then re-import the users and tasks tables. Should this work, or are there other tables I must not miss re-importing because they’re associated as well? :thinking:

You need to make sure the sequences in SQLite and in the converted PG are the same: same count, same names, same values.
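One way to list them for comparison (a sketch; the pg_sequences view requires PostgreSQL 10+):

-- PostgreSQL: names and current values of all sequences
SELECT sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'public'
ORDER BY sequencename;
-- SQLite keeps its counters in sqlite_sequence instead
-- (only populated for tables declared with AUTOINCREMENT):
--   SELECT name, seq FROM sqlite_sequence;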

Also, sequences don’t sound like they have anything to do with the original error pq: invalid input syntax for type bigint: "false", but maybe you’re right.

I think this approach won’t work because of ID differences, dependencies, etc.

P.S. How many tasks have you got already? Wouldn’t it be faster to recreate everything manually at this point?

I tried different approaches, but the database errors do not go away…
I’ll probably need to recreate the tasks by hand again…

Yes, it didn’t work.

Yes, in terms of how much time I have already wasted, that would be, or rather is, faster; nonetheless very tedious.

It also really sucks that the export function is tied to the e-mail function. If you don’t set up e-mail, you don’t have an export function in Vikunja.

Update:

I found out that if you select Export in the Settings of Vikunja, it creates a zip file in the root path of the binary, or wherever is configured via the config.yml file. However, it does not have a real filename or extension; in my case, it’s just called 8.

[root@pro-00000006-vkja files]# ll
total 12
-rw-r--r--. 1 root root 8600 Feb 25 10:10 8
drwx------. 1 root root    0 Feb 25 10:10 user-export-tmp
[root@pro-00000006-vkja files]# pwd
/opt/vikunja2/files
[root@pro-00000006-vkja files]# 

I renamed it to bkp_vikunja.zip. Inside are two JSON files, data.json and filters.json.
I can import that file into Vikunja, but this approach fails as well.

STDOUT Log:

2025-02-25T10:26:55Z: ERROR     ▶ 8f4 [creating structure] Error while creating structure: Project view does not exist [ProjectViewID: 0]
2025-02-25T10:26:55Z: ERROR     ▶ 8f5 could not insert data: Project view does not exist [ProjectViewID: 0]
2025-02-25T10:26:55Z: WEB       ▶ 192.168.200.1  PUT 500 /api/v1/migration/vikunja-file/migrate 17.615314797s - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) vikunja-desktop/v0.24.3 Chrome/122.0.6261.156 Electron/29.4.6 Safari/537.36

Do I really need to recreate the projects and "groups" (projects inside projects)?

The ongoing tasks and labels were imported successfully, but as mentioned, none of the tasks marked Done, nor the project folders/views in general.

Yeah, that’s unfortunate.
I had the same error with a data import back in the day:

Okay, I see… so there is no solution for this at the moment? :disappointed_relieved:

DB conversion should be possible; it’s just that we need someone with a bit of DB knowledge and time to migrate SQLite to PG and document the caveats here.
I can’t see myself diving into this; it could be a 15-minute thing, or it could very quickly become a rabbit hole and eat up the whole day.
I suggest copying everything over to a new PG instance manually.

1 Like

Returning to the last error I saw regarding the DB conversion.

@LxWulf, please try doing a regular DB conversion using pgloader, and once you confirm that it succeeded (and before deploying the app on top of it), try running the following script on the new PG DB:

Sync sequences SQL

SELECT setval('projects_id_seq', COALESCE(MAX(id), 1)) FROM projects;
SELECT setval('migration_id_seq', COALESCE(MAX(id), 1)) FROM migration;
SELECT setval('saved_filters_id_seq', COALESCE(MAX(id), 1)) FROM saved_filters;
SELECT setval('subscriptions_id_seq', COALESCE(MAX(id), 1)) FROM subscriptions;
SELECT setval('task_reminders_id_seq', COALESCE(MAX(id), 1)) FROM task_reminders;
SELECT setval('api_tokens_id_seq', COALESCE(MAX(id), 1)) FROM api_tokens;
SELECT setval('users_id_seq', COALESCE(MAX(id), 1)) FROM users;
SELECT setval('totp_id_seq', COALESCE(MAX(id), 1)) FROM totp;
SELECT setval('users_projects_id_seq', COALESCE(MAX(id), 1)) FROM users_projects;
SELECT setval('label_tasks_id_seq', COALESCE(MAX(id), 1)) FROM label_tasks;
SELECT setval('link_shares_id_seq', COALESCE(MAX(id), 1)) FROM link_shares;
SELECT setval('labels_id_seq', COALESCE(MAX(id), 1)) FROM labels;
SELECT setval('unsplash_photos_id_seq', COALESCE(MAX(id), 1)) FROM unsplash_photos;
SELECT setval('reactions_id_seq', COALESCE(MAX(id), 1)) FROM reactions;
SELECT setval('task_attachments_id_seq', COALESCE(MAX(id), 1)) FROM task_attachments;
SELECT setval('buckets_id_seq', COALESCE(MAX(id), 1)) FROM buckets;
SELECT setval('team_projects_id_seq', COALESCE(MAX(id), 1)) FROM team_projects;
SELECT setval('team_members_id_seq', COALESCE(MAX(id), 1)) FROM team_members;
SELECT setval('files_id_seq', COALESCE(MAX(id), 1)) FROM files;
SELECT setval('task_relations_id_seq', COALESCE(MAX(id), 1)) FROM task_relations;
SELECT setval('notifications_id_seq', COALESCE(MAX(id), 1)) FROM notifications;
SELECT setval('migration_status_id_seq', COALESCE(MAX(id), 1)) FROM migration_status;
SELECT setval('tasks_id_seq', COALESCE(MAX(id), 1)) FROM tasks;
SELECT setval('teams_id_seq', COALESCE(MAX(id), 1)) FROM teams;
SELECT setval('task_assignees_id_seq', COALESCE(MAX(id), 1)) FROM task_assignees;
SELECT setval('webhooks_id_seq', COALESCE(MAX(id), 1)) FROM webhooks;
SELECT setval('project_views_id_seq', COALESCE(MAX(id), 1)) FROM project_views;
SELECT setval('task_comments_id_seq', COALESCE(MAX(id), 1)) FROM task_comments;
SELECT setval('user_tokens_id_seq', COALESCE(MAX(id), 1)) FROM user_tokens;

P.S. Synchronizing the sequences makes no sense; they seem to be in sync already, even after the DB conversion with pgloader.

1 Like

Hopefully I can solve, or at least help to solve, this issue with my situation.

So I did the conversion of the database from SQLite to PostgreSQL with pgloader, which was successful, so far.

pgloader log
officerk@fl41:~/Downloads$ pgloader sqlite:///home/officerk/Downloads/vikunja.db pgsql://dev:MyDBPassword@192.168.200.203/devdb2
2025-02-25T16:14:53.057999Z LOG pgloader version "3.6.7~devel"
2025-02-25T16:14:53.063999Z LOG Data errors in '/tmp/pgloader/'
2025-02-25T16:14:53.239997Z LOG Migrating from #<SQLITE-CONNECTION sqlite:///home/officerk/Downloads/vikunja.db {1005C61763}>
2025-02-25T16:14:53.240997Z LOG Migrating into #<PGSQL-CONNECTION pgsql://dev@192.168.200.203:5432/devdb2 {1005DBECC3}>
2025-02-25T16:14:57.537943Z LOG report summary reset
             table name     errors       rows      bytes      total time
-----------------------  ---------  ---------  ---------  --------------
                  fetch          0          0                     0.000s
        fetch meta data          0        116                     0.100s
         Create Schemas          0          0                     0.002s
       Create SQL Types          0          0                     0.008s
          Create tables          0         66                     1.040s
         Set Table OIDs          0         33                     0.014s
-----------------------  ---------  ---------  ---------  --------------
              migration          0        100     1.6 kB          0.138s
               projects          0          9     0.8 kB          0.154s
                  teams          0          0                     0.307s
          team_projects          0          0                     0.294s
         task_assignees          0          0                     0.368s
            label_tasks          0         37     1.0 kB          0.441s
            link_shares          0          0                     0.496s
       task_attachments          0          0                     0.562s
                buckets          0         28     1.8 kB          0.658s
          saved_filters          0          0                     0.638s
              favorites          0          0                     0.742s
         typesense_sync          0          0                     0.756s
              reactions          0          0                     0.844s
         task_positions          0         75     1.5 kB          0.921s
                  files          0          0                     0.906s
                  users          0          1     0.3 kB          1.056s
            user_tokens          0          0                     1.024s
                  tasks          0         30    19.2 kB          0.045s
           team_members          0          0                     0.004s
         users_projects          0          0                     0.083s
                 labels          0         16     1.1 kB          0.261s
         task_reminders          0          0                     0.202s
         task_relations          0         12     0.5 kB          0.390s
          task_comments          0          1     0.1 kB          0.431s
        unsplash_photos          0          0                     0.477s
          subscriptions          0          1     0.0 kB          0.569s
             api_tokens          0          0                     0.562s
               webhooks          0          0                     0.675s
          project_views          0         33     2.4 kB          0.756s
           task_buckets          0         30     0.2 kB          0.790s
       migration_status          0          0                     0.832s
                   totp          0          0                     0.911s
          notifications          0          0                     0.943s
-----------------------  ---------  ---------  ---------  --------------
COPY Threads Completion          0          4                     1.095s
         Create Indexes          0         83                     4.871s
 Index Build Completion          0         83                     1.380s
        Reset Sequences          0         14                     0.138s
           Primary Keys          0         29                     0.139s
    Create Foreign Keys          0          0                     0.000s
        Create Triggers          0          0                     0.001s
       Install Comments          0          0                     0.000s
-----------------------  ---------  ---------  ---------  --------------
      Total import time          ✓        373    30.4 kB          7.624s

The tables with data were migrated successfully, so far.

Reference picture:

The sequences are also there, in the same number as in the SQLite database.

Reference picture:

Nonetheless, even though it looks good, the script does not run at all, saying that none of the sequences exist… somehow… :man_shrugging:

SQL Error [42703]: ERROR: column "‘projects_id_seq’" does not exist
  Position: 15
  ERROR: column "‘projects_id_seq’" does not exist
  Position: 15
  ERROR: column "‘projects_id_seq’" does not exist
  Position: 15
(...)

This gets repeated for every line in the SQL script. It would be understandable for some lines or commands to throw an error because the sequence really does not exist, like webhooks_id_seq for example and a few others, but projects_id_seq does exist.
Also, as you may see in the reference picture, those are not present in the SQLite database either. :thinking:

Is there a core issue in general with the SQLite database from Vikunja? When I create a new PostgreSQL database, the sequences are all there, as mentioned before, but not in the SQLite database.

I mean, based on your screenshots the sequences seem to be the same as in SQLite, so there’s really no reason to run the script I suggested. (By the way, the column "‘projects_id_seq’" does not exist wording suggests the straight quotes turned into typographic quotes somewhere between the forum and your SQL client; PostgreSQL then parses ‘projects_id_seq’ as an identifier instead of a string literal, which would make every line fail equally.)
But then I don’t understand what caused them to be non-synchronized here:

P.S. Sorry, I think I misunderstood: that was a comparison between the raw PG app and the converted PG app, so those screenshots don’t deserve attention.

What happens if you now deploy Vikunja on a new PG DB: do the sequences get messed up by the app deployment itself?
If not, do you still get any errors? Overview/Upcoming page problems again?

If there are any errors, please post the PG DB log here.
If specific tables are the problem, post the descriptions of those tables’ data types here too (both new PG and SQLite); that will help identify possible data type differences.

Yes, it may be the same, but it is not functional…

Eh…okay…:face_with_spiral_eyes:

I don’t think so. None get deleted, altered, or added, at least as far as I understand it.

Yes, I still get errors when I click on the Overview and Upcoming page. Also, it’s not usable at all because several functions do not work:

  • Creating new projects
  • Duplicating projects
  • Labels are missing (though the data is there in the labels database table)
  • Creating tasks

It does show the tasks and kanban boards, but it’s more like a museum to look at than something to work with.

I tried to log what happens in the background, but I did not find anything interesting in the logs, even though I set everything to DEBUG. But maybe I missed something. :man_shrugging:

But the PostgreSQL log seems interesting: it says the same as before, ERROR: invalid input syntax for type bigint: "false", again and again. Also, the CONTEXT value is unnamed portal parameter $X, where X is a number, 7 for example. It is included in the logs linked below.

Note:
Ignore the FATAL error which appears: I dropped the database devdb2 while the Vikunja app was running, then recreated it and reloaded the data with pgloader. I also tried to set the correct encoding of the database, but it was already set correctly, so you can ignore this as well. If it’s too confusing, I can recreate a fresh setup or specific situations with logs.

Logs | bpa.st

If I missed something, just say so; I will try to deliver the answers or information.

Note:
Maybe it would be a good idea to allow file uploads with the .log extension in Discourse, if possible. :thinking:

To summarize the solution:

1. Use the pgloader tool to migrate Vikunja's SQLite DB to PostgreSQL

Link to pgloader: here

2. Convert all boolean DB columns in PG from bigint to boolean type

Here are the SQL statements you need to execute on the new PostgreSQL DB:

SQL that converts semantically boolean columns from bigint to boolean
ALTER TABLE projects
    ALTER COLUMN is_archived DROP DEFAULT,
    ALTER COLUMN is_archived TYPE boolean USING is_archived != 0,
    ALTER COLUMN is_archived SET DEFAULT false;

ALTER TABLE tasks
    ALTER COLUMN done DROP DEFAULT,
    ALTER COLUMN done TYPE boolean USING done != 0,
    ALTER COLUMN done SET DEFAULT false;

ALTER TABLE saved_filters
    ALTER COLUMN is_favorite DROP DEFAULT,
    ALTER COLUMN is_favorite TYPE boolean USING is_favorite != 0,
    ALTER COLUMN is_favorite SET DEFAULT false;

ALTER TABLE teams
    ALTER COLUMN is_public DROP DEFAULT,
    ALTER COLUMN is_public TYPE boolean USING is_public != 0,
    ALTER COLUMN is_public SET DEFAULT false;

ALTER TABLE team_members
    ALTER COLUMN admin DROP DEFAULT,
    ALTER COLUMN admin TYPE boolean USING admin != 0,
    ALTER COLUMN admin SET DEFAULT false;

ALTER TABLE totp
    ALTER COLUMN enabled DROP DEFAULT,
    ALTER COLUMN enabled TYPE boolean USING enabled != 0,
    ALTER COLUMN enabled SET DEFAULT false;

ALTER TABLE users
    ALTER COLUMN email_reminders_enabled DROP DEFAULT,
    ALTER COLUMN email_reminders_enabled TYPE boolean USING email_reminders_enabled != 0,
    ALTER COLUMN email_reminders_enabled SET DEFAULT false;

ALTER TABLE users
    ALTER COLUMN discoverable_by_name DROP DEFAULT,
    ALTER COLUMN discoverable_by_name TYPE boolean USING discoverable_by_name != 0,
    ALTER COLUMN discoverable_by_name SET DEFAULT false;

ALTER TABLE users
    ALTER COLUMN discoverable_by_email DROP DEFAULT,
    ALTER COLUMN discoverable_by_email TYPE boolean USING discoverable_by_email != 0,
    ALTER COLUMN discoverable_by_email SET DEFAULT false;

ALTER TABLE users
    ALTER COLUMN overdue_tasks_reminders_enabled DROP DEFAULT,
    ALTER COLUMN overdue_tasks_reminders_enabled TYPE boolean USING overdue_tasks_reminders_enabled != 0,
    ALTER COLUMN overdue_tasks_reminders_enabled SET DEFAULT false;
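A quick sanity check after running the statements above (a sketch); every returned row should now report boolean:

SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
  AND column_name IN ('is_archived', 'done', 'is_favorite', 'is_public', 'admin',
                      'enabled', 'email_reminders_enabled', 'discoverable_by_name',
                      'discoverable_by_email', 'overdue_tasks_reminders_enabled')
ORDER BY table_name, column_name;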

Explanation

As far as I understand, the problem is that SQLite stores boolean values (true/false) as numbers, while PG has a dedicated boolean data type for that.

During conversion there’s no way for the pgloader tool to know whether a column should be boolean or a number (bigint), so it defaults to bigint (basically leaves a number a number).

The backend code then fails to insert values into these columns because it passes true/false while PG expects numbers there.

During the Oracle-to-PG conversion I had exactly the same problem, though the fix was a bit easier because our semantically boolean columns at least had a number length of 1. I don’t think that’s the case for SQLite, which makes the distinction between boolean and number non-trivial for the conversion tool.

Note: new boolean columns might be added in future versions, which means that if you know about them, you should adjust the SQL statements accordingly.
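To spot future candidates, a query along these lines lists the remaining bigint columns (a sketch; the name filter is a heuristic, not an authoritative list):

-- Semantically boolean columns (is_*, done, enabled, admin, ...)
-- that are still typed bigint after conversion
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND data_type = 'bigint'
  AND (column_name LIKE 'is\_%' OR column_name IN ('done', 'enabled', 'admin'))
ORDER BY table_name, column_name;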

1 Like

That sounds like a possible explanation for it. :thinking:

Though this does not explain why the sequences are not the same in number, right? Or did I miss something? :thinking:

Anyway, I recreated the situation as you suggested: re-imported the SQLite database via pgloader and then ran your SQL script on the database. Sadly, it did not work completely flawlessly.

Nonetheless, there are also some good things: I can now create tasks and also save or move them, which didn’t work before. But I still can’t create projects or teams, and Overview and Upcoming still throw errors.

By the way, I find this an interesting aspect: the labels are in the database, but they do not show in the UI, neither in the web nor in the desktop app; this was also the case before the script.

Note:
My manual conversion of the data from my old instance to the new one was successful, and I’m happy to use it in production. :rocket:

Pardon me, I’ve updated the SQL in my previous post.
Please start from the beginning and apply the SQL on top of a fresh conversion.
It’s hard to test this without an actual PG instance at hand, but I think that should be it, because I was able to reproduce and fix the issue on an online demo PG DB.

That’s a good sign indeed.

You said yourself that they are identical here:

So let’s not worry about them; the current focus is on the SQL errors. One problem at a time.

Didn’t catch what you mean here. Fehlstart (“false start”)? :laughing: