Custom types #12
-
In my opinion, flexibility over "natural" implementation would be better, especially this early into development. Here are some of my thoughts on this:
-
I've been trying different things, and I'm going to put notes about different ideas in different messages below to make threading a little more organized.
-
Option: skip the extra types at the Postgres level, and just use JSONB
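For concreteness, a minimal sketch of what this could look like (table and column names here are hypothetical): the email gets stored pre-parsed as JSONB, and operations like grouping by domain become JSONB lookups.

```sql
-- Hypothetical sketch: no custom Postgres type, just JSONB holding the
-- pre-parsed pieces of the email.
CREATE TABLE people (
    id serial PRIMARY KEY,
    email jsonb CHECK (email ? 'address' AND email ? 'domain')
);

INSERT INTO people (email)
VALUES ('{"address": "alice@example.com", "domain": "example.com"}');

-- Grouping emails by domain is a plain JSONB lookup:
SELECT email ->> 'domain' AS domain, count(*)
FROM people
GROUP BY email ->> 'domain';
```

The obvious trade-offs: psql (and anything else talking to Postgres directly) just sees raw JSON, and any real validation has to live outside the database or in CHECK constraints like the one above.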
-
Implement everything at the Postgres level for types
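As a rough sketch of what I mean (names hypothetical): the parsed representation and its helpers would all be plain SQL objects shipped with our install scripts, so every client sees the same behaviour.

```sql
-- Hypothetical sketch: the type's representation and helpers live entirely
-- in the database as ordinary SQL objects.
CREATE TYPE email AS (address text, domain text);

CREATE FUNCTION parse_email(text) RETURNS email AS $$
    SELECT ROW($1, split_part($1, '@', 2))::email;
$$ LANGUAGE SQL IMMUTABLE;

-- Works from psql or anywhere else:
SELECT (parse_email('alice@example.com')).domain;
```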
-
Use Postgres composite types combined with SQLAlchemy for input/output parsing
This is my least favorite option.
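To spell out what I have in mind (names hypothetical): Postgres would only hold a bare composite type, and a SQLAlchemy type (something like a TypeDecorator) would convert between the user-facing string and the composite on the way in and out. The SQL below only shows what that layer would read and write; the actual parsing code would be Python.

```sql
-- Postgres side: a bare composite type, with no validation or parsing here.
CREATE TYPE email AS (address text, domain text);
CREATE TABLE people (id serial PRIMARY KEY, contact email);

-- What the SQLAlchemy layer might emit after parsing 'alice@example.com':
INSERT INTO people (contact)
VALUES (ROW('alice@example.com', 'example.com')::email);

-- On the way out, the composite comes back in its record form,
-- e.g. (alice@example.com,example.com), and the Python layer re-renders it.
SELECT (contact).address, (contact).domain FROM people;
```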
-
Use Postgres Domains, and skip custom types altogether
I think this might be the best short-term (i.e., MVP) option, though I really don't like the lack of support in SQLAlchemy.
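For reference, the whole Postgres side of this option is roughly just the following (the regex and names are hypothetical):

```sql
-- Hypothetical sketch: a domain over text that validates on input.
CREATE DOMAIN email AS text
    CHECK (VALUE ~ '^[^@]+@[^@]+\.[^@]+$');

CREATE TABLE people (id serial PRIMARY KEY, contact email);

-- Values stay plain text, so things like grouping by domain work
-- everywhere with ordinary string functions:
SELECT split_part(contact, '@', 2) AS domain, count(*)
FROM people
GROUP BY 1;
```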
-
I've been thinking a lot about custom types (for this post, I'll use `email` as an example type). To quickly recap, we want custom types to:

- accept and display a natural string form (e.g., [email protected]),
- support operations like `GROUP BY` (for example, we could group emails by domain),
- behave sensibly in `psql` and other clients.
The natural solution would be to use PostgreSQL's Custom Base Type extensibility. This presents a number of problems:

- The type has to be written in `C`, compiled, and `make install`ed on the machine running the postgres server (though `C` is still a pretty popular/common language).
- That rules out anyone running Postgres as a managed service, since they don't control the server machine.
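To illustrate why: even the SQL side of a custom base type is just a shim over compiled code. A sketch of the general shape, following the pattern in the Postgres docs (the shared-library and function names here are hypothetical):

```sql
-- Sketch only: register a base type whose input/output functions live in a
-- compiled C shared library that must already be installed on the server.
CREATE TYPE email;  -- shell type, so the functions below can reference it

CREATE FUNCTION email_in(cstring) RETURNS email
    AS 'email_ext', 'email_in' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION email_out(email) RETURNS cstring
    AS 'email_ext', 'email_out' LANGUAGE C IMMUTABLE STRICT;

CREATE TYPE email (
    INPUT  = email_in,
    OUTPUT = email_out,
    INTERNALLENGTH = VARIABLE
);
```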
A less natural solution would be to try to cobble together similar functionality using composite types, domains to check the input format (in our example, a domain could check that the email address fulfills some regex), and a system of triggers to automatically parse the string input from a given column into the composite type and store that in a separate column. This also has a number of downsides.
This second solution would be quite painful to create and maintain, and would be suboptimal for anyone who does have full control over their Postgres install (or who's using our Docker image). It would, however, be possible to implement entirely through SQL scripts, which makes it about as portable as we can get: it would work on any Postgres instance, and while other RDBMSes might not fare quite as well, they should still support custom composite types (those are part of the SQL standard), and it should be possible to recreate the checks and auto-filling of columns with whatever functionality those systems provide.
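For concreteness, here's roughly the shape of that cobbled-together setup (all names and the regex are hypothetical, and it glosses over plenty of detail):

```sql
-- 1. A domain validates the raw string input.
CREATE DOMAIN email_input AS text
    CHECK (VALUE ~ '^[^@]+@[^@]+\.[^@]+$');

-- 2. A composite type holds the parsed form.
CREATE TYPE email_parts AS (address text, domain text);

CREATE TABLE people (
    id serial PRIMARY KEY,
    contact email_input,
    contact_parsed email_parts
);

-- 3. A trigger keeps the parsed column filled in automatically.
CREATE FUNCTION fill_contact_parsed() RETURNS trigger AS $$
BEGIN
    NEW.contact_parsed :=
        ROW(NEW.contact::text, split_part(NEW.contact, '@', 2))::email_parts;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER people_parse_contact
    BEFORE INSERT OR UPDATE ON people
    FOR EACH ROW EXECUTE FUNCTION fill_contact_parsed();
```

Multiply that by every table and every custom-typed column, and the maintenance pain becomes clear.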
To me, this question comes down to the balance we want to strike between flexibility (porting to other DBMSes, supporting managed Postgres) and a more "natural" implementation that would probably give us a more maintainable codebase in the long run.