Add support for reviving Dates #829
Comments
Good idea! I know EJSON from Meteor, which provides similar functionality.
My five cents: I use EJSON because I come from Meteor, but I only use it for Dates. I think a simple out-of-the-box solution for data persistence is nice, but I am also looking into syncing large collections (or subsets of large collections), and it looks like sync and persistence need to be separated from signaldb collections.
Superjson could be an optional dependency, since not everyone will use the storage adapters that store JSON.
Interesting how superjson solves this (with extra meta on the side). 🤔 It seems very complete. Performance is also an important factor, though, and I noticed a report of poor performance in superjson: blitz-js/legacy-framework#778 (comment). I think it would make sense to split the persistence providers into their own package, together with any extra dependencies.
devalue is mentioned in one of the links above as another (fast) JSON serializer.
I’ve just created a pull request (#832) that adds the ability to pass custom serialization and deserialization functions to the local storage, file system, and OPFS persistence adapters. This is an intermediate solution until we can separate the persistence adapters into their own packages (planned for v1). Thanks to @michaelkrog for highlighting the performance issues with superjson. Performance is definitely a crucial factor in selecting a library for this purpose. I’m still undecided on whether this functionality should be integrated into signaldb or would be more appropriate in a custom persistence adapter. Additionally, we need to consider how to handle multiple versions of different serialized data formats and possibly migrate them during upgrades.
I see 2 different angles to migration here.

**When a replicated source changes its model**

I noticed in one of Linear's videos that what they do when migration is needed (e.g. models change server-side) is simply pulling everything again. Even if that means loading a lot of data again, for replicated cases I prefer this direction. I can't even imagine the complexity of synchronizing proper client-side migrations with the migrations server-side.

**When the persistence layer changes its storage format**

I think your comment was from this viewpoint, namely the migration needed when the persistence layer in signaldb changes how it stores data. As I see it, signaldb is the in-memory database, full stop! 😀 signaldb by itself needs no migration, because it has no persistence. Persistence layers and replication are, from my point of view, just "add-ons", and migration lies with them.

**My recommendation**

For replication migration (when a replication source changes its model), I would simply explain in the documentation that the preferred method is to re-pull everything on the client. Down the road that could come with some recommendations on how to keep track of it. What we plan to do in Previsto is to add the version of the REST API to the name of the persistence layer. When the application is started and the version does not match the last used API version, we wipe everything and pull everything again.

For a change to the persistence layer's data format, I think your intermediate solution is quite flexible and can go a long way. It leaves the migration up to the consumer of signaldb, but in reality I would imagine that most users would be using signaldb with replication, and then they would probably always be able to just re-pull everything.

Guess what I am saying is: let's just wait and see what is needed.
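The version-keyed persistence idea above could be sketched roughly like this (the key names, the `API_VERSION` constant, and the in-memory storage stand-in are all illustrative assumptions, not code from Previsto):

```javascript
// Sketch: tie persistence keys to the backend API version and wipe on mismatch.
// API_VERSION and the key names are illustrative assumptions.
const API_VERSION = '2024-06'

// Minimal in-memory stand-in for window.localStorage, so the sketch is runnable
const backing = new Map()
const storage = {
  getItem: (k) => (backing.has(k) ? backing.get(k) : null),
  setItem: (k, v) => backing.set(k, String(v)),
  clear: () => backing.clear(),
}

function persistenceKey(collectionName) {
  // Every persisted collection is stored under a version-suffixed key
  return `${collectionName}-v${API_VERSION}`
}

function wipeStaleData(store) {
  // If the last-seen API version differs, drop all persisted data;
  // the replication layer then simply re-pulls everything from the server
  if (store.getItem('last-api-version') !== API_VERSION) {
    store.clear()
    store.setItem('last-api-version', API_VERSION)
  }
}

wipeStaleData(storage)
```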
I just released the custom serialization (#832) in
I totally agree with you. It isn't worth running and maintaining backend database migrations on the client 😀
Yes, exactly. It’s only an issue in a client-only application with persistent storage when the format in which the data is saved changes.
Absolutely! I don't want to have any migration related stuff in the core package 😀
That's actually a neat idea! It even makes it possible to send persistent data from a previous version that hasn't yet been committed to an endpoint that accepts the previous version, before deleting and pulling the new data.
You're right. I think this is our best option so far 🙂
If objects in collections contain Date objects, they are converted to strings when read back from a persistence adapter.
This is true for the OPFS, localStorage, and filesystem adapters. All of them use plain JSON.stringify and JSON.parse, which convert Dates to strings.
A simple solution could be to add a reviver function to JSON.parse:
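The snippet from the original issue isn't preserved here; a minimal sketch of such a reviver, which revives any string matching the ISO 8601 format produced by `Date.prototype.toISOString()`, might look like this:

```javascript
// Matches ISO 8601 timestamps as produced by Date.prototype.toISOString()
const isoDatePattern = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z$/

function dateReviver(key, value) {
  // Revive any string that looks like an ISO date back into a Date instance
  if (typeof value === 'string' && isoDatePattern.test(value)) {
    return new Date(value)
  }
  return value
}

const restored = JSON.parse('{"createdAt":"2024-01-15T10:30:00.000Z"}', dateReviver)
// restored.createdAt is now a Date instance
```

The downside of this pattern-matching approach is that any string that merely looks like a date gets revived, which is why a tagged representation (as suggested below the "Suggested solution" heading) is more robust.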
Better yet, it could be encapsulated in a new class that all persistence adapters can use.
Suggested solution
I suggest that serializing Dates to JSON should resolve to an object like this:
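The exact shape from the original issue isn't preserved here; one common convention (used in a similar form by EJSON and MongoDB Extended JSON) wraps the ISO string in a tagged object, where the `$date` key is an illustrative assumption:

```javascript
// One possible tagged representation of a Date; the "$date" key is an
// illustrative assumption, not necessarily the shape from the original issue
const serializedDate = { $date: '2024-01-15T10:30:00.000Z' }
```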
Doing that requires a replacer function for JSON.stringify and a reviver for JSON.parse. A pseudo-example follows.
stringify
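The original example isn't preserved here; a sketch of such a replacer, assuming the tagged `$date` shape, could look like this. Note the subtlety that `JSON.stringify` calls `Date.prototype.toJSON` before the replacer sees the value, so the raw value must be inspected via `this[key]` (which requires a regular function, not an arrow function):

```javascript
// Replacer: wrap Date instances in a tagged object.
// JSON.stringify invokes Date.prototype.toJSON *before* calling the replacer,
// so `value` is already a string here; `this[key]` still holds the raw Date.
function dateReplacer(key, value) {
  if (this[key] instanceof Date) {
    return { $date: this[key].toISOString() }
  }
  return value
}

const json = JSON.stringify(
  { createdAt: new Date('2024-01-15T10:30:00.000Z') },
  dateReplacer,
)
// json === '{"createdAt":{"$date":"2024-01-15T10:30:00.000Z"}}'
```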
parse
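The matching reviver (again assuming the tagged `$date` shape from the sketch above; the original example isn't preserved here) unwraps the tagged objects back into Date instances:

```javascript
// Reviver: unwrap tagged objects back into Date instances.
// JSON.parse calls the reviver bottom-up, so by the time we see "createdAt",
// its value is the already-parsed { $date: "..." } object.
function dateReviver(key, value) {
  if (value !== null && typeof value === 'object' && typeof value.$date === 'string') {
    return new Date(value.$date)
  }
  return value
}

const doc = JSON.parse(
  '{"createdAt":{"$date":"2024-01-15T10:30:00.000Z"}}',
  dateReviver,
)
// doc.createdAt is a Date instance again
```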