What is using up so much postgres disk space?

I ran into an issue where postgres ran out of disk space. I am using Docker with the image graphprotocol/graph-node:v0.25.0. I'm otherwise doing what I perceive to be fairly tame operations: saving data while running the node and being strict about what I save. Further, my graph-node service and my postgres service are on different machines, and it's the latter that ran out of disk.

My understanding is that I am streaming blocks as they come in, taking actions on them according to the subgraph's .ts files, and then moving on. Is this wrong? Is my graph-node actually indexing and storing every block's data while running this service? If so, is there a way to make that more sparse? I would expect that, once the node has read a block, it would move on to the next one and not save it. Where can I find information in the docs on what exactly the graph node saves to postgres?
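For anyone hitting the same problem, a query along these lines (standard pg_catalog statistics views, nothing graph-node specific) should at least show which tables are actually taking the space:

    -- Top 20 tables by total size (table + indexes + TOAST)
    SELECT schemaname,
           relname,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM pg_catalog.pg_statio_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 20;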

Thanks!


Can you share the error code?


Just a guess from experience: check your arrays. If you have a table with millions of records and even one record has an array column with many values, it can cause every record to be stored as a large object (LOB), which bloats the table.
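If arrays are the culprit, the oversized values end up in the table's TOAST relation (Postgres's out-of-line storage), which you can measure separately. A rough sketch of a query to see which tables carry the most TOAST data, using only the standard pg_class catalog:

    -- Top 20 tables by TOAST size (out-of-line storage for oversized values)
    SELECT c.relname                                               AS table_name,
           pg_size_pretty(pg_table_size(c.oid))                    AS table_size,
           pg_size_pretty(pg_total_relation_size(c.reltoastrelid)) AS toast_size
    FROM pg_class c
    WHERE c.relkind = 'r'            -- ordinary tables only
      AND c.reltoastrelid <> 0       -- tables that have a TOAST relation
    ORDER BY pg_total_relation_size(c.reltoastrelid) DESC
    LIMIT 20;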
