Unusual Domain Storage Size at the File Level

@indreias

I will do as per your advice. I will add a temporary disk until we find the culprit of this issue.

Thanks again for all the assistance.

I will execute the command and keep you posted.

Regards,
Jay

@indreias

There is good news and there is bad news.

Running SCAN ALL found the LEAKS, which occupy 500+ GB.

I tried to run SCAN ALL PURGE, but unfortunately it did not remove the leaks. Please see the attached CLI logs for reference.

CLI_SCAN_ALL_PURGE.txt (9.0 KB)

Regards,
Jay

@indreias

Update:

Good news: I have reclaimed all my disk space now.

Executing SCAN ALL PURGE did not do anything, so I then tried COMPACT ALL FORCED, and it did the job.
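
For reference, the exact commands I ran from the Axigen CLI were:

  SCAN ALL PURGE       (reported the leaks but freed no space)
  COMPACT ALL FORCED   (this is what actually reclaimed the disk space)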

Before closing this topic: how did this happen, and how can I prevent it in the future?

Regards,
Jay

Hello @Jay

Good to know that you managed to solve your problem. To get the current situation, you first have to run SCAN ALL CLEARCACHE and, after that, a new SCAN ALL.
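
For reference, that is just these two commands, one after the other, from the Axigen CLI (assuming you are already connected and authenticated):

  SCAN ALL CLEARCACHE
  SCAN ALL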

There is nothing I could suggest doing in the future to prevent these situations, as they are not usual (maybe configure a weekly cron job based on the SCAN ALL CLEARCACHE + SCAN ALL CLI commands that alerts you in case some space is consumed by leaks, so that you can apply the purge command; see the sketch below).
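
A minimal sketch of such a cron job follows. It assumes the CLI listens on the default 127.0.0.1:7000, that the login sequence is USER admin followed by the password, that the password is kept in a (hypothetical) /root/.axigen-cli-pass file, and that leak lines in the SCAN ALL output contain the word "leak" - please verify all of these against your own installation before relying on it:

  #!/bin/sh
  # Weekly Axigen leak check (sketch, not an official tool).
  PASS=$(cat /root/.axigen-cli-pass)   # hypothetical credentials file

  OUT=$(
    {
      printf 'USER admin\n';          sleep 1
      printf '%s\n' "$PASS";          sleep 1
      printf 'SCAN ALL CLEARCACHE\n'; sleep 10
      printf 'SCAN ALL\n';            sleep 120   # give the scan time to finish
      printf 'QUIT\n'
    } | nc 127.0.0.1 7000
  )

  # Mail an alert only when the fresh scan reports leaked space.
  if printf '%s\n' "$OUT" | grep -qi 'leak'; then
    printf '%s\n' "$OUT" | mail -s 'Axigen SCAN ALL: leaks detected' admin@example.com
  fi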

What you have to be sure of is not to configure storages that may consume / exceed your entire disk space.

BR,
Ioan

@indreias

Thank you, Ioan, for all the assistance.

Yes, I have now readjusted all my containers to values appropriate for my disk space.

Regards,
Jay

Hello @Jay,

I really hope you have made the mentioned changes carefully, as readjusting the storages to lower values is usually not supported.

If you have double-checked that the maxFileSize value configured in the storages is bigger than the size of any storage file in that storage unit after the compact was executed, then you may have dodged the bullet.
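
To illustrate with made-up numbers: if the largest storage file in a container is 1.4 GB after the compact, lowering maxFileSize from 2 GB to 1.5 GB is fine, but lowering it to 1 GB would leave an existing file bigger than the configured maximum, which is exactly the unsupported case.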

Anyway, you have earned the new bravery badge that I have just created.

BR,
Ioan


@indreias

Yes, I did. Before changing the sizes of my containers, I double-checked each one and made sure it would not be smaller than the current container size.

Thanks again for the feedback.

Regards,
Jay


@indreias

One last question I need to ask, which is very critical.

How can I be highly confident that I did not lose any critical data / emails when I cleared the 500+ GB of LEAKS?

Because right now I am basing everything on the Axigen algorithm to identify which data is good and which is LEAKS.

I am asking this because of what you can see at the start of my thread:

My capacity in WebAdmin, based on per-account sizes, is: 214 GB
My file system size is: 665-700 GB
Leaks detected: 500+ GB

Then, when I finished removing the LEAKS, my file system usage was only 135 GB.
What happened to the difference between the actual size in WebAdmin (214 GB) and the size after removing the leaks (135 GB)?

That is a difference of about 80 GB.

Hoping to get clarification on this matter.

Regards,
Jay

Hello @Jay

The sum of the accounts' mailbox sizes is not necessarily equal to the disk space used, as Axigen internally stores a single copy of a message sent to multiple recipients (more info here).
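
To illustrate with made-up numbers: a 10 MB message delivered to 20 local recipients adds 200 MB to the sum of the mailbox sizes but only 10 MB to the disk usage. So a per-account total of 214 GB sitting in 135 GB on disk simply means that roughly 80 GB of that total consists of deduplicated copies.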

From the SCAN ALL KB referenced in a previous reply, we know that leaks are messages found in the storages that are not referenced from any mailbox, so there will definitely be no mechanism (other than repair accounts ...) to scan them and try to retrieve any useful data from there.

As always, you have to back up your data in case something goes wrong, and here I'm referring to backup via FUSE or FTP, as that way you have access to each internal Axigen object (like messages, contacts, etc.) that may be recovered if needed (for instance, restoring a particular folder and its content that one of your users wrongly permanently deleted).

Note: in case you restore from a previous FUSE or FTP backup, the message deduplication cannot be recreated, so in case of a full domain restore (not recommended, but possible) the disk space will be more or less in sync with the sum of the mailbox sizes.

HTH,
Ioan

@indreias

Thanks, I appreciate the detailed answer. And thank you for all the assistance.

Regards,
Jay