Usually the (minimum) residence time (how long data has to be stored) is fixed by law or in lengthy business discussions - so everything older should be archived, and nothing newer is allowed to be archived. Based on this, I doubt that anyone has really needed a detailed space analysis.
Because of the many dependencies between the different parts of an archiving object, it's not so easy to fetch all the data to be archived -> there is a test mode, where objects are only gathered and analyzed.
From these test runs in transaction SARA you can see the number of archivable entries per object (-> per table); with the average line size you can calculate the space, but even the plain count (1458 of 57899 documents) is already helpful. In this example roughly 2.5% could be deleted -> you would free about 2.5% of the currently used space -> or, put differently, gain room for another 1458 documents.
But it can be easier! If all documents are closed correctly, you can estimate from the total document volume per year and the residence time: if you create 1 million docs per year and have to keep them for 2 years, everything above roughly 2.2 million can be deleted - the extra 0.2 million covers open documents, which have to be closed before they can be deleted.
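To make the arithmetic of both estimates explicit, here is a minimal sketch using the example figures from above; the average row size is an assumption you would take from DB02 or table statistics, not a real measurement:

```python
# Back-of-the-envelope estimate using the example figures from above.
# avg_row_bytes is an assumption - take it from DB02 / table statistics.

archivable_docs = 1458              # archivable entries from the SARA test run
total_docs = 57899                  # total entries in the table
avg_row_bytes = 2048                # assumed average row size incl. index share

share = archivable_docs / total_docs
freed_mb = archivable_docs * avg_row_bytes / 1024 ** 2
print(f"archivable share: {share:.1%}, roughly {freed_mb:.0f} MB freed")

# Alternative estimate from annual volume and residence time:
docs_per_year = 1_000_000           # example annual document volume
residence_years = 2                 # required residence time
open_buffer = 0.2 * docs_per_year   # rough allowance for still-open documents

keep_docs = docs_per_year * residence_years + open_buffer
print(f"everything above ~{keep_docs:,.0f} documents is a candidate for archiving")
```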
Situation: I am preparing an approach document to present to management.
Problem: I want to show management the total space likely to be freed if we start the archiving process, broken down by archiving object and by year, so that we can show how much space a particular business object is taking and which archiving objects we should start with. I don't want to run the archiving objects in test mode in PRD. Is there any other alternative for finding this out, apart from adding up the sizes of the tables a particular archiving object hits?
There is no easy tool to get these numbers. But on the other hand, you don't need exact numbers; estimates will do.
It's a good idea to start with the biggest objects: go to DB02 and run a detailed analysis where you select the biggest tables -> the corresponding archiving objects should be your main focus (at least for the beginning).
For the biggest tables in each object, count the entries per year (or month, or whatever periods you are interested in). Most tables have a creation date you can use for this; otherwise go by the number range. For a few sample numbers you can look up the creation date, the rest is estimation again. Then you will have an idea of which volume was created in which time frame.
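A hedged sketch of that counting step: assuming you have exported the creation-date column of a big table to a flat file (the file name "table_export.csv" and the column name "ERDAT" are only placeholders for illustration), you could aggregate entries per year like this:

```python
import csv
from collections import Counter

# Count table entries per creation year from a flat export (e.g. SE16 -> CSV).
# "table_export.csv" and the date column "ERDAT" are assumptions - use the
# actual creation-date field of the table you are analyzing.
per_year = Counter()
with open("table_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        per_year[row["ERDAT"][:4]] += 1   # dates exported as YYYYMMDD

for year in sorted(per_year):
    print(year, per_year[year])
```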
Still, you need some test archive runs (in PRD or an (old) copy, at least for a sample amount of data): you need to know which percentage of the documents can technically be archived and how many will stay because they were never closed. That's critical information (maybe 90% will stay in the system) and can only be determined by SARA test runs - once you have identified the missing object status, you can select on it directly, but in the beginning you need the archive run.
With the volume per time frame and the percentage that can be deleted, you should be able to give estimates based on the current total object size. Make clear that you are talking about estimates: every single object will be checked for the correct status, and until this has been done (in a test run), no one can give exact numbers.
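Putting the pieces together, a minimal sketch of the management-facing estimate per year; every figure below is a placeholder, and the archivable share in particular has to come from your own SARA test runs:

```python
# Combine entries per year, an assumed average row size and the archivable
# percentage (from the SARA test runs) into a freed-space estimate per year.
# All figures here are placeholders for illustration, not measurements.
entries_per_year = {2018: 950_000, 2019: 1_050_000, 2020: 1_100_000}
avg_row_bytes = 2048        # assumed average row size
archivable_share = 0.85     # e.g. 15% of documents are still open

for year, count in sorted(entries_per_year.items()):
    freed_gb = count * archivable_share * avg_row_bytes / 1024 ** 3
    print(f"{year}: ~{freed_gb:.1f} GB could be freed")
```

Repeat the same calculation per archiving object and you have the breakdown by object and year that management is asking for - clearly labelled as an estimate.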