By Alfredo Sanchez (alfy) on Mar 13, 2011 3:04 PM.
Path size
Hi Toby,

I was wondering whether there is a way, or whether you have thought of a mechanism, for retrieving the total size of a path instance. I've noticed that through the getStats method in the ConsensusProcessor interface MckoiDDB exposes various stats, and that the SDB implementation reports the size of the current snapshot.

For instance, take a path "test" of kind "com.mckoi.sdb.SimpleDatabase": is there a way to get the total size of the instance, including historical snapshots?

By Tobias Downer (toby) on Mar 13, 2011 4:48 PM.
You can get at the consensus function stats by using the NetworkProfile object; see 'Code Example 1' above.

That will return the stats string that the consensus function provides (if any). However, further analysis (such as looking at historical snapshots) would require a new function. You could do something like this for determining transaction size over historical snapshots; see 'Code Example 2' above.
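The 'Code Example 2' referenced here is not preserved in this archive. As a rough illustrative stand-in (not the real MckoiDDB calls; the "snapshot_size=" key is a made-up stat name, and the stats strings are fabricated inputs), the idea of summing a reported size over each historical snapshot might be sketched like this:

```java
import java.util.List;

// Illustrative stand-in for summing a reported size over historical
// snapshots. The "snapshot_size=" key and the sample stats strings are
// hypothetical, not documented MckoiDDB output.
public class HistoricalSize {

    // Pulls the numeric value of "snapshot_size=" out of a
    // comma-separated stats string, or 0 if the key is absent.
    static long snapshotSize(String stats) {
        for (String entry : stats.split(",")) {
            if (entry.startsWith("snapshot_size=")) {
                return Long.parseLong(
                        entry.substring("snapshot_size=".length()));
            }
        }
        return 0L;
    }

    // Sums the reported size over every historical snapshot.
    static long totalReported(List<String> statsPerSnapshot) {
        long total = 0;
        for (String s : statsPerSnapshot) total += snapshotSize(s);
        return total;
    }

    public static void main(String[] args) {
        List<String> history = List.of(
                "snapshot_size=104857600,node_count=4210",
                "snapshot_size=115343360,node_count=4615");
        System.out.println("reported total: " + totalReported(history));
    }
}
```

As noted below, a total produced this way can greatly overstate physical usage, since snapshots share unchanged tree nodes.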

But note that the size reported may not correspond to the physical media actually used, because of sparse nodes and data mirroring. For example, transaction 1 may report 100MB and transaction 2 may report 110MB, but 90MB might be shared between the two, so only ~120MB of physical media is used. There are some tree analysis functions internally that could be better exposed in the API.
By Alfredo Sanchez (alfy) on Mar 13, 2011 5:06 PM.
Thanks for the fast reply. You mention transaction data (if I have understood correctly): does this mean memory allocated during the transaction, or actual physical storage in the blocks?
By the way, the reason I'm asking is that I'd like to implement a diagnostic tool that reports several fact sheets about paths, for administration purposes. It would be nice to have a standard format for the information returned by getStats (not that all the information must be identical: different models report different data) — for example, a key1=value1,key2=value2,...,keyN=valueN style format.
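A parser for the key=value format proposed here could be very small. This is a sketch of the idea only; 'StatsFormat' is a hypothetical helper, not part of MckoiDDB, and the sample keys are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a parser for the proposed "key1=value1,key2=value2" stats
// format. 'StatsFormat' is a hypothetical helper, not a MckoiDDB class.
public class StatsFormat {

    // Splits a comma-separated key=value string into an ordered map.
    // Entries without '=' are skipped; keys and values are trimmed.
    public static Map<String, String> parse(String stats) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String entry : stats.split(",")) {
            int eq = entry.indexOf('=');
            if (eq > 0) {
                out.put(entry.substring(0, eq).trim(),
                        entry.substring(eq + 1).trim());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> stats =
                parse("snapshot_size=104857600,node_count=4210");
        System.out.println(stats);
    }
}
```

An external tool could then read whichever keys a given consensus function happens to publish, without every model having to report identical fields.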

In this context, an external tool would just query the path, not interact with it: how can the reported amount of data fluctuate so much? If need be, an approximation would also be acceptable, no?
By Tobias Downer (toby) on Mar 13, 2011 5:54 PM.
I think even an approximation of backed media consumption would be difficult unless you did some really deep analysis. This subject can get very deep, but basically it comes down to this: a tree is used to represent a transaction snapshot, and the branch and leaf nodes of that tree may either have been created when the transaction made changes or been inherited from previous versions. When you traverse a tree snapshot you cannot determine how many other snapshots also share the tree nodes you are looking at. It's important that snapshots can share nodes; otherwise the entire database would need to be rewritten every time a change was made.
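The node-sharing point above can be made concrete with a toy model (the node names, sizes, and structure here are purely illustrative, not MckoiDDB internals): each snapshot references a set of tree nodes, unchanged nodes are shared rather than copied, and so per-snapshot logical sizes add up to more than the store actually holds:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of why snapshot sizes don't add up on disk: snapshots share
// unchanged tree nodes. All names and sizes here are illustrative only.
public class SnapshotSharing {

    // node id -> node size in MB; the physical store holds each node once
    static final Map<String, Integer> STORE = new HashMap<>();
    static {
        STORE.put("shared", 90);  // data common to both snapshots
        STORE.put("only1", 10);   // nodes unique to snapshot 1
        STORE.put("only2", 20);   // nodes unique to snapshot 2
    }

    // Logical size: sum of every node reachable from one snapshot.
    static int logicalSize(List<String> nodes) {
        return nodes.stream().mapToInt(STORE::get).sum();
    }

    // Physical size: each stored node counted once, no matter how many
    // snapshots reference it.
    static int physicalSize() {
        return STORE.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<String> snap1 = List.of("shared", "only1");
        List<String> snap2 = List.of("shared", "only2");
        System.out.println("snapshot 1 logical: " + logicalSize(snap1));
        System.out.println("snapshot 2 logical: " + logicalSize(snap2));
        System.out.println("physical store:     " + physicalSize());
    }
}
```

This reproduces the earlier arithmetic: the snapshots report 100MB and 110MB, yet with 90MB shared the store holds only 120MB. Traversing either snapshot alone cannot reveal which of its nodes are shared.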
By Alfredo Sanchez (alfy) on Mar 13, 2011 6:07 PM.
basically your answer is that it's not possible to determine the size of a path, not even approximately. am I correct?
By Tobias Downer (toby) on Mar 13, 2011 6:33 PM.
You can determine the logical size of a snapshot (the size of all data addressable in the snapshot), but not the space a snapshot consumes in the backed media.
By Alfredo Sanchez (alfy) on Mar 13, 2011 6:37 PM.
now it's much clearer! basically, that's all that's needed... I hadn't understood it well before, but getting the logical size of the snapshot, spread over the network, is exactly what I need. in terms of cloud computing, the network IS the backed media...
The text on this page is licensed under the Creative Commons Attribution 3.0 License. Java is a registered trademark of Oracle and/or its affiliates.
Mckoi is Copyright © 2000 - 2021 Diehl and Associates, Inc. All rights reserved.