The above file describes a ωFS i-node. From top to bottom...
version: version number of the i-node format, for the (very unlikely) event of having to extend the file structure in backwards-incompatible ways, e.g. by adding keys.
filename: filename as appearing in the mounted volume.
irisname: name of the file in which this JSON data appears.
sha1: SHA-1 hash of the target file.
stat: Standard POSIX stat. Note that uid and gid are inherited from the user running the ωFS Server.
file-data: An array of virtual block identifiers. Here, the file is stored in only one virtual block, therefore the array is of length 1.
virtual-blocks: A virtual block is basically a collection of real blocks. All real blocks in the same virtual block have the same contents (this is why "size" and "sha1" are attributes of the virtual block itself). In this example the file and the virtual block have the same size and the same sha1 because there is only one (virtual) block of data.
real-blocks: lists all the real blocks. Each real block has a storage-point identifier and an encryption identifier (possibly null). The encryption identifier refers to a particular encryption scheme; this can be decided at the real-block level, so data stored in unsafe storage points will be encrypted. Also note that you are free to use more than one encryption method, say GnuPG and AES-256 together, as long as you map each uuid to the right encryption method; all this is meant to be automatically managed, of course. The reason for having several real blocks for the same virtual block is that this is how data duplication is implemented. If a storage point goes down (or the data stored there is corrupted), the corresponding real block's data will not be available, but the contents of the virtual block itself can be retrieved from the other real blocks. And as I explained in the original email, a daemon can routinely check the availability and integrity of each real block independently and take duplication measures when required.
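To make the field list above concrete, here is a hypothetical i-node sketched as a Python dict; the key names follow the descriptions above, but the identifiers, hashes, and exact layout are invented for illustration, not the authoritative ωFS format:

```python
import json

# Hypothetical i-node matching the fields described above; all identifier
# values here are made up for illustration.
inode = {
    "version": 1,
    "filename": "report.txt",              # name in the mounted volume
    "irisname": "a1b2c3d4.json",           # file holding this JSON data
    "sha1": "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12",
    "stat": {"size": 4096, "uid": 1000, "gid": 1000, "mode": 0o100644},
    "file-data": ["vb-0001"],              # one virtual block -> array of length 1
    "virtual-blocks": {
        "vb-0001": {
            "size": 4096,                  # same size/sha1 as the file itself,
            "sha1": "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12",  # since there is one block
            "real-blocks": [
                {"storage-point": "sp-local", "encryption": None},
                {"storage-point": "sp-cloud", "encryption": "gpg-key-uuid"},
            ],
        }
    },
}

print(json.dumps(inode, indent=2))
```

Note how the two real blocks carry the same contents but different encryption identifiers: the local copy is stored in the clear, the cloud copy maps its uuid to a GnuPG key.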
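The daemon's per-real-block integrity check can be sketched like this; `read_real_block` and the storage-point names are hypothetical stand-ins, since the actual ωFS fetch/decrypt machinery isn't shown here:

```python
import hashlib

def healthy_replicas(virtual_block, read_real_block):
    """Return the real blocks whose contents match the virtual block's sha1.

    `read_real_block` is a hypothetical callback that fetches (and, if
    needed, decrypts) one real block's bytes, returning None when its
    storage point is down.
    """
    good = []
    for rb in virtual_block["real-blocks"]:
        data = read_real_block(rb)
        if data is None:
            continue  # storage point unavailable
        if hashlib.sha1(data).hexdigest() == virtual_block["sha1"]:
            good.append(rb)  # contents verified intact
    return good

# Toy run: two replicas of the same virtual block, one corrupted.
payload = b"hello"
vb = {
    "sha1": hashlib.sha1(payload).hexdigest(),
    "real-blocks": [
        {"storage-point": "sp-a", "encryption": None},
        {"storage-point": "sp-b", "encryption": None},
    ],
}
stores = {"sp-a": payload, "sp-b": b"corrupt"}
ok = healthy_replicas(vb, lambda rb: stores[rb["storage-point"]])
print([rb["storage-point"] for rb in ok])  # → ['sp-a']
```

When the healthy-replica count drops below the desired duplication level, the daemon would re-copy the block to a fresh storage point.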
This tells you a bit more about how many files can be stored on ωFS: as many as the number of metadata files you can store on a hard drive. A quick check on my calculator says that my laptop could index 200 million files. All this is independent of the actual size of the files. Each file in your ωFS volume can be as big as you desire (you can read and write files one virtual block at a time).
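The calculator figures aren't shown in the text; under one plausible set of assumptions (a small JSON i-node of roughly 2 KB, and about 400 GB of free space on the laptop's drive, both assumed here), the arithmetic comes out at the quoted number:

```python
# Back-of-the-envelope capacity check (assumed figures, not from the original text):
inode_size_bytes = 2_000          # ~2 KB per JSON i-node metadata file
free_space_bytes = 400 * 10**9    # ~400 GB free on a laptop drive

print(free_space_bytes // inode_size_bytes)  # → 200000000, i.e. 200 million files
```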
So now you are going to say "This is all nice, Pascal, but you are building a file system in which the i-nodes are encoded as JSON data stored in text files; isn't that inefficient?", to which I reply "Get lost!" :-p
There is no need for a v-node JSON file format: the position of the target file in the mounted volume's file tree is deduced from where the i-node file is stored (ωFS's v-nodes correspond to Iris's v-nodes).
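One way this deduction could work, assuming (hypothetically) that the i-node files mirror the mounted tree under some metadata root with a ".json" suffix per file; the paths and roots below are invented for illustration:

```python
from pathlib import PurePosixPath

def mounted_path(inode_path: str, meta_root: str, mount_root: str) -> str:
    """Map an i-node file's location to the target file's path in the volume.

    Sketch only: assumes the i-node tree mirrors the mounted tree and each
    i-node file is named "<filename>.json".
    """
    rel = PurePosixPath(inode_path).relative_to(meta_root)
    return str(PurePosixPath(mount_root) / rel.with_suffix(""))

print(mounted_path("/iris/meta/docs/report.txt.json", "/iris/meta", "/mnt/omegafs"))
# → /mnt/omegafs/docs/report.txt
```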