

Granularity of Replication

I will cache whole files, which makes sense: how would you feel if, halfway through a file during disconnection, you found out that the rest was missing? Chunks allow better use of bandwidth, especially with large files (only the needed parts are transferred), but they are harder to work with: which parts of a file will be needed? Under the assumptions made in 3.1.4 and 3.1.5, namely that files are read or written in their entirety and that files are small (10-22K), whole-file replication is a sensible choice. Even more so since, in a UNIX environment, files are nothing more to the system than a raw stream of bytes.
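To make the tradeoff concrete, here is a minimal sketch of the two granularities (Python, with a toy in-memory server; all names and the chunk size are illustrative, not taken from any of the systems discussed here):

    CHUNK = 64 * 1024  # hypothetical chunk size, for illustration only

    class Server:
        """Stand-in for a remote file server (illustrative, not a real API)."""
        def __init__(self, files):
            self.files = files  # path -> bytes

        def read(self, path):
            return self.files[path]

        def read_chunk(self, path, index):
            return self.files[path][index * CHUNK:(index + 1) * CHUNK]

    def cache_whole_file(server, cache, path):
        # Whole-file granularity: a single transfer, and the cached copy
        # is guaranteed to be complete during disconnection.
        cache[path] = server.read(path)

    def cache_chunk(server, cache, path, offset):
        # Chunk granularity: only the needed part is transferred, which
        # saves bandwidth on large files; but predicting which chunks
        # will be needed while disconnected is exactly the hard part.
        index = offset // CHUNK
        cache.setdefault(path, {})[index] = server.read_chunk(path, index)

Note how the whole-file variant never faces the "missing rest of the file" problem: the cache either holds all of a file or none of it.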

The alternative would be structured files (or objects) whose structure is known (i.e., semantic knowledge of the objects [27], [28]) and/or supported by the file system.

Coda and Ficus use whole-file replication. Bayou uses a relational database, so the granularity could be rows of tables (partial replication by means of (updatable!) views [54]), but for now it replicates the full database!
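As a sketch of what row granularity through an updatable view could look like, the following uses SQLite purely as a stand-in (Bayou does not use SQLite, and the table and view names are invented): an INSTEAD OF trigger makes the view writable by mapping updates on the partial replica back onto the base table.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE items(id INTEGER PRIMARY KEY, owner TEXT, data TEXT);
        INSERT INTO items VALUES (1, 'alice', 'a1'), (2, 'bob', 'b1');

        -- A view exposing only the rows one replica is interested in.
        CREATE VIEW alice_items AS
            SELECT id, data FROM items WHERE owner = 'alice';

        -- An INSTEAD OF trigger makes the view updatable: writes against
        -- the partial replica are applied to the underlying table.
        CREATE TRIGGER alice_items_upd INSTEAD OF UPDATE ON alice_items
        BEGIN
            UPDATE items SET data = NEW.data WHERE id = NEW.id;
        END;
    """)
    con.execute("UPDATE alice_items SET data = 'a2' WHERE id = 1")
    print(con.execute("SELECT * FROM items").fetchall())
    # -> [(1, 'alice', 'a2'), (2, 'bob', 'b1')]

The point is only that a relational model admits a finer, row-level replication granularity than a raw byte stream does; the actual Bayou system, as noted above, still replicates the full database.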

