So on Quora there is an author named Franklin Veaux. He writes answers and comments on a lot of different topics and generally seems to have a good head on his shoulders and knows what he is talking about. One such topic that came up was the future of computer architectures with regard to system memory and persistent storage. Today's computer systems use smaller amounts of primary RAM as working memory, which is fast but volatile, and larger amounts of persistent storage, which is slow but is… well, persistent (via things like removable discs and magnetic rotational or solid-state drives).
As memory technologies advance, it is conceivable that we will eventually end up with one that has the density (and persistence) of today's storage and the speed of volatile RAM. When that happens, the distinction between system RAM and persistent storage may start to blur, or go away entirely.
Right now the CPUs in your computer can address (access) a certain amount of memory. At the time of this writing we have AMD Threadripper processors that can address up to two terabytes (2,048 gigabytes) of RAM. But today's desktop and laptop systems usually have much less physical RAM than that; somewhere between 16 and 64 GB is typical in my experience.
In the normal course of operation, we routinely load stuff from persistent storage into memory and occasionally save it back out again, from the computer's operating system (OS) to pictures and documents. When you first turn your computer on and boot it, it loads its OS from persistent storage into memory and then executes it. When you launch an app, the same thing happens, just on a different scale.
But imagine if/when we end up with CPUs that can directly address a petabyte of memory (a petabyte is 1,024 terabytes), and we also have that new memory technology giving us one petabyte of fast, persistent storage. With that much persistent memory, you may not need a separate “disk drive” at all. Your computer would just be able to directly address all of it as one big chunk of memory.
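To put that in perspective, here is a quick back-of-the-envelope calculation (just a sketch; the 48-bit and 57-bit figures are the virtual address sizes of today's x86-64 CPUs):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 1 petabyte = 1,024 terabytes = 2^50 bytes */
    uint64_t one_pb = 1ULL << 50;
    printf("1 PB = %llu bytes\n", (unsigned long long)one_pb);

    /* A flat address space that large needs at least 50 usable address bits.
       For comparison, today's x86-64 CPUs expose 48-bit virtual addresses
       (256 TB), or 57 bits (128 PB) with 5-level paging, so a petabyte of
       directly addressable memory is not a huge architectural leap. */
    return 0;
}
```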
This would require a paradigm shift in how we think about using computers today. For more than 60 years we have been using computers the same way: load something from slower persistent storage into faster volatile system memory, do something with it, and then write it back out to the persistent storage.
In the old days, the volatile memory could have been core memory and the persistent storage could have been 7-track reel-to-reel tape. Today we have high-speed RAM chips and SSDs, and we organize things using the paradigm of directories and files, but the process remains the same: if we want to edit a document, we open it (copy it from persistent storage into volatile memory), edit it using the faster volatile memory, and then save it (copy it from system memory back to storage again).
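In code, that open-edit-save cycle looks roughly like the following minimal C sketch (the file name and the one-byte "edit" are just placeholders):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* "Open": copy the document from persistent storage into volatile memory. */
    FILE *f = fopen("document.txt", "rb");
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *buf = malloc(size > 0 ? size : 1);
    if (!buf) { fclose(f); return 1; }
    size_t n = fread(buf, 1, size, f);
    fclose(f);

    /* "Edit": every change happens in the fast, volatile copy. */
    if (n > 0) buf[0] = 'X';              /* placeholder edit */

    /* "Save": copy the edited buffer back out to persistent storage. */
    f = fopen("document.txt", "wb");
    if (!f) { free(buf); return 1; }
    fwrite(buf, 1, n, f);
    fclose(f);

    free(buf);
    return 0;
}
```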
But once we have eliminated the distinction between memory and storage, how does editing a document work? Where is it loaded from? Where is it saved to? I think that if we ever hit the point where we lose the distinction between system memory and persistent storage, and systems just come with one large pool of “memory,” we may still need to maintain the current load-edit-save way of thinking that assumes separate persistent storage.
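One possible answer, hinted at by today's persistent-memory programming models, is that a document is simply edited in place: the file is mapped into the address space and its bytes are modified directly, with no separate load or save step. Here is a rough sketch using plain mmap, assuming the file lives on a byte-addressable (DAX-style) persistent-memory device:

```c
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Open the document, assumed to live on byte-addressable persistent memory. */
    int fd = open("document.txt", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return 1; }

    /* Map it into the address space: there is no separate "load" step. */
    char *doc = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (doc == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* "Edit": write the bytes directly; the change lands in the persistent copy. */
    if (st.st_size > 0) doc[0] = 'X';     /* placeholder edit */

    /* Flush to make sure the change is durable. */
    msync(doc, st.st_size, MS_SYNC);

    munmap(doc, st.st_size);
    close(fd);
    return 0;
}
```

Here msync plays the role that "save" used to: it ensures the change is durable rather than copying the document anywhere.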
I imagine something where, behind the scenes, we would partition off a section of this new flat memory area (e.g. something “small” like 768 terabytes) and use it like an old-school RAM disk, presenting it to the user (and the OS?) as a separate, large, persistent storage device. This would likely make things easier for users, and for OSes as well, because it would let both take advantage of the new memory technology and architecture while it presents itself as the old one.
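We already have small-scale versions of this idea: on Linux, for example, memfd_create hands back a chunk of RAM that behaves like an ordinary file, and tmpfs mounts present memory to the OS as a regular filesystem. A partition of one big flat persistent memory could be dressed up the same way. A tiny, Linux-specific illustration (the name and the 1 MB size are arbitrary):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Ask the kernel for a region of RAM that behaves like an ordinary file. */
    int fd = memfd_create("pretend-disk", 0);
    if (fd < 0) { perror("memfd_create"); return 1; }

    /* Give it a size; a real partition might be terabytes, this one is 1 MB. */
    if (ftruncate(fd, 1 << 20) < 0) { close(fd); return 1; }

    /* Anything that speaks "file descriptor" can now treat this RAM as storage. */
    const char msg[] = "hello from a RAM-backed disk\n";
    write(fd, msg, sizeof msg - 1);

    close(fd);
    return 0;
}
```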
This could help ease the transition into the future, allowing us to take baby steps until we finally get there. But what does that future actually look like when we stop thinking in terms of drives, directories, and files? Or are we stuck with this type of thinking forever?