1. Basic Memory Concepts:
- A memory unit is a collection of storage cells or devices used together.
- The memory unit stores binary information in the form of bits.
- Memory/storage is classified into 2 categories:
i) Volatile Memory
ii) Non-Volatile Memory
i) Volatile Memory:
- This loses its data when power is switched off.
ii) Non-Volatile Memory:
- This is permanent storage and does not lose any data when power is switched off.
Memory Access Methods:
i) Random Access:
- Main memories are random access memories, in which each memory location has a unique address.
- Using this unique address, any memory location can be reached in the same amount of time, in any order.
ii) Sequential Access:
- This method allows memory to be accessed only in a fixed sequence, as in magnetic tape.
iii) Direct Access:
- In this mode, information is stored in tracks, with each track having a separate read/write head.
Connection of the Memory to the Processor:
- Data transfer between the memory and the processor takes place through the use of two processor registers:
- MAR (Memory Address Register)
- MDR (Memory Data Register)
- If MAR is k bits long and MDR is n bits long, then the memory unit may contain up to 2^k addressable locations; for example, k = 16 gives 2^16 = 65,536 locations.
- During a memory cycle, n bits of data are transferred between the memory and the processor.
- This transfer takes place over the processor bus, which has k address lines and n data lines.
- The bus also includes the control lines Read/Write (R/W) and Memory Function Completed (MFC) for coordinating data transfers.
- Other control lines may be added to indicate the number of bytes to be transferred.
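To make the MAR/MDR transfer described above concrete, here is a minimal Python sketch of a toy memory unit with k address lines and n data lines. The `MemoryUnit` class and its `access` method are illustrative names, not any real hardware interface, and the control signals are reduced to a single R/W flag.

```python
# Toy model of the processor-memory connection: MAR holds the address
# (k bits), MDR holds the data (n bits), and the R/W line selects the
# operation. All names here are illustrative, not from the notes.

class MemoryUnit:
    def __init__(self, k, n):
        self.k = k                       # number of address lines
        self.n = n                       # number of data lines
        self.cells = [0] * (2 ** k)      # 2^k addressable locations

    def access(self, mar, mdr, rw):
        """One memory cycle: transfer n bits between memory and processor."""
        assert 0 <= mar < 2 ** self.k, "address must fit in k bits"
        if rw == "R":                    # Read: memory -> MDR
            return self.cells[mar] & ((1 << self.n) - 1)
        else:                            # Write: MDR -> memory
            self.cells[mar] = mdr & ((1 << self.n) - 1)
            return mdr

mem = MemoryUnit(k=16, n=8)              # 2^16 = 65,536 locations, 8-bit words
mem.access(mar=0x1234, mdr=0xAB, rw="W")
print(mem.access(mar=0x1234, mdr=0, rw="R"))   # prints 171 (0xAB)
```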
Memory Hierarchy:
- The memory hierarchy, and the comparisons between its levels, are depicted in the accompanying figures (not reproduced here).
2. Main Memory:
- The memory unit that communicates directly with the CPU, auxiliary memory, and cache memory is called main memory.
- It is the central storage unit of the computer system.
- It is a large and fast memory used to store data during computer operations.
- Main memory is made up of RAM and ROM.
i) RAM: Random Access Memory:
DRAM:
- Dynamic RAM is made of capacitors and transistors; it is called dynamic because the capacitor charge leaks away, so the cells must be refreshed periodically.
- It is slower and cheaper than SRAM.
- Dynamic RAMs are the predominant choice for implementing computer main memories.
- The high densities achievable in these chips make large memories economically feasible.
SRAM:
- Static RAM has a six-transistor circuit in each cell and retains data until power is switched off.
- Static RAMs are generally used only when very fast operation is the primary requirement.
- Their cost and size are adversely affected by the complexity of the circuit that realizes the basic cell.
- They are used mostly in cache memories.
ii) ROM: Read Only Memory:
- It is non-volatile and is more like permanent storage for information.
- It also stores the bootstrap loader program, used to load and start the operating system when the computer is turned on.
The various types of ROM are:
- PROM
- EPROM
- EEPROM
- FLASH MEMORY
ROM:
- A ROM cell is depicted in the figure (not reproduced here).
- A logic value 0 is stored in the cell if the transistor is connected to ground at point P; otherwise, a 1 is stored.
- The bit line is connected through a resistor to the power supply.
- To read the state of the cell, the word line is activated.
- The transistor switch is then closed, and the voltage on the bit line drops to near zero if there is a connection between the transistor and ground.
- If there is no connection to ground, the bit line remains at the high voltage, indicating a 1.
- A sense circuit at the end of the bit line generates the proper output value.
- Data are written into a ROM when it is manufactured.
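The read operation just described can be summarized in a small Python sketch; `read_rom_cell` and its parameter names are illustrative, and the electrical behavior is reduced to two booleans.

```python
# Sketch of the ROM-cell read described above. A cell stores 0 if its
# transistor is connected to ground at point P, and 1 otherwise.
# `grounded_at_p` is an illustrative name, not from the original notes.

def read_rom_cell(grounded_at_p, word_line_active):
    """Return the logic value sensed on the bit line."""
    if not word_line_active:
        return None                  # cell not selected
    if grounded_at_p:
        return 0                     # bit line pulled to near zero volts
    return 1                         # bit line stays high via the resistor

print(read_rom_cell(grounded_at_p=True,  word_line_active=True))   # 0
print(read_rom_cell(grounded_at_p=False, word_line_active=True))   # 1
```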
PROM:
- It stands for Programmable Read Only Memory.
- It was first developed in the 1970s by Texas Instruments.
- It is manufactured as a blank memory.
- A PROM programmer or PROM burner is required in order to write data onto a PROM chip.
- The data stored in it cannot be modified, and therefore it is also known as a one-time programmable device.
EPROM:
- It stands for Erasable Programmable ROM.
- It is different from PROM in that the program can be written on it more than once.
- This comes as the solution to the problem faced by PROM.
- The memory bits return to 1 when ultraviolet light of a specific wavelength falls on the chip's glass window.
- The cells are thereby reset, and new data can be written to the memory.
EEPROM:
- It stands for Electrically Erasable Programmable Read Only Memory.
- It is erasable like EPROM, but erasing is performed electrically, so the chip can be erased even while it remains in the computer.
- It commonly stores the computer system's BIOS.
- Unlike EPROM, the entire chip does not have to be erased to change some portion of it.
- Thus, it removes some of the biggest challenges faced when using EPROMs.
FLASH ROM:
- It is an updated version of EEPROM.
- In EEPROM, it is not possible to alter many memory locations at the same time.
- Flash memory provides this advantage over EEPROM by enabling many locations to be altered simultaneously.
- It was invented by Toshiba and got its name from its capability of deleting a block of data "in a flash".
Flash Cards:
- One way of constructing a larger module is to mount flash chips on a small card.
- Such flash cards have a standard interface that makes them usable in a variety of products.
- A card is simply plugged into a conveniently accessible slot.
- Flash cards come in a variety of memory sizes; typical sizes are 8, 32, and 64 Mbytes.
Flash Drives:
- Larger flash memory modules have been developed to replace hard disk drives.
- These flash drives are designed to fully emulate hard disks, to the point that they can be fitted into standard disk drive bays.
- However, the storage capacity of flash drives is significantly lower; currently, it is less than one gigabyte.
3. Memory System Considerations:
- The choice of a RAM chip for a given application depends on several factors, such as cost, speed, power dissipation, and size of the chip.
Memory controller:
- The address is divided into two parts.
- The high-order address bits, which select a row in the cell array, are provided first and latched into the memory chip under control of the RAS signal.
- Then the lower-order address bits, which select a column, are provided on the same address pins and latched using the CAS signal.
- However, a typical processor issues all bits of an address at the same time.
- The required multiplexing of address bits is therefore performed by a memory controller circuit, which is interposed between the processor and the dynamic memory.
- The use of a memory controller is depicted in the figure (not reproduced here).
- The controller accepts a complete address and the R/W signal from the processor, under control of a request signal which indicates that a memory access operation is needed.
- The controller then forwards the row and column portions of the address to the memory and generates the RAS and CAS signals.
- Thus, the controller provides the RAS-CAS timing, in addition to its address multiplexing function.
- It also sends the R/W and CS signals to the memory.
- The CS signal is usually active low.
- Data lines are connected directly between the processor and the memory.
- Note that a clock signal is needed in SDRAM chips.
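A minimal Python sketch of the controller's address-multiplexing step follows, assuming an address that splits evenly into row and column halves; `multiplex_address` is an illustrative name, and a real controller also handles RAS-CAS timing, refresh, and the R/W and CS signals.

```python
# Sketch of the address multiplexing performed by a memory controller:
# split the full address into row and column parts and present them on
# the same pins, first under RAS and then under CAS.

def multiplex_address(address, row_bits, col_bits):
    """Yield the (signal, value) sequence a controller would drive."""
    assert address < (1 << (row_bits + col_bits)), "address too wide"
    row = address >> col_bits               # high-order bits select a row
    col = address & ((1 << col_bits) - 1)   # low-order bits select a column
    yield ("RAS", row)                      # row latched under RAS
    yield ("CAS", col)                      # column latched under CAS

# Example: a 24-bit address split into a 12-bit row and a 12-bit column.
for signal, value in multiplex_address(0xABC123, row_bits=12, col_bits=12):
    print(signal, hex(value))               # RAS 0xabc, then CAS 0x123
```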
4. Cache Memory:
- The cache is a small and very fast memory, interposed between the processor and the main memory.
- Its purpose is to make the main memory appear to the processor to be much faster than it actually is.
- The effectiveness of this approach is based on a property of computer programs called locality of reference.
- Consider the arrangement in which a cache is placed between the processor and the main memory, as depicted in the figure (not reproduced here).
- When the processor issues a Read request, the contents of a block of memory words containing the location specified are transferred into the cache.
- Subsequently, when the program references any of the locations in this block, the desired contents are read directly from the cache.
- Usually, the cache memory can store a reasonable number of blocks at any given time, but this number is small compared to the total number of blocks in the main memory.
- The correspondence between the main memory blocks and those in the cache is specified by a mapping function.
Hit Ratio:
- The performance of cache memory is measured in terms of a quantity called hit ratio.
- When the CPU refers to memory and finds the word in the cache, it is said to produce a hit.
- If the word is not found in the cache but is in main memory, it counts as a miss.
- The ratio of the number of hits to the total number of CPU references to memory is called the hit ratio.
Hit Ratio = Hits / (Hits + Misses)
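As a quick worked check of this formula, here is a minimal Python sketch; the counts are made-up illustration values, not measurements from the notes.

```python
# Hit ratio computed exactly as in the formula above.
def hit_ratio(hits, misses):
    return hits / (hits + misses)

print(hit_ratio(hits=950, misses=50))   # 0.95, i.e. 95% of references hit
```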
*) Mapping Functions:
- There are several possible methods for determining where memory blocks are placed in the cache.
- Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K) words, and assume that the main memory is addressable by a 16-bit address.
- The main memory has 64K words, which we will view as 4K blocks of 16 words each.
i) Direct Mapping:
- The simplest way to determine cache locations in which to store memory blocks is the direct-mapping technique.
- In this technique, block j of the main memory maps onto block (j modulo 128) of the cache, as depicted in the figure (not reproduced here).
- Thus, whenever one of the main memory blocks 0, 128, 256, . . . is loaded into the cache, it is stored in cache block 0.
- Blocks 1, 129, 257, . . . are stored in cache block 1, and so on.
- The memory address can be divided into three fields.
- The low-order 4 bits select one of 16 words in a block.
- When a new block enters the cache, the 7-bit cache block field determines the cache position in which this block must be stored.
- The high-order 5 bits of the memory address of the block are stored in 5 tag bits associated with its location in the cache.
- The tag bits identify which of the 32 main memory blocks mapped into this cache position is currently resident in the cache.
- As execution proceeds, the 7-bit cache block field of each address generated by the processor points to a particular block location in the cache.
- The high-order 5 bits of the address are compared with the tag bits associated with that cache location. If they match, then the desired word is in that block of the cache.
- If there is no match, then the block containing the required word must first be read from the main memory and loaded into the cache.
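A minimal Python sketch of this address split and tag comparison follows, using the 5-bit tag / 7-bit block / 4-bit word layout described above. The function names and the bare tag store are illustrative simplifications (valid bits and the data itself are omitted).

```python
# Sketch of the direct-mapped lookup for a 16-bit address:
# [ 5-bit tag | 7-bit block | 4-bit word ]

def split_direct(address):
    word  = address & 0xF            # low-order 4 bits: word within block
    block = (address >> 4) & 0x7F    # next 7 bits: cache block position
    tag   = address >> 11            # high-order 5 bits: tag
    return tag, block, word

def lookup_direct(cache_tags, address):
    """cache_tags[i] holds the tag of the block resident in position i."""
    tag, block, word = split_direct(address)
    hit = cache_tags[block] == tag   # compare stored tag with address tag
    return hit, block, word

cache_tags = [None] * 128            # 128 cache blocks, initially empty
tag, block, word = split_direct(0x1234)
cache_tags[block] = tag              # load the block on a miss
print(lookup_direct(cache_tags, 0x1234))   # (True, 35, 4)
```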
ii) Associative Mapping:
- The most flexible mapping method is one in which a main memory block can be placed into any cache block position.
- In this case, 12 tag bits are required to identify a memory block when it is resident in the cache.
- The tag bits of an address received from the processor are compared to the tag bits of each block of the cache to see if the desired block is present.
- This is called the associative-mapping technique.
- It gives complete freedom in choosing the cache location in which to place the memory block, resulting in a more efficient use of the space in the cache.
- When a new block is brought into the cache, it replaces (ejects) an existing block only if the cache is full.
- The complexity of an associative cache is higher than that of a direct-mapped cache, because of the need to search all 128 tag patterns to determine whether a given block is in the cache.
- To avoid a long delay, the tags must be searched in parallel.
- A search of this kind is called an associative search.
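Here is a sketch of the associative lookup under the same 16-bit address assumptions; the sequential loop below stands in for what the hardware does in parallel, and the names are illustrative.

```python
# Sketch of the associative lookup: the 12-bit tag (the memory block
# number) is compared against the tag of every cache block.

def lookup_associative(cache_tags, address):
    tag  = address >> 4              # 12-bit tag: the memory block number
    word = address & 0xF             # 4-bit word field
    for position, stored in enumerate(cache_tags):
        if stored == tag:            # a match at any position is a hit
            return True, position, word
    return False, None, word

cache_tags = [None] * 128
cache_tags[77] = 0x123               # block 0x123 resident at any position
print(lookup_associative(cache_tags, 0x1234))   # (True, 77, 4)
```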
iii) Set-Associative Mapping:
- Another approach is to use a combination of the direct- and associative-mapping techniques.
- The blocks of the cache are grouped into sets, and the mapping allows a block of the main memory to reside in any block of a specific set.
- Hence, the contention problem of the direct method is eased by having a few choices for block placement.
- At the same time, the hardware cost is reduced by decreasing the size of the associative search.
- An example of this set-associative-mapping technique, for a cache with two blocks per set, is depicted in the figure (not reproduced here).
- In this case, memory blocks 0, 64, 128, . . . , 4032 map into cache set 0, and they can occupy either of the two block positions within this set.
- Having 64 sets means that the 6-bit set field of the address determines which set of the cache might contain the desired block.
- The tag field of the address must then be associatively compared to the tags of the two blocks of the set to check if the desired block is present.
- This two-way associative search is simple to implement.
- The number of blocks per set is a parameter that can be selected to suit the requirements of a particular computer.
- For the same main memory and cache sizes, four blocks per set can be accommodated by a 5-bit set field, eight blocks per set by a 4-bit set field, and so on.
- The extreme condition of 128 blocks per set requires no set bits and corresponds to the fully-associative technique, with 12 tag bits.
- The other extreme of one block per set is the direct-mapping method.
- A cache that has k blocks per set is referred to as a k-way set-associative cache.
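Finally, a sketch of the two-way set-associative lookup with the 6-bit tag / 6-bit set / 4-bit word split described above; the names and the bare tag store are again illustrative simplifications.

```python
# Sketch of a two-way set-associative lookup for a 16-bit address:
# [ 6-bit tag | 6-bit set | 4-bit word ]

def lookup_set_associative(cache_sets, address, ways=2):
    word    = address & 0xF           # low-order 4 bits: word within block
    set_idx = (address >> 4) & 0x3F   # next 6 bits: one of 64 sets
    tag     = address >> 10           # high-order 6 bits: tag
    for way in range(ways):           # associative search within the set only
        if cache_sets[set_idx][way] == tag:
            return True, set_idx, way, word
    return False, set_idx, None, word

cache_sets = [[None] * 2 for _ in range(64)]   # 64 sets x 2 blocks per set
cache_sets[35][1] = 0x4                        # a resident block's tag
print(lookup_set_associative(cache_sets, 0x1234))   # (True, 35, 1, 4)
```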