Q: 1 The capacity of a memory unit is defined by the number of words multiplied by the number of bits per word. How many separate address and data lines are needed for a memory of 4K × 16?
10 address lines and 16 data lines
12 address lines and 10 data lines
12 address lines and 16 data lines
12 address lines and 8 data lines
[ Option C ]
A memory unit stores data in the form of words. Each word contains a fixed number of bits. The total capacity of a memory is therefore defined as:
Memory Capacity=Number of words × Number of bits per word
To access and transfer data between the CPU and memory, two types of lines are used: address lines (to select a location) and data lines (to carry the word).
The given memory: 4K × 16
4K represents the number of words in memory: 4K = 4 × 1024 = 4096 words.
16 represents the number of bits per word.
Each memory word must have a unique address. The number of address lines required depends on how many different addresses must be generated, i.e., the number of memory locations (words).
Number of address lines = log2(Number of words)
= log2(4096) = 12, since 4096 = 2^12
The memory needs 12 address lines to uniquely identify all 4096 words.
Data lines are used to read or write one complete word at a time. Since each word contains 16 bits, the memory must have 16 data lines.
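The two line counts can be checked with a few lines of Python (a minimal sketch; the helper name is ours):

```python
from math import log2

def memory_lines(words, bits_per_word):
    """Address lines = log2(number of words); data lines = bits per word."""
    return int(log2(words)), bits_per_word

# 4K x 16 memory: 4K = 4 * 1024 = 4096 words of 16 bits each
print(memory_lines(4 * 1024, 16))  # (12, 16)
```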
Q: 2 On a system using a disk cache, the cache access time is 1 ms, the mean disk access time is 100 ms and the hit rate is 40%. What is the mean access time in ms?
40.6 ms
60.4 ms
50.5 ms
66.67 ms
[ Option B ]
A disk cache stores frequently accessed data in fast memory (RAM) to reduce slow disk I/O operations.
Hit Rate is the probability a requested data block is found in cache (40% here). Cache access time (1 ms) is much faster than disk access time (100 ms). Mean Access Time (MAT) combines both using the formula:
MAT = (Hit Rate*Cache Time) + ((1-Hit Rate)*Disk Time)
With cache access time of 1 ms, disk access time of 100 ms, and 40% hit rate, the mean access time calculates as:
MAT = (0.4*1)+(0.6*100) = 0.4+60 = 60.4 ms.
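The same weighted-average formula in runnable form (an illustrative sketch; the function name is ours):

```python
def mean_access_time(hit_rate, cache_ms, disk_ms):
    """Weighted average: hits served at cache speed, misses at disk speed."""
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

# 40% hit rate, 1 ms cache access, 100 ms disk access
print(mean_access_time(0.40, 1, 100))  # 60.4
```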
Q: 3 Spatial locality refers to the fact that once a memory location is referenced,
It will not be referenced again
It will be referenced again
A nearby location will be referenced soon
None of the above
[ Option C ]
In computer architecture, locality of reference describes how programs access memory. One important type is spatial locality, which means that if a particular memory location is accessed, then nearby memory locations are likely to be accessed soon.
For example, when accessing elements of an array sequentially, once one element is accessed, the next nearby elements are also accessed.
Q: 4 An I/O processor controls the flow of information between:
Cache memory and I/O devices
Main memory and I/O devices
Two I/O devices
Cache and main memory
[ Option B ]
An I/O processor is a special-purpose processor used to handle input and output operations independently of the CPU. Its main job is to manage data transfer between I/O devices and Main Memory.
By doing this, the I/O processor reduces the workload on the CPU and allows the CPU to continue executing other instructions while I/O operations are in progress.
Q: 5 Which is most expensive among ERAM, SRAM and DRAM?
SRAM
DRAM
ERAM
All are Same
[ Option A ]
SRAM (Static Random Access Memory) is the most expensive memory among SRAM, DRAM, and ERAM because it is faster, more reliable, and does not require refreshing repeatedly like DRAM.
SRAM uses more transistors for storing each bit, which increases its manufacturing cost. It is commonly used in Cache Memory.
Q: 6 A RAM chip has 8 data lines and 10 address lines, and no address multiplexing is used for addressing the chip. The maximum amount of data (in bits) that can be stored in the RAM chip is
2^10 * 2^3
2^10 * 2^8
2^8 * 10
10 * 8
[ Option A ]
In a RAM chip, the number of address lines determines the number of memory locations, while the number of data lines determines how many bits are stored in each location.
Here, the number of address lines is 10, so the total number of memory locations is 2^10. The number of data lines is 8, which means each location can store 8 bits. Since 8 = 2^3, each location stores 2^3 bits.
Therefore, the total memory capacity of the RAM chip is 2^10 * 2^3 = 2^13 bits.
Q: 7 In the virtual memory system, the address space specified by address line of the CPU must be __________ than the physical memory size and __________ than the secondary storage size.
smaller, smaller
smaller, larger
larger, larger
larger, smaller
[ Option D ]
In a Virtual Memory system, the CPU generates addresses based on its address lines, which define the virtual address space. This virtual address space represents the range of memory addresses that a program can use, and it is not limited to the size of the physical main memory (RAM).
One of the main purposes of virtual memory is to allow programs to use an address space that is larger than the available physical memory, with the extra portion being stored on secondary storage.
However, the virtual address space cannot exceed the size of the secondary storage, because secondary storage is where the non-resident pages of memory are actually stored.
Therefore, the address space specified by the CPU must be larger than the physical memory size but smaller than the secondary storage size.
Q: 8 How many 32K × 1 RAM chips are needed to provide a memory capacity of 256K bytes?
8
128
64
32
[ Option C ]
Each RAM chip is of size 32K × 1: 32K locations with 1 bit per location, so the capacity of one chip is 32K × 1 bit = 32K bits.
Required memory capacity : 256K bytes = 256K*8 = 2048K bits
Number of chips required : 2048K bits/32K bits per chip = 64
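The chip-count arithmetic as a quick Python check:

```python
chip_bits = 32 * 1024 * 1          # one 32K x 1 chip holds 32K bits
required_bits = 256 * 1024 * 8     # 256K bytes = 2048K bits
print(required_bits // chip_bits)  # 64
```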
Q: 9 The main disadvantage of direct mapping of cache organization is that?
It does not allow simultaneous access to the intended data and its tag
It is more expensive than other type of organization
The cache hit ratio is degraded if two or more blocks used alternately map onto the same block frame in the cache
The number of blocks required for the caches increases linearly with the size of the main memory
[ Option C ]
Cache Memory is used to store frequently accessed data so that the CPU can access it faster than main memory.
In direct mapping, each block of main memory is mapped to exactly one fixed location (line) in the cache.
This fixed mapping makes the design simple and inexpensive, but it also introduces a major drawback known as the conflict problem.
When two or more frequently accessed main memory blocks are mapped to the same cache line, they continuously replace each other in the cache.
As a result, even though the blocks are repeatedly used, they cannot remain in the cache simultaneously. This frequent replacement reduces the cache hit ratio and degrades overall performance.
Q: 10 The method of mapping the consecutive memory blocks to consecutive cache blocks is called
Associative
Set-Associative
Direct
Indirect
[ Option C ]
Cache Mapping determines how memory blocks are placed in cache. In Direct Mapping, each memory block is mapped to exactly one specific cache block using a fixed formula.
In this method, consecutive memory blocks are mapped to consecutive cache blocks in a cyclic manner. This makes the mapping simple and fast.
| Technique | Description |
|---|---|
| Direct Mapping | In direct mapping, each memory block is mapped to exactly one fixed cache block using a simple formula, making it fast and low-cost but prone to conflicts. |
| Associative Mapping | In associative mapping, any memory block can be placed in any cache block, providing maximum flexibility and better utilization but requiring complex and costly hardware. |
| Set-Associative Mapping | In set-associative mapping, each memory block maps to a specific set and can be placed in any block within that set, offering a balance between speed, cost, and flexibility. |
Q: 11 In associative mapping during LRU, a new block is identified and its counter is set to “0” and all the others are incremented by one when _________ happens.
Write
Hit
Delayed Hit
Miss
[ Option D ]
In Cache Memory, LRU (Least Recently Used) is a replacement policy used to decide which block should be removed when a new block needs to be inserted.
Each cache block is assigned a counter to track how recently it was used. The block with the highest counter is least recently used.
In LRU, when the requested block is not already in cache, a new block is brought in, its counter becomes 0, and the other counters are incremented. That is exactly the Miss case, not hit or write.
The wording “a new block is identified” points to the moment a block is loaded into the cache, which happens on a Miss.
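A toy simulation of the counter update on a miss (the function and block labels are illustrative only, not from any real cache implementation):

```python
def lru_miss(counters, new_block):
    """On a miss: evict the block with the highest counter (least recently
    used), increment the survivors, and insert the new block at counter 0."""
    victim = max(counters, key=counters.get)
    del counters[victim]
    counters = {b: c + 1 for b, c in counters.items()}
    counters[new_block] = 0
    return counters, victim

cache = {"A": 2, "B": 0, "C": 1}       # A is least recently used
cache, evicted = lru_miss(cache, "D")
print(evicted, cache)                  # A {'B': 1, 'C': 2, 'D': 0}
```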
Q: 12 The cache memory is more effective because of?
Memory Localization
Locality of Reference
Memory Size
None of these
[ Option B ]
Cache Memory works effectively because of the locality of reference principle. This means that programs tend to access the same data or instructions repeatedly (Temporal Locality) or access nearby memory locations (Spatial Locality). Cache stores such frequently or recently used data, so the CPU can access it much faster than main memory.
Q: 13 Consider a memory which stores 8K of 16 bit words. How many address lines are required?
16
13
10
8
[ Option B ]
The number of address lines depends on how many unique memory locations (words) need to be addressed. Here, the memory stores 8K words, and each word is 16 bits, but word size does not affect the number of address lines, only the number of locations matters.
So, 8K = 8*1024 = 8192 Locations. To find the number of address lines n, we use:
2^n = 8192
2^13 = 8192
So, 13 address lines are required.
Q: 14 Which of the following memories have the shortest access time?
RAM
USB
Cache
Disk
[ Option C ]
Access time refers to how quickly data can be read from memory. Among the given options, Cache Memory has the shortest access time because it is located very close to the CPU and is designed for high-speed operations.
Cache stores frequently used data and instructions so that the CPU can access them quickly without going to slower memory levels. Its access time is shorter than that of RAM.
The general speed order from fastest to slowest is:
Registers > Cache > RAM > SSD > HDD (Disk) > USB (Flash Drive)
Q: 15 A RAM chip has a capacity of 1024 words of 8 bits each (1K × 8). The number of 2 × 4 decoders with enable line needed to construct a 16K × 16 RAM from 1K × 8 RAM chips is
4
5
6
7
[ Option B ]
To build a 16K × 16 memory from 1K × 8 chips, we need 16K / 1K = 16 rows of chips to cover all locations, with 16 / 8 = 2 chips per row to cover the word width (32 chips in total).
Selecting one of the 16 rows requires a 4-to-16 decoder. Using 2 × 4 decoders with enable: four second-level decoders provide the 16 row-select outputs, and one first-level decoder drives the enable inputs of those four. Total = 4 + 1 = 5 decoders.
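The answer follows from selecting among 16 rows of chips with a two-level tree of 2 × 4 decoders; the arithmetic in Python (variable names are ours):

```python
rows = (16 * 1024) // (1 * 1024)   # 16 rows of chips to select among
second_level = rows // 4           # each 2x4 decoder drives 4 row selects
first_level = second_level // 4    # one 2x4 decoder enables the 4 above
print(first_level + second_level)  # 5
```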
Q: 16 When a memory write operation updates both main memory and cache memory it is called _________.
Write-through
Write-back
Write-once
None of these
[ Option A ]
In cache memory systems, when the CPU writes data, there are different policies to decide how memory is updated.
In the WRITE-THROUGH policy, whenever data is written to the cache, the same data is immediately written to the main memory as well. This ensures that both cache and main memory always remain consistent.
So, if a memory write operation updates both cache and main memory at the same time, it is called Write-through.
| POLICY | DESCRIPTION |
|---|---|
| Write-Through | Whenever the CPU writes data to the cache, the same data is immediately written to main memory. Ensures strong consistency between cache and memory. Simpler design but slower due to frequent memory writes. |
| Write-Back (Copy-Back) | Data is written only to the cache. Main memory is updated later when the modified cache block is replaced. Uses a dirty bit to track changes. Faster but slightly complex. |
| Write-Once | A hybrid policy. The first write updates both cache and memory (like write-through). Subsequent writes update only the cache (like write-back). Rarely used in modern systems. |
| Write-Around | Data is written directly to main memory without updating the cache. Cache is updated only when data is read again. Helps reduce cache pollution. |
Q: 17 What is the maximum memory that can be accessed by using 16 address lines?
4K
16K
32K
64K
[ Option D ]
The maximum memory that can be accessed depends on the number of address lines. If there are n address lines, then the total number of unique addresses that can be generated is 2^n.
Here, the number of address lines is 16, so the total number of memory locations is 2^16 = 65536. Each location typically stores 1 byte, so the total memory is 65536 bytes = 64 KB.
Q: 18 Which of the following is an advantage of memory interlacing?
A large memory is obtained
A non-volatile memory is obtained
The cost of the memory is reduced
Effective speed of the memory is increased
[ Option D ]
Memory Interlacing (Memory Interleaving) is a technique in which main memory is divided into several independent modules (banks) and addresses are distributed across them in a round‑robin fashion.
When the CPU accesses memory, different banks can be accessed in parallel or overlapped, so the next piece of data is already being read from another bank while the previous one is being processed. This improves memory bandwidth and reduces effective access time, which is perceived as higher effective speed of memory.
Q: 19 Which of the following is the fastest memory?
Cache
RAM
Register
Secondary Storage
[ Option C ]
Computer memory forms a hierarchy based on speed, cost, and capacity. Faster memory sits closer to the CPU for quick access during instruction execution, while slower memory holds bulk data.
Registers are the tiniest, fastest storage inside the CPU itself. They store small amounts of data such as operands, instructions, and intermediate results, allowing very high-speed operations.
| MEMORY TYPE | LOCATION | CAPACITY | COST PER BIT | USE CASE |
|---|---|---|---|---|
| Register | Inside CPU | Bits | Highest | Active ALU Operations. |
| Cache | CPU Chip | KB-MB | High | Frequent Data. |
| RAM | Motherboard | GB | Medium | Programs or Data. |
| Secondary | Peripherals | TB | Lowest | Files or OS storage. |
Q: 20 Let the memory access time be 10 milliseconds and the cache access time be 10 microseconds. If the cache hit ratio is 15%, then the effective memory access time is?
2 milliseconds
1.5 milliseconds
1.85 microseconds
1.85 milliseconds
[ Option D ]
Effective Memory Access Time (EMAT) depends on how often data is found in the cache (hit) and how often it must be fetched from main memory (miss).
Given: cache access time = 10 µs = 0.01 ms, main memory access time = 10 ms, hit ratio = 0.15.
EMAT = (Hit Ratio × Cache Access Time) + (Miss Ratio × Main Memory Access Time)
EMAT = (0.15 × 0.01 ms) + (0.85 × 10 ms)
EMAT = 0.0015 ms + 8.5 ms = 8.5015 ms
With the figures as stated, the formula gives 8.5015 ms, which matches none of the options. The keyed answer of 1.85 ms would follow from, for example, a cache access time of 1 ms and a memory access time of 2 ms (0.15 × 1 + 0.85 × 2 = 1.85 ms), so the question's numbers appear inconsistent with its answer key.
Q: 21 The memory unit which directly communicates with the CPU is known as?
Primary Memory
Secondary Memory
Shared Memory
Auxiliary Memory
[ Option A ]
The CPU can directly access only one type of memory, which is the memory used to store programs and data that are currently being executed. This type of memory is called Primary Memory or Main Memory or RAM.
Q: 22 How many 16 K * 1 bit RAM chips are needed to provide a memory capacity of 128 K * 1 byte?
8
32
64
128
[ Option C ]
Memory capacity is represented in the form of:
Number of Memory Locations * Number of Bits per Location
A RAM chip of 16K * 1 bit means the chip contains 16K memory locations and each location can store 1 bit of data.
The required memory capacity is 128K * 1 byte. Since 1 byte = 8 bits, the required memory becomes 128K * 8 bits.
Now divide the required memory capacity by the capacity of one RAM chip: (128K × 8) / (16K × 1) = 64.
Therefore, 64 RAM chips are required to provide a memory capacity of 128K * 1 byte.
Q: 23 If number of address lines is increased by 1, then memory capacity will increase by _________.
No Change
Twice
Four Times
Ten Times
[ Option B ]
In computer organization, the number of address lines determines how many unique memory locations can be accessed. If there are n address lines, then the total number of addressable memory locations is given by 2^n.
When the number of address lines is increased from n to (n + 1), the total memory locations become 2^(n+1) = 2 × 2^n. This means the memory capacity becomes twice the original.
Q: 24 How many address lines are needed to address each memory locations in a 2048 * 4 memory chip?
10
11
8
12
[ Option B ]
In memory organization, the number of address lines depends on how many unique memory locations need to be accessed. If a memory chip has N locations, then the number of address lines required is:
Address Lines = log2(N)
In the given memory chip 2048*4, 2048 represents the number of memory locations and 4 represents the number of bits per location, which is not relevant for address lines.
Since 2048 = 2^11, a total of 11 address lines are required to uniquely access all memory locations.
Q: 25 Which of the following is not a form of memory?
Instruction Cache
Instruction Register
Instruction op-code
Translation Look-aside Buffer
[ Option C ]
In computer architecture, memory refers to components that store data or instructions.
However, an instruction op-code is not a memory unit. It is just a part of an instruction that specifies the operation to be performed like ADD, SUB, etc.
Q: 26 How many address lines are needed to address a memory of 512 bytes?
6
7
8
9
[ Option D ]
Address Lines in Memory
Given: Memory size = 512 bytes
Number of address lines n is calculated using:
2^n = Memory size in bytes
2^n = 512
Since 2^9 = 512, we find n = 9.
Q: 27 A main memory has an access time of 45 ns. A 5 ns gap is necessary between the completion of one access and the beginning of the next access. The bandwidth of the memory is?
25 MHz
20 MHz
40 MHz
50 MHz
[ Option B ]
The bandwidth of memory tells us how many memory accesses can be completed per second. To find this, we must first calculate the total time required for one complete memory cycle.
Given,
Access time = 45ns
Time gap before next access = 5ns
Total Cycle Time : 45ns + 5ns = 50ns
Bandwidth is the reciprocal of the cycle time:
Bandwidth : 1 / (Access Time + Recovery Time)
Bandwidth : 1 / (50 × 10^-9), since 1 ns = 10^-9 seconds
Bandwidth : 10^9 / 50 = 20 × 10^6 Hz
Bandwidth : 20 MHz
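The same calculation in integer nanoseconds (a quick sketch):

```python
access_ns, recovery_ns = 45, 5
cycle_ns = access_ns + recovery_ns        # 50 ns per complete access
accesses_per_second = 10**9 // cycle_ns   # one second = 10^9 ns
print(accesses_per_second)                # 20000000, i.e. 20 MHz
```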
Time and Frequency Units:
| UNIT | SYMBOL | VALUE (POWER OF 10) | EXAMPLE |
|---|---|---|---|
| Picosecond | ps | 10^-12 | 1 ps = 10^-12 s |
| Nanosecond | ns | 10^-9 | 1 ns = 10^-9 s |
| Microsecond | µs | 10^-6 | 1 µs = 10^-6 s |
| Millisecond | ms | 10^-3 | 1 ms = 10^-3 s |
| Second | s | 10^0 | 1 s |
| Hertz | Hz | 10^0 | 1 Hz |
| Kilohertz | kHz | 10^3 | 1 kHz = 10^3 Hz |
| Megahertz | MHz | 10^6 | 1 MHz = 10^6 Hz |
| Gigahertz | GHz | 10^9 | 1 GHz = 10^9 Hz |
| Terahertz | THz | 10^12 | 1 THz = 10^12 Hz |
Q: 28 A 4 way set-associative cache memory unit with a capacity of 16 KB is built using a block size of 8 words. The word length is 32 bits. The size of the physical address space is 4 GB. The number of bits for the tag field is
20
18
19
21
[ Option A ]
In cache memory, a physical address is divided into three parts, Tag | Index (Set) | Block Offset.
Physical Address Space = 4 GB = 2^32 bytes. So, Total Address Bits = 32 bits
Block Size = 8 words and 1 word = 32 bits = 4 bytes. So, Block Size = 8 × 4 = 32 bytes = 2^5 bytes. Therefore, Block Offset = 5 bits
Cache Size = 16 KB = 2^14 bytes. Number of Blocks = 2^14 / 2^5 = 2^9 = 512 blocks
Since it is 4-way Set Associative:
Number of Sets = 512 / 4 = 128 = 2^7. Therefore, Index Bits = 7 bits
Tag Bits = Total Address Bits - (Index Bits + Offset Bits)
Tag = 32 - (7 + 5) = 20 bits
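The field-width breakdown can be verified in Python (a minimal sketch):

```python
from math import log2

address_bits = 32                        # 4 GB = 2^32 bytes
block_bytes = 8 * 4                      # 8 words x 4 bytes per word
offset_bits = int(log2(block_bytes))     # 5
sets = (16 * 1024) // block_bytes // 4   # cache bytes / block size / ways = 128
index_bits = int(log2(sets))             # 7
print(address_bits - index_bits - offset_bits)  # 20
```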
Q: 29 Increasing the RAM of the PC improves performance because
Large RAMs are fast
Fewer page faults
Virtual memory increases
Fewer memory access
[ Option B ]
Increasing the RAM improves system performance mainly because it reduces page faults. A page fault occurs when the required data is not found in main memory and must be fetched from secondary storage, which is much slower.
Q: 30 Dynamic RAM consumes ________ power and ________ than static RAM.
more, faster
more, slower
less, slower
less, faster
[ Option C ]
Dynamic RAM (DRAM) stores data using capacitors and needs periodic refreshing to retain data. Because of this design, DRAM consumes less power compared to Static RAM (SRAM), which uses flip-flops and consumes more power continuously.
However, the access time of DRAM is slower than SRAM because refreshing and capacitor charging take extra time.
Q: 31 The performance of cache memory is measured in terms of
Seek Time
Access Time
Hit Ratio
Latency
[ Option C ]
Cache memory is a small, high-speed memory placed between the CPU and main memory. Its purpose is to reduce average memory access time by storing frequently used data.
The most important performance metric of cache memory is the Hit Ratio.
Hit Ratio = Number of cache hits / Total memory references
Cache Hit : Data is found in cache.
Cache Miss : Data is not found in cache and must be fetched from main memory.
A higher hit ratio means more references are served at cache speed, giving better overall performance. The remaining options refer to different concepts:
| TERM | DESCRIPTION |
|---|---|
| Seek Time | It is the time required to move disk head to required track. Used in hard disks. |
| Access Time | General term for time to read/write memory. |
| Latency | Delay before data transfer begins. |
Q: 32 The number of successful access to memory stated as a fraction is called as __________.
Access Rate
Success Rate
Miss Rate
Hit Rate
[ Option D ]
In memory systems (cache memory), the Hit Rate refers to the fraction of memory accesses that are successful, meaning the required data is found in the cache.
Hit Rate = (Number of Successful Accesses) / (Total Memory Access)
Miss Rate is the fraction of memory accesses in which the required data is not found in the cache and must be fetched from main memory.
Miss Rate = (Number of Misses) / (Total Memory Access)
Q: 33 A CPU has a 12 bit address for memory addressing. If the memory has a total capacity of 16 KB, what is the word length of the memory?
2 bytes
4 bytes
8 bytes
16 bytes
[ Option B ]
The number of address lines tells us how many memory locations (words) can be addressed.
With a 12-bit address, the CPU can address: 2^12 = 4096 words
The total memory capacity is given as 16 KB.
16 KB = 16×1024 = 16384 bytes
Now, word length means the number of bytes per word. It can be calculated as:
Word Length = Total memory size (in bytes) / Number of words
Word Length = 16384 / 4096 = 4 bytes.
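The division as a one-line Python check:

```python
words = 2 ** 12              # 12 address bits -> 4096 addressable words
total_bytes = 16 * 1024      # 16 KB capacity
print(total_bytes // words)  # 4 bytes per word
```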
Thank you so much for taking the time to read my Computer Science MCQs section carefully. Your support and interest mean a lot, and I truly appreciate you being part of this journey. Stay connected for more insights and updates! If you'd like to explore more tutorials and insights, check out my YouTube channel.
Don’t forget to subscribe and stay connected for future updates.