
CS301: Computer Architecture Certification Exam Answers

Computer architecture refers to the design of computer systems, encompassing the structure and organization of computer components and how they interact to execute instructions. It involves both hardware and software aspects, including instruction sets, processor microarchitecture, memory systems, input/output mechanisms, and system organization.

Here are some key aspects of computer architecture:

  1. Instruction Set Architecture (ISA): This defines the set of instructions that a processor can execute, along with their formats and addressing modes.
  2. Processor Design: This includes the design of the central processing unit (CPU), which executes instructions fetched from memory. It encompasses aspects like pipelining, superscalar execution, out-of-order execution, and speculative execution.
  3. Memory Hierarchy: This refers to the organization of different levels of memory in a computer system, such as registers, cache memory, main memory (RAM), and secondary storage (hard drives, SSDs). The memory hierarchy is designed to optimize performance by providing fast access to frequently accessed data; a short C sketch after this list illustrates the effect.
  4. Input/Output (I/O): This involves the mechanisms for transferring data between the computer and external devices, such as keyboards, mice, displays, storage devices, and networks.
  5. System Interconnects: This includes the buses, switches, and other components that enable communication between different parts of the computer system, such as the CPU, memory, and I/O devices.
  6. Parallelism and Concurrency: Modern computer architectures often incorporate parallelism at various levels, such as instruction-level parallelism (ILP), thread-level parallelism (TLP), and data-level parallelism (DLP), to improve performance.
  7. Power Efficiency: With the increasing prevalence of mobile and battery-powered devices, power efficiency has become a critical consideration in computer architecture design.
  8. Security: Computer architecture must also address security concerns, such as preventing unauthorized access to data, protecting against malware, and ensuring the integrity and confidentiality of information.
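
As promised under Memory Hierarchy above, here is a minimal C sketch (my own illustration, not part of the CS301 material): it sums the same large array once sequentially and once with a large stride. The array size, stride, and use of clock() are arbitrary choices; the point is only that the access pattern, not the amount of arithmetic, determines how well the cache hierarchy is used.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)      /* 16M ints, larger than a typical last-level cache */
    #define STRIDE 4096      /* jump far enough apart to defeat spatial locality */

    /* Touch every element in order: consecutive accesses reuse the same cache lines. */
    static long sequential_sum(const int *a) {
        long s = 0;
        for (size_t i = 0; i < N; i++) s += a[i];
        return s;
    }

    /* Touch the same elements, but scattered: most accesses miss the cache. */
    static long strided_sum(const int *a) {
        long s = 0;
        for (size_t start = 0; start < STRIDE; start++)
            for (size_t i = start; i < N; i += STRIDE)
                s += a[i];
        return s;
    }

    int main(void) {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = (int)(i & 0xFF);

        clock_t t0 = clock();
        long s1 = sequential_sum(a);
        clock_t t1 = clock();
        long s2 = strided_sum(a);
        clock_t t2 = clock();

        printf("sequential: %ld (%.3f s)\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("strided:    %ld (%.3f s)\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(a);
        return 0;
    }

On most machines the strided pass is noticeably slower even though it performs the same additions, which is exactly the behaviour the register/cache/RAM/disk hierarchy is designed to exploit.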

Computer architects strive to design systems that balance performance, power efficiency, cost, and other factors to meet the requirements of specific applications and use cases. They often use simulation, modeling, and performance analysis techniques to evaluate design choices and optimize system performance.
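
One of the simplest analytical models used in that kind of evaluation is Amdahl's law, which also underlies several of the speedup options listed further down. Below is a small C sketch of the formula; the 90% parallel fraction and the processor counts are illustrative values of my own, not figures taken from the exam.

    #include <stdio.h>

    /* Amdahl's law: overall speedup when a fraction p of the work is sped up
     * by a factor s (e.g. run on s processors) and the rest stays serial.   */
    static double amdahl_speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main(void) {
        double p = 0.9;                        /* assume 90% of the program parallelizes */
        int counts[] = {2, 4, 8, 16, 64, 1024};
        int n = (int)(sizeof counts / sizeof counts[0]);
        for (int i = 0; i < n; i++)
            printf("%4d processors -> speedup %.2f\n",
                   counts[i], amdahl_speedup(p, counts[i]));
        return 0;
    }

The numbers it prints show the diminishing returns that the multiprocessor questions allude to: with 90% of the work parallelizable, the overall speedup can never exceed 10x, no matter how many processors are added.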

CS301: Computer Architecture Exam Quiz Answers

CS301 Computer Architecture
  • More than one program in memory
  • More than one memory in the system
  • More than one processor in the system
  • More than two processors in the system
  • The ALU
  • Back to memory
  • The program counters
  • The instruction registers
  • CPU chip
  • Floppy disk
  • Hard disk
  • Memory chip
  • Apple’s iMacs
  • IBM’s Watson
  • Mobile devices
  • Supercomputers
  • Instruction Register
  • Memory Data Register
  • Memory Address Register
  • Program Counter Register
Computer Architecture 1
  • 00
  • 01
  • 10
  • 11
  • 00111110
  • 11000001
  • 11000010
  • 11100010
  • 01001010
  • 01001011
  • 01101010
  • 11001111
  • 00010000011001010000000000000101
  • 00010000011001010000000000001010
  • 00100000011001010000000000000101
  • 00100000011001010000000000001010
  • ST and LD
  • JR and BEQ
  • ADD and SUB
  • PUSH and POP
  • add
  • jr
  • ld
  • or
[Karnaugh map: inputs a, b across the columns (00 01 11 10) and c, d down the rows (00 01 11 10); the cells contain 1s and don't-care (X) entries]
  • b’d’ + a’b
  • ab’ + a’d’
  • d’ + ab’
  • ac + a’bd’
  • There are three stages
  • There is a clock line going to each full adder
  • The adder is slower than a carry-lookahead adder
  • Extra gates are needed besides the full adder gates
Computer Architecture 2
  • [ab + a’b’] S’ + [a’b + ab’] S
  • [ab + a’b] S’ + [a’b’ + ab’] S
  • [a’b + a’b’] S’ + [ab + ab’] S
  • [ab’ + a’b] S’ + [ a’b’ + ab] S
  • Loop a times {
        b = b + b
    } answer = b
  • c = 0
    Loop a times {
        c = c + b
    } answer = b
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = n
        if (b < 0) answer = n - 1
    }
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = 0
        if (b < 0) answer = b + a
    }
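
For context, the four pseudocode options above appear to revolve around implementing multiplication as repeated addition and quotient finding as repeated subtraction. A minimal C rendering of those two ideas, written as my own sketch rather than as the graded answer, is:

    #include <stdio.h>

    /* Multiply a * b by adding b to an accumulator a times. */
    static int multiply_by_addition(int a, int b) {
        int c = 0;
        for (int i = 0; i < a; i++)
            c = c + b;
        return c;
    }

    /* Integer quotient b / a by subtracting a until the remainder would go negative. */
    static int divide_by_subtraction(int b, int a) {
        int n = 0;
        while (b >= a) {     /* assumes a > 0 and b >= 0 */
            b = b - a;
            n++;
        }
        return n;
    }

    int main(void) {
        printf("6 * 7  = %d\n", multiply_by_addition(6, 7));   /* 42 */
        printf("17 / 5 = %d\n", divide_by_subtraction(17, 5)); /* 3  */
        return 0;
    }

Here multiply_by_addition mirrors the "c = c + b" loop and divide_by_subtraction mirrors the "b = b - a" loop with a counter.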

  • 1 XOR, 1 AND, 2 OR
  • 1 XOR, 2 AND, 1 OR
  • 2 XOR, 2 AND, 1 OR
  • 2 XOR, 1 AND, 2 OR
  • PC
  • PC+4
  • 2*PC
  • 2*PC-1
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • Control hazard
  • Static parallelism
  • Dynamic parallelism
  • Speculative execution
  • It is more expensive than other types of cache organizations
  • Its access time is greater than that of other cache organizations
  • Its cache hit ratio is typically worse than with other organizations
  • It does not allow simultaneous access to the intended data and its tag
  • 0
  • 1
  • 3
  • 5
  • A disk
  • A cache
  • The register files
  • The main memory
  • Disk
  • Cache
  • Page table
  • Virtual Memory
  • Asynchronous
  • External
  • Internal
  • Synchronous
  • There are no redundant check disks
  • The number of redundant check disks is equal to the number of data disks
  • The number of redundant check disks is less than the number of data disks
  • The number of redundant check disks is more than the number of data disks
  • C/C++
  • MPI
  • OpenMP
  • Python
  • There is no improvement in performance as the number of processors increases
  • There is a diminishing improvement in performance as the number of processors increases
  • There is an increasing improvement in performance as the number of processors increases
  • There can be no more than a 5 times improvement in performance as the number of processors increases
  • 1.5 times faster
  • 1.67 times faster
  • 2 times faster
  • 3 times faster
  • Uniform memory access
  • A single physical address space
  • One physical address space per processor
  • Multiple memories shared by multiprocessors
  • SIMD Machines
  • MIMD machines
  • Shared Memory Multiprocessors
  • Distributed Shared Memory Multiprocessors
  • Most programs are too long
  • The use of cache memory for data
  • The use of cache memory for instructions
  • Because of compiler limitations
  • It is a processor that has multiple levels of cache
  • It is a processor that is efficient for all types of computing
  • It is a special purpose processor only useful for graphics processing
  • It is a processor used in all types of applications that involve data parallelism
  • 00101001.11
  • 00110100.11
  • 00110110.10
  • 00111011.01
  • In the stack
  • In the memory
  • In the CPU register
  • After the opcode in the instruction
  • F = x + y’z
  • F = xy’ + yz + xz
  • F = xy + y’z + xz
  • F = xy’z + xy’z’ + x’yz + x’yz
  • AND, OR
  • OR, NOT
  • XOR, OR
  • XOR, AND
  • AND gates and MUXes
  • NOT gates and MUXes
  • OR gates and DEMUXes
  • XNOR gates and DECODERs
  • 2
  • 3
  • 4
  • 5
  • A data hazard
  • A memory fault
  • A control hazard
  • A structural hazard
  • Value prediction
  • Branch prediction
  • Memory unit forwarding
  • Execution unit forwarding
  • Carry lookahead
  • Branch prediction
  • Register renaming
  • Out of order execution
  • “Hit under miss”
  • High associativity
  • Multiported caches
  • Segregated caches
  • Cache, Main Memory, Disk, Register
  • Cache, Main Memory, Register, Disk
  • Cache, Register, Main Memory, Disk
  • Register, Cache, Main Memory, Disk
  • Cache memory
  • Volatile memory
  • Non-cache memory
  • Non-volatile memory
  • 2
  • 4
  • 16
  • 32
  • Threads may use local variables
  • Threads may use private variables
  • Threads may use shared variables
  • Using a semaphore is not effective
  • Increase in speed of processor chips
  • Increase in power density of the chip
  • Increase in video and graphics processing
  • Increase in cost of semiconductor manufacturing
  • Load balancing
  • Grid computing
  • Web search engine
  • Scientific computing
  • A Monte Carlo integration
  • Any highly sequential program
  • A C++ program with lots of for loops
  • A program with fine-grained parallelism
  • Clock frequency
  • Transistors on a chip
  • Processors on a chip
  • Chip power consumption
  • Controlled transfer
  • Conditional transfer
  • Uncontrolled transfer
  • Unconditional transfer
  • 6E
  • 7D
  • 8A
  • B5
  • 1.0 × 10⁻⁹
  • 10.0 × 10⁻⁹
  • 100.00 × 10⁻⁹
  • 1000.00 × 10⁻⁹
  • Commander
  • Compiler
  • Interpreter
  • Simulator
  • add
  • beq
  • jr
  • ld
  • Data memory and Register File take part
  • Instruction memory and data memory take part
  • Instruction memory, ALU, and register take part
  • Instruction memory, Register File, ALU, and data memory take part
  • Cache
  • Register
  • Hard disk
  • Main memory
  • The synchronous bus is better: 20.1 vs. 15.3 MB/s
  • The synchronous bus is better: 30 vs. 18.2 MB/s
  • The asynchronous bus is better: 13.3 vs. 11.1 MB/s
  • The asynchronous bus is better: 20.1 vs. 15.3 MB/s
  • RAID 4 does not use parity
  • RAID 4 uses bit-interleaved parity
  • RAID 4 uses block-interleaved parity
  • RAID 4 uses distributed block-interleaved parity
  • Multiple threads are used in multiple cores
  • Multiple threads are used in multiple processors
  • Multiple threads share a single processor, but do not overlap
  • Multiple threads share a single processor in an overlapping fashion
  • It stays the same
  • It decreases to zero
  • It approaches the execution time of the sequential part of the code
  • It approaches the execution time of the non-sequential part of the code
Computer Architecture 2
  • 1 state, 2 inputs, 2 outputs
  • 2 states, 2 inputs, 1 output
  • 3 states, 1 input, 2 outputs
  • 3 states, 2 inputs, 1 output
  • A computer that is used by one person only
  • A computer that runs only one kind of software
  • A computer that is assigned to one and only one task
  • A computer that is meant for application software only
  • DTL
  • PMOS
  • RTL
  • TTL
[Karnaugh map: inputs a, b across the columns (00 01 11 10) and c, d down the rows (00 01 11 10); the cells contain 1s and don't-care (X) entries]
  • cd’ + bd
  • c’ + ab’
  • c’d + b’d’
  • ad + b’d’
a b c | z
0 0 0 | 0
0 0 1 | 1
0 1 0 | 1
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

Select one:

  • a + b
  • b + c
  • ac + b
  • a’b + c
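
The truth table above is small enough to test each candidate against by brute force. Reading the table as transcribed, the expression b + c reproduces the z column; the short C check below is my own verification aid (not part of the exam material) and simply enumerates all eight input combinations.

    #include <stdio.h>

    int main(void) {
        /* z column of the truth table above, indexed by the 3-bit value abc (a is the MSB). */
        const int z[8] = {0, 1, 1, 1, 0, 1, 1, 1};

        int all_match = 1;
        for (int abc = 0; abc < 8; abc++) {
            int a = (abc >> 2) & 1, b = (abc >> 1) & 1, c = abc & 1;
            int candidate = b | c;              /* the expression b + c */
            if (candidate != z[abc]) all_match = 0;
            printf("a=%d b=%d c=%d  z=%d  b+c=%d\n", a, b, c, z[abc], candidate);
        }
        printf(all_match ? "b + c matches every row\n" : "mismatch found\n");
        return 0;
    }
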
  • Loop a times {
        b = b + b
    } answer = b
  • c = 0
    Loop a times {
        c = c + b
    } answer = b
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = n
        if (b < 0) answer = n - 1
    }
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = 0
        if (b < 0) answer = b + a
    }

  • The decoding of the instruction
  • The reading of the program counter value
  • The execution of operation using the ALU
  • The fetching of the instruction from the instruction memory
  • Decode the instruction; execute the instruction; transfer the data
  • Decode the instruction; transfer the data; execute the instruction
  • Execute the instruction; decode the instruction; transfer the data
  • Transfer the data; execute the instruction; decode the instruction
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • Caching
  • Pipelining
  • Carry lookahead
  • Branch prediction
  • Pipelining
  • Data hazard
  • Concurrency
  • Instruction level parallelism
  • The cache block number
  • Whether there is a write-through or not
  • Whether the requested word is in the cache or not
  • Whether the cache entry contains a valid address or not
  • A disk
  • A cache
  • The register files
  • The main memory
  • Tape drive; PT
  • PT; victim cache
  • Dcache; Write buffer
  • Dcache; Main memory
  • The synchronous bus is better: 25 vs. 18.2 MB/s
  • The synchronous bus is better: 30 vs. 25.2 MB/s
  • The asynchronous bus is better: 13.3 vs. 11.1 MB/s
  • The asynchronous bus is better: 30 vs. 25.2 MB/s
  • 100.2 MB/s
  • 130.6 MB/s
  • 150.8 MB/s
  • 170.0 MB/s
  • Asynchronous
  • External
  • Internal
  • Synchronous
  • There are no redundant check disks
  • The number of redundant check disks is equal to the number of data disks
  • The number of redundant check disks is less than the number of data disks
  • The number of redundant check disks is more than the number of data disks
  • 1.3333
  • 2
  • 2.6666
  • 8
  • Weak scaling
  • Timing issues
  • Strong scaling
  • Communication overhead
  • DTL, RTL, CMOS, TTL
  • DTL, RTL, TTL, CMOS
  • RTL, DTL, TTL, CMOS
  • RTL, TTL, DTL, CMOS
  • 1
  • n
  • log n
  • 2n
  • Decoding the instruction
  • Reading the program counter value
  • Executing the operation using the ALU
  • Fetching the instruction from the instruction memory
  • The program counters
  • The output of the ALU
  • Data from data memory
  • Decoding instructions from instruction memory
  • The number of pipe stages
  • 5 times that of a non-pipelined machine
  • The ratio of the fetch cycle period to the clock period
  • The ratio of time between instructions and clock cycle time
  • Value prediction
  • Branch prediction
  • Memory unit forwarding
  • Execution unit forwarding
  • 131.0 MB/s
  • 229.4 MB/s
  • 327.9 MB/s
  • 350.1 MB/s
  • Ranking a linked list
  • A matrix multiplication
  • Any highly sequential program
  • A program with fine-grained parallelism
