Computer Organization and Architecture. The study materials provided in this web course are intended for a first-level course on Computer Organization and Architecture. The term Computer Architecture was first defined in the 1964 paper by Amdahl, Blaauw, and Brooks, in which machine implementation was defined as the actual system organization and hardware.
Functional units of a Computer: A computer consists of five functionally independent main parts: the input, memory, arithmetic and logic, output, and control units. Input Unit: Computers accept coded information through input units. The most common input device is the keyboard. Whenever a key is pressed, the corresponding letter or digit is automatically translated into its binary code and transmitted to the processor. Many other kinds of input devices, such as the mouse, joystick, and trackball, are used for human-computer interaction; these are often used as graphic input devices in conjunction with displays.
1. Relative addressing 2. Base-register addressing 3. Indexing. To compare the relative performance of two different computers, X and Y, the phrase "X is faster than Y" is used to mean that the response time or execution time of a given task is lower on X than on Y. In particular, "X is n times faster than Y" means that Execution time(Y) / Execution time(X) = n. Since execution time is the reciprocal of performance, n = Performance(X) / Performance(Y). This shows that the performance of X is n times higher than that of Y.
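As a quick numerical sketch of the relationship above (the execution times below are hypothetical):

```python
# Relative performance of two machines from measured execution times.
# Performance is the reciprocal of execution time, so
#   n = time_Y / time_X = performance_X / performance_Y.

def speedup(time_x: float, time_y: float) -> float:
    """Return n such that "X is n times faster than Y"."""
    return time_y / time_x

# Hypothetical measurements: X finishes the task in 10 s, Y in 15 s.
n = speedup(10.0, 15.0)
print(n)  # 1.5 -> X is 1.5 times faster than Y
```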
Processor Performance Equation: All computers are constructed using a clock running at a constant rate. These discrete time events are called ticks, clock ticks, clock periods, clocks, cycles, or clock cycles. Computer designers refer to the time of a clock period either by its duration or by its rate (the reciprocal of the duration). Problem: A program runs in 20 seconds on computer A, which has an 8 GHz clock. Another computer, B, is to be built that will run this program in 12 seconds.
For this, computer B will require 1.2 times as many clock cycles as computer A. What is the clock rate of B? If we know the total number of clock cycles and the instruction count, we can calculate the average number of clock cycles per instruction (CPI): CPI = CPU clock cycles / Instruction count. It is difficult to change one parameter in complete isolation from the others, because the basic technologies involved in changing each characteristic are interdependent: Clock cycle time - hardware technology and organization; CPI - organization and instruction set architecture; Instruction count - instruction set architecture and compiler technology. Problem: Compare the performance of two machines, C1 and C2, both clocked at 3 GHz but with different CPIs.
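A minimal sketch of these relationships in code; the 1.2x cycle count for machine B and the CPI figures for C1 and C2 are assumed values for illustration:

```python
# CPU time = (instruction count * CPI) / clock rate.

def cpu_time(instr_count: float, cpi: float, clock_hz: float) -> float:
    return instr_count * cpi / clock_hz

# Clock-rate problem: A runs the program in 20 s at 8 GHz; B must run it
# in 12 s but (assumed) needs 1.2 times as many clock cycles as A.
cycles_a = 20 * 8e9           # total cycles executed on A
cycles_b = 1.2 * cycles_a     # cycles needed on B
clock_b = cycles_b / 12       # required clock rate of B
print(clock_b / 1e9)          # 16.0 -> B needs a 16 GHz clock

# Two 3 GHz machines C1 and C2 with different (hypothetical) CPIs,
# executing the same 10**9 instructions:
t1 = cpu_time(1e9, 1.5, 3e9)  # C1, assumed CPI = 1.5
t2 = cpu_time(1e9, 1.0, 3e9)  # C2, assumed CPI = 1.0
print(t1, t2)                 # C2 finishes first
```

The same equation shows why all three factors matter: halving the CPI or the instruction count has exactly the same effect on CPU time as doubling the clock rate.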
Since the earliest machines were programmed in assembly language and memory was slow and expensive, the CISC philosophy made sense, and it was commonly implemented in such large computers as the DEC PDP series and the DECsystem-10 and DECsystem-20 machines.
CISC was developed to make compiler development simpler. It shifts most of the burden of generating machine instructions to the processor. For example, instead of making a compiler emit a long sequence of machine instructions to calculate a square root, a CISC processor would have a built-in instruction for it.
Some common characteristics of CISC instruction sets are: 1. Two-operand format, where instructions have a source and a destination. 2. Register-to-register, register-to-memory, and memory-to-register commands. 3. Multiple addressing modes for memory, including specialized modes for indexing through arrays. 4. Variable-length instructions, where the length often varies according to the addressing mode. 5. Instructions which require multiple clock cycles to execute. 6. Complex instruction-decoding logic, driven by the need for a single instruction to support multiple addressing modes. 7. A small number of general-purpose registers. This is a direct result of having instructions which can operate directly on memory, and of the limited chip space not dedicated to instruction decoding, execution, and microcode storage. 8. Several special-purpose registers. Many CISC designs set aside special registers for the stack pointer, interrupt handling, and so on. This can simplify the hardware design somewhat, at the expense of making the instruction set more complex. 9. A "condition code" register which is set as a side effect of most instructions. This register reflects whether the result of the last operation is less than, equal to, or greater than zero, and records whether certain error conditions occur.
Advantages of CISC processors: 1. Microprogramming is as easy as assembly language to implement, and much less expensive than hardwiring a control unit. 2. The ease of microcoding new instructions allowed designers to make CISC machines upwardly compatible: a new computer could run the same programs as earlier computers, because the new computer would contain a superset of the instructions of the earlier computers. 3. As each instruction became more capable, fewer instructions could be used to implement a given task. This made more efficient use of the relatively slow main memory. 4. Because microprogram instruction sets can be written to match the constructs of high-level languages, the compiler does not have to be as complicated. Disadvantages of CISC processors: 1. So that as many instructions as possible could be stored in memory with the least possible wasted space, individual instructions could be of almost any length. This means that different instructions take different amounts of clock time to execute, slowing down the overall performance of the machine.
2. CISC instructions typically set the condition codes as a side effect of the instruction. Not only does setting the condition codes take time, but programmers also have to remember to examine the condition-code bits before a subsequent instruction changes them. RISC (Reduced Instruction Set Computer): a type of microprocessor architecture that utilizes a small, highly optimized set of instructions, rather than the more specialized set of instructions often found in other types of architectures. Some characteristics of most RISC processors are: 1. One-cycle execution time: RISC processors typically have a CPI of one clock cycle. This is due to the optimization of each instruction on the CPU. 2. Pipelining: a technique that allows the simultaneous execution of parts, or stages, of instructions, so that instructions are processed more efficiently. 3. A large number of registers: the RISC design philosophy generally incorporates a larger number of registers to reduce the amount of interaction with memory. 4. A reduced instruction set of less complex, simple instructions. 5. A hardwired control unit and machine instructions. 6. Many symmetric registers organized into a register file.
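The benefit of pipelining can be sketched with an idealized cycle count, assuming a k-stage pipeline with no stalls (the figures below are illustrative):

```python
# For n instructions on a k-stage pipeline with no stalls:
# unpipelined, each instruction takes k cycles; pipelined, the first
# instruction takes k cycles and each later one completes one cycle apart.

def cycles_unpipelined(n: int, k: int) -> int:
    return n * k

def cycles_pipelined(n: int, k: int) -> int:
    return k + (n - 1)

n, k = 100, 5
print(cycles_unpipelined(n, k))  # 500
print(cycles_pipelined(n, k))    # 104 -> close to one instruction per cycle
```

For long instruction streams the pipelined cycle count approaches n, which is why RISC designs can approach a CPI of one.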
These operations, as well as other arithmetic and logic operations, are implemented in the arithmetic and logic unit (ALU) of the processor. Addition and Subtraction of Signed Numbers: The truth table for the sum and carry-out functions for adding equally weighted bits x_i and y_i of two numbers X and Y is shown below. The logic expression for the sum bit s_i can be implemented with a 3-input XOR gate, and is part of the logic required for a single stage of binary addition.
This is called a full adder (FA) and is shown below. A cascaded connection of n full-adder blocks can be used to add two n-bit numbers, as shown below. Since the carries must propagate, or ripple, through this cascade, the configuration is also called a ripple-carry adder. The carry-out bit c_n is not part of the answer. A circuit to detect overflow can be added to the n-bit adder.
It can also be shown that overflow occurs when the carry bits c_n and c_{n-1} are different. Therefore, a simpler circuit for detecting overflow can be obtained by implementing the expression c_n XOR c_{n-1} with an XOR gate. Forming the 2's complement of a negative number is done in exactly the same manner as for a positive number. An XOR gate can be added to the above circuit to detect the overflow condition c_n XOR c_{n-1}.
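A minimal sketch of the ripple-carry adder and overflow check described above (bit lists are LSB-first; the 4-bit operands are illustrative):

```python
# Full adder: s_i = x_i XOR y_i XOR c_i; carry-out is the majority function.
def full_adder(x: int, y: int, c: int):
    s = x ^ y ^ c
    c_out = (x & y) | (x & c) | (y & c)
    return s, c_out

# Cascade n full adders; the carries ripple through the stages.
# Overflow for two's-complement operands is c_n XOR c_{n-1}.
def ripple_add(x_bits, y_bits):
    carry = 0
    sum_bits, carries = [], []
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        sum_bits.append(s)
        carries.append(carry)
    overflow = carries[-1] ^ carries[-2]
    return sum_bits, overflow

# 4-bit example: 5 + 3 = 8, which does not fit in 4-bit two's complement.
s, v = ripple_add([1, 0, 1, 0], [1, 1, 0, 0])   # 5 and 3, LSB first
print(s, v)  # [0, 0, 0, 1] 1 -> bit pattern 1000, with overflow flagged
```

Note that the loop mirrors the hardware cascade: each stage must wait for the carry from the previous one, which is exactly why the ripple-carry delay grows with n.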
The final carry-out, c_n, is available after 2n gate delays. Two approaches can be taken to reduce delay in adders. One is to use the fastest possible electronic technology. The other is to use a logic-gate network called a carry-lookahead adder.
The generate function G_i means that a carry-out is produced regardless of the input carry; this occurs when both x_i and y_i are 1. The propagate function P_i means that an input carry will produce an output carry when either x_i is 1 or y_i is 1. All G_i and P_i functions can be formed independently and in parallel in one logic-gate delay after the X and Y operands are applied to the inputs of an n-bit adder.
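A sketch of the generate/propagate idea (the loop below evaluates the carry recurrence sequentially for clarity; in hardware, expanding the recurrence lets all carries be formed in parallel from the G_i and P_i terms):

```python
# Generate: G_i = x_i AND y_i (a carry is produced regardless of carry-in).
# Propagate: P_i = x_i OR y_i (a carry-in is passed through the stage).
# Carry recurrence: c_{i+1} = G_i OR (P_i AND c_i).

def lookahead_carries(x_bits, y_bits, c0: int = 0):
    """Return carries c_1..c_n for n-bit operands given LSB first."""
    g = [x & y for x, y in zip(x_bits, y_bits)]
    p = [x | y for x, y in zip(x_bits, y_bits)]
    carries, c = [], c0
    for gi, pi in zip(g, p):
        c = gi | (pi & c)
        carries.append(c)
    return carries

# 4-bit example: operands 5 (0101) and 3 (0011), LSB first.
print(lookahead_carries([1, 0, 1, 0], [1, 1, 0, 0]))  # [1, 1, 1, 0]
```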
Architecture, in a computer system as anywhere else, refers to the externally visible attributes of the system. Externally visible attributes, here in computer science, mean the way a system is visible to the logic of programs, not to the human eye! Organization of a computer system is the practical implementation that realizes the architectural specification of the system.
How it came along: The history of computer systems, in the strict sense of the name, dates back as far as the basic human need for computation. We, however, are concerned only with the architecture and organisation of electronic computer systems, as 'the computing systems' before these had very vague (or at least different!) notions of both. The first of these electronic machines, the ENIAC, was designed by John Mauchly and J. Presper Eckert. This, although a great achievement altogether, was not of much importance by the standards of architecture and organisation.
The programming of this giant machine required manual changes to the circuitry by expert individuals, who rewired connections and set lots of switches; it sure was a tedious task. It worked on the decimal system, much as we humans do in our normal lives. A successor, the EDVAC, was proposed by John von Neumann and others in 1945. It used the stored-program model, wherein instructions are stored in memory along with the data they process, thereby removing the need to change the hardware structure in order to change the program.
This basic structure has since served as the core idea of the computer system, and the trend continues even today with few changes in the design. This architecture, however, is better known through its implementation in the IAS computer, as von Neumann later shifted to that project.