Delay Slot

In computer architecture, a delay slot is an instruction slot that is executed without the effects of a preceding instruction.[1] The most common form is a single arbitrary instruction located immediately after a branch instruction on a RISC or DSP architecture; this instruction executes even if the preceding branch is taken. The instruction therefore executes out of order compared to its position in the original assembly language code.

Modern processor designs generally do not use delay slots, and instead perform ever more sophisticated forms of branch prediction. In these systems, the CPU immediately moves on to what it believes will be the correct side of the branch, thereby eliminating the need for the code to specify some unrelated filler instruction, which may not always be easy to find at compile time. If the prediction is wrong and the other side of the branch has to be executed instead, a lengthy delay can be introduced. This happens rarely enough that the speed-up gained by eliminating the delay slot easily outweighs the cost of the occasional wrong prediction.

A central processing unit generally performs instructions from the machine code using a four-step process: the instruction is first read from memory, then decoded to determine what needs to be done, those actions are then executed, and finally any results are written back to memory. In early designs, each of these stages was performed in series, so that instructions took some multiple of the machine's clock cycle to complete. For instance, in the Zilog Z80, the minimum number of clocks needed to complete an instruction was four, but could be as many as 23 clocks for some (rare) instructions.[2]

At any given stage of the instruction's processing, only one part of the chip is involved. For instance, during the execution stage, typically only the arithmetic logic unit (ALU) is active, while other units, like those that interact with main memory or decode the instruction, are idle. One way to improve the overall performance of a computer is through the use of an instruction pipeline. This adds some additional circuitry to hold the intermediate states of the instruction as it flows through the units. While this does not improve the cycle timing of any single instruction, the idea is to allow a second instruction to use the other CPU sub-units when the previous instruction has moved on.[3]

For instance, while one instruction is using the ALU, the next instruction from the program can be in the decoder, and a third can be fetched from memory. In this assembly line type of arrangement, the number of instructions being processed at any one time can be increased by up to the number of pipeline stages. In the Z80, for example, a four-stage pipeline could in principle improve overall throughput by four times; however, due to the complexity of its instruction timing, such a pipeline would not be easy to implement. The much simpler instruction set architecture (ISA) of the MOS 6502 allowed a two-stage pipeline to be included, which gave it performance that was about double that of the Z80 at any given clock speed.[4]

A major issue with the implementation of pipelines in early systems was that instructions had widely varying cycle counts. For instance, the instruction to add two values would often be offered in multiple versions, or opcodes, which varied in where they read their data. One version of add might take the value found in one processor register and add it to the value in another, another version might add a value found in memory to a register, while yet another might add the value in one memory location to another memory location. Each of these instructions takes a different number of bytes to represent it in memory, meaning they take different amounts of time to fetch, may require multiple trips through the memory interface to gather values, and so on. This greatly complicates the pipeline logic. One of the goals of the RISC chip design concept was to remove these variants so that the pipeline logic was simplified, leading to the classic RISC pipeline, which completes one instruction every cycle.

However, there is one problem that comes up in pipeline systems that can slow performance. This occurs when the next instruction may change depending on the results of the last. In most systems, this happens when a branch occurs. For instance, consider the following pseudo-code:

 top:
   read a number from memory and store it in a register
   read another number and store it in a different register
   add the two numbers into a third register
   write the result to memory
   read a number from memory and store it in another register
   ...

In this case, the program is linear and can be easily pipelined. As soon as the first read instruction has been fetched and is being decoded, the second read instruction can be fetched from memory. When the first moves on to execute, the add is being fetched while the second read is being decoded, and so forth. Although the first read still takes the same number of cycles to complete, the second finishes only one cycle behind it, and the CPU can then add the two values immediately. In a non-pipelined processor the first four instructions take 16 cycles to complete; in a pipelined one they finish in only seven, since after the first instruction completes each following instruction finishes one cycle later.
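
The difference can be shown with a timing chart. The following is an idealized sketch, assuming a four-stage pipeline of fetch (F), decode (D), execute (E), and write-back (W), one stage per cycle and no stalls:

 cycle     1   2   3   4   5   6   7
 read #1   F   D   E   W
 read #2       F   D   E   W
 add            F   D   E   W
 write               F   D   E   W

The final write completes at the end of cycle 7, whereas a non-pipelined machine needs 16 cycles for the same four instructions.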

Now consider what occurs when a branch is added:

 top:
   read a number from memory and store it in a register
   read another number and store it in a different register
   add the two numbers into a third register
   if the result in the 3rd register is greater than 1000, then go back to top:
   (if it is not) write the result to memory
   read a number from memory and store it in another register
   ...

In this example the outcome of the comparison on line four will cause the "next instruction" to change; sometimes it will be the following write to memory, and sometimes it will be the read from memory at the top. The processor's pipeline will normally have already read the next instruction, the write, by the time the ALU has calculated which path it will take. This is known as a branch hazard. If it has to return to the top, the write instruction has to be discarded and the read instruction read from memory instead. That takes one full instruction cycle, at a minimum, and results in the pipeline being empty for at least one instruction's time. This is known as a "pipeline stall" or "bubble", and, depending on the number of branches in the code, can have a noticeable impact on overall performance.

One strategy for dealing with this problem is to use a delay slot, which refers to the instruction slot after any instruction that needs more time to complete. In the examples above, the instruction that requires more time is the branch; this is by far the most common case, and such slots are usually referred to as branch delay slots.

In early implementations, the slot following the branch would be filled with a no-operation, or NOP, simply to pad out the pipeline and keep the timing correct: by the time the NOP had been loaded from memory, the branch was complete and the program counter could be updated with the correct value. This simple solution wastes the available processing time. More advanced solutions would instead try to identify another instruction, typically one nearby in the code, to place in the delay slot so that useful work is accomplished.
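
As a concrete sketch of the naive approach, the loop above might be written in MIPS-style assembly with a NOP occupying the branch delay slot. The register names, addresses, and exact instruction choices here are illustrative only, not taken from any particular compiler's output:

 top:  lw    t0, 0(a0)       # read a number from memory into t0
       lw    t1, 4(a0)       # read another number into t1
       nop                   # MIPS I load delay slot: t1 is not yet usable
       add   t2, t0, t1      # add the two numbers into t2
       slti  t3, t2, 1001    # t3 = 1 if the sum is 1000 or less
       beq   t3, zero, top   # if the sum is greater than 1000, go back to top
       nop                   # branch delay slot wasted on a NOP
       sw    t2, 8(a0)       # (branch not taken) write the result to memory
       lw    t4, 12(a0)      # read a number into another register
       ...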

In the examples above, the read instruction at the end is completely independent: it does not rely on any other information and can be performed at any time. This makes it suitable for placement in the branch delay slot. Normally this would be handled automatically by the assembler or compiler, which would re-order the instructions:

 read a number from memory and store it in a register
 read another number and store it in a different register
 add the two numbers into a third register
 if the result in the 3rd register is greater than 1000, then go back to the top
 read a number from memory and store it in another register
 (if it is not) write the result to memory
 ...

Now when the branch is executing, the processor goes ahead and performs the next instruction, the one in the delay slot. By the time that instruction has been read into the processor and begins to decode, the result of the comparison is ready and the processor can decide which instruction to read next: the read at the top or the write at the bottom. This avoids any wasted time and keeps the pipeline full at all times.
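
In the same illustrative MIPS-style assembly used earlier (again a sketch with hypothetical registers and addresses), this reordering corresponds to moving the independent load into the branch delay slot:

 top:  lw    t0, 0(a0)       # read a number from memory into t0
       lw    t1, 4(a0)       # read another number into t1
       nop                   # MIPS I load delay slot: t1 is not yet usable
       add   t2, t0, t1      # add the two numbers into t2
       slti  t3, t2, 1001    # t3 = 1 if the sum is 1000 or less
       beq   t3, zero, top   # if the sum is greater than 1000, go back to top
       lw    t4, 12(a0)      # branch delay slot: independent load, useful work either way
       sw    t2, 8(a0)       # (branch not taken) write the result to memory
       ...

Whichever way the branch goes, the load into t4 executes; only the store is skipped when the loop repeats, matching the reordered pseudo-code above.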

Finding an instruction to fill the slot can be difficult. The compiler generally has a limited "window" to examine and may not find a suitable instruction in that range of code. Moreover, the instruction cannot rely on any of the data involved in the branch; if an add instruction takes a previous calculation as one of its inputs, that input cannot be part of the code in a branch that might be taken. Deciding whether this is the case can be very complex in the presence of register renaming, in which the processor may place data in registers other than those the code specifies, without the compiler being aware of it.

Another side effect is that special handling is needed when managing breakpoints on instructions, as well as when single-stepping through a branch delay slot while debugging. An interrupt cannot occur during a branch delay slot and is deferred until after it.[5][6] Placing a branch instruction in the branch delay slot is prohibited or deprecated.[7][8][9]

The ideal number of branch delay slots in a particular pipeline implementation is dictated by the number of pipeline stages, the presence of register forwarding, the pipeline stage at which the branch conditions are computed, whether or not a branch target buffer (BTB) is used, and many other factors. Software compatibility requirements dictate that an architecture may not change the number of delay slots from one generation to the next, which inevitably requires newer hardware implementations to contain extra hardware to ensure that the architectural behaviour is followed even when it is no longer relevant to the newer pipeline.

Branch delay slots are found mainly in DSP architectures and older RISC architectures. MIPS, PA-RISC (delayed or non-delayed branch can be specified),[10] ETRAX CRIS, SuperH (unconditional branch instructions have one delay slot),[11] Am29000,[12] Intel i860 (unconditional branch instructions have one delay slot),[13] MC88000 (delayed or non-delayed branch can be specified),[14] and SPARC are RISC architectures that each have a single branch delay slot; PowerPC, ARM, Alpha, V850, and RISC-V do not have any. DSP architectures that each have a single branch delay slot include μPD77230[15] and the VS DSP. The SHARC DSP and MIPS-X use a double branch delay slot;[16] such a processor will execute a pair of instructions following a branch instruction before the branch takes effect. Both TMS320C3x[17] and TMS320C4x[8] use a triple branch delay slot. The TMS320C4x has both non-delayed and delayed branches.[8]

The following example shows delayed branches in assembly language for the SHARC DSP, including a pair of delay-slot instructions after the RTS instruction. Registers R0 through R9 are cleared to zero in order by number (the register cleared after R6 is R7, not R9). No instruction executes more than once.

     R0 = 0;
     CALL fn (DB);      /* call a function, below at label "fn" */
     R1 = 0;            /* first delay slot */
     R2 = 0;            /* second delay slot */
     /***** discontinuity here (the CALL takes effect) *****/
     R6 = 0;            /* the CALL/RTS comes back here, not at "R1 = 0" */
     JUMP end (DB);
     R7 = 0;            /* first delay slot */
     R8 = 0;            /* second delay slot */
     /***** discontinuity here (the JUMP takes effect) *****/
     /* next 4 instructions are called from above, as function "fn" */
fn:  R3 = 0;
     RTS (DB);          /* return to caller, past the caller's delay slots */
     R4 = 0;            /* first delay slot */
     R5 = 0;            /* second delay slot */
     /***** discontinuity here (the RTS takes effect) *****/
end: R9 = 0;

A load delay slot is an instruction which executes immediately after a load (of a register from memory) but does not see, and need not wait for, the result of the load. Load delay slots are very uncommon because load delays are highly unpredictable on modern hardware. A load may be satisfied from RAM or from a cache, and may be slowed by resource contention. Load delays were seen on very early RISC processor designs. The MIPS I ISA (implemented in the R2000 and R3000 microprocessors) suffers from this problem.

The following example is MIPS I assembly code, showing both a load delay slot and a branch delay slot.

   lw   v0,4(v1)   # load word from address v1+4 into v0
   nop             # wasted load delay slot
   jr   v0         # jump to the address specified by v0
   nop             # wasted branch delay slot
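
When the surrounding code contains independent work, both slots can often be put to use instead of being wasted. The following variant is a sketch only; the filler instructions are hypothetical stand-ins for whatever independent work the compiler can find:

   lw    v0,4(v1)    # load word from address v1+4 into v0
   addu  t0,t1,t2    # fills the load delay slot; must not use v0
   jr    v0          # jump to the address specified by v0
   addiu sp,sp,-16   # fills the branch delay slot; executes before the jump takes effect
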
References

  1. ^ Patterson, David A.; Hennessy, John L. (1990). Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers. p. 275. ISBN 1-55860-069-8.
  2. ^ "MSX Assembly Page".
  3. ^ "CMSC 411 Lecture 19, Pipelining Data Forwarding". University of Maryland Baltimore County Computer Science and Electrical Engineering Department. Retrieved 2020-01-22.
  4. ^ Cox, Russ (3 January 2011). "The MOS 6502 and the Best Layout Guy in the World".
  5. ^ "μPD77230 Advanced Signal Processor" (PDF). pp. 38(3-39), 70(3-41). Retrieved 2023-11-17.
  6. ^ "TMS320C4x User's Guide" (PDF). p. 75(3-15). Retrieved 2023-12-02.
  7. ^ "μPD77230 Advanced Signal Processor" (PDF). p. 191(4-76). Retrieved 2023-10-28.
  8. ^ a b c "TMS320C4x User's Guide" (PDF). p. 171(7-9). Retrieved 2023-10-29.
  9. ^ "MC88100 RISC Microprocessor User's Manual" (PDF). p. 88(3-33). Retrieved 2023-12-30.
  10. ^ DeRosa, John A.; Levy, Henry M. "An Evaluation of Branch Architectures". p. 1. Retrieved 2024-01-27.
  11. ^ "SH7020 and SH7021 Hardware ManualSuperH™ RISC engine". p. 42,70. Retrieved 2023-12-17.
  12. ^ "Evaluating and Programming the 29K RISC Family Third Edition – DRAFT" (PDF). p. 54. Retrieved 2023-12-20.
  13. ^ "i860™ 64-bit Microprocessor Programmer's Reference Manual" (PDF). p. 70(5-11). Retrieved 2023-12-21.
  14. ^ "MC88100 RISC Microprocessor User's Manual" (PDF). p. 81(3-26). Retrieved 2023-12-21.
  15. ^ "μPD77230 Advanced Signal Processor" (PDF). p. 191(4-76). Retrieved 2023-11-05.
  16. ^ "MIPS-X Instruction Set and Programmer's Manual" (PDF). p. 18. Retrieved 2023-12-03.
  17. ^ "The TMS320C30 Floating-Point Digital Signal Processor" (PDF). ti.com. p. 14. Retrieved 2023-11-04.