InOrder
The InOrder CPU model was designed to provide a generic framework for simulating in-order pipelines with an arbitrary ISA and an arbitrary pipeline description. The model was originally conceived by closely mirroring the O3CPU model, providing a simulation framework that operates at the "Tick" granularity. We then abstracted the individual stages of the O3 model into [[InOrder Pipeline Stages | generic pipeline stages]] that the InOrder CPU leverages to create a user-defined number of pipeline stages. Additionally, we abstracted each component that a CPU might need to access (ALU, Branch Predictor, etc.) into a "resource" that must be requested by each instruction according to the [[InOrder Resource-Request Model | resource-request]] model we implemented. This allows researchers to model custom pipelines without the cost of designing a complete CPU from scratch.
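The resource-request idea above can be illustrated with a minimal sketch. This is not gem5 code and the class and resource names are purely illustrative: each resource has a fixed amount of bandwidth per cycle, instructions request a unit of that bandwidth through a shared pool, and a denied request means the instruction must stall and retry after the pool resets for the next cycle.

```python
# Illustrative sketch of a resource-request model (not actual gem5 code;
# all names here are hypothetical).

class Resource:
    """A pipeline component (ALU, branch predictor, ...) with per-cycle bandwidth."""

    def __init__(self, name, bandwidth):
        self.name = name
        self.bandwidth = bandwidth  # grants available per cycle
        self.used = 0

    def request(self):
        # Grant if the resource still has bandwidth left this cycle.
        if self.used < self.bandwidth:
            self.used += 1
            return True
        return False  # requester must stall and retry next cycle

    def reset(self):
        self.used = 0  # called at the start of every cycle


class ResourcePool:
    """Central pool that instructions direct all resource requests to."""

    def __init__(self, resources):
        self.resources = {r.name: r for r in resources}

    def request(self, name):
        return self.resources[name].request()

    def new_cycle(self):
        for r in self.resources.values():
            r.reset()


pool = ResourcePool([Resource("alu", 1), Resource("branch_pred", 1)])

# Two instructions compete for the single ALU in the same cycle:
granted_a = pool.request("alu")        # first request is granted
granted_b = pool.request("alu")        # second is denied -> stall
pool.new_cycle()
granted_b_retry = pool.request("alu")  # granted after bandwidth resets
```

Decoupling instructions from fixed stages this way is what lets a user-defined pipeline description decide which resources are requested at which stage.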
  
 
For more information, please see the following documentation on the InOrder model, browse the code, or ask on the m5-users (standard usage) or m5-dev@m5sim.org (developer) mailing lists:
** [[InOrder Pipeline Stages | Pipeline Stages]]
** [[InOrder Resource-Request Model | Resource-Request Modeling]]
** [[InOrder Resource Pool | Resource Pool]]
** [[InOrder Instruction Schedules | Instruction Schedules]]
** [[InOrder Pipeline Description | Pipeline Description]]
** [[InOrder ToDo List]]
  
 
----

Revision as of 18:33, 12 January 2010

** Soumyaroop Roy has kindly provided a "test-status-page" for the M5 InOrder model as part of the work he has been doing.

(Note: the latest versions of the InOrder model can be found in the m5-dev repository. Please check there for updates and additional information.)

** [[InOrder CPU FAQ]] - As more questions are received, we'll post answers to frequently asked questions here.