== High level components of Ruby ==
Ruby implements a detailed simulation model for the memory subsystem. It models inclusive/exclusive cache hierarchies with various [[Replacement_policy|replacement policies]], coherence protocol implementations, interconnection networks, DMA and memory controllers, and various sequencers that initiate memory requests and handle responses. The models are modular, flexible, and highly configurable. Three key aspects of these models are:
# '''Separation of concerns''' -- for example, the coherence protocol specifications are separate from the replacement policies and cache index mapping, and the network topology is specified separately from its implementation.
# '''Rich configurability''' -- almost any aspect affecting the memory hierarchy functionality and timing can be controlled.
# '''Rapid prototyping''' -- a high-level specification language, SLICC, is used to specify the functionality of the various controllers.
  
The following picture, taken from the GEMS tutorial presented at ISCA 2005, shows a high-level view of the main components in Ruby.
[[File:ruby_overview.jpg|600px|center]]
=== SLICC + Coherence protocols ===
'''''[[SLICC]]''''' stands for ''Specification Language for Implementing Cache Coherence''. It is a domain-specific language used for specifying cache coherence protocols. In essence, a cache coherence protocol behaves like a state machine, and SLICC is used to specify the behavior of that state machine. Since the aim is to model the hardware as closely as possible, SLICC imposes constraints on the state machines that can be specified; for example, it can restrict the number of transitions that can take place in a single cycle. Apart from protocol specification, SLICC also ties together some of the components in the memory model. As can be seen in the following picture, the state machine takes its input from the input ports of the interconnection network and queues its output at the output ports of the network, thus tying the cache and memory controllers together with the interconnection network itself.
  
[[File:slicc_overview.jpg|700px|center]]
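To give a flavor of what a protocol specification expresses, the sketch below shows the essence of a controller: a table mapping (state, event) pairs to a list of actions and a next state. This is illustrative Python, ''not'' SLICC syntax; the state and event names are loosely modeled on the simple MI protocol walked through at the end of this page. SLICC compiles ''.sm'' files encoding exactly this kind of table (plus resource and transition-per-cycle checks) into C++.

<syntaxhighlight lang="python">
# Illustrative Python sketch (not SLICC syntax): a coherence controller is
# essentially a table mapping (state, event) -> (actions, next state).
# States and events are loosely modeled on a simple MI protocol.
TRANSITIONS = {
    # (state, event):          ([actions to run],                        next state)
    ("I",  "Load"):            (["issue_GETX_to_directory"],             "IM"),
    ("I",  "Store"):           (["issue_GETX_to_directory"],             "IM"),
    ("IM", "Data"):            (["write_data_to_cache", "hit_callback"], "M"),
    ("M",  "Load"):            (["hit_callback"],                        "M"),
    ("M",  "Store"):           (["hit_callback"],                        "M"),
    ("M",  "Fwd_GETX"):        (["send_data_to_requestor"],              "I"),
    ("M",  "Replacement"):     (["issue_PUTX_writeback"],                "MI"),
    ("MI", "Writeback_Ack"):   ([],                                      "I"),
}

def step(state, event):
    """Apply one transition; an unexpected (state, event) pair is a protocol bug."""
    try:
        actions, next_state = TRANSITIONS[(state, event)]
    except KeyError:
        raise RuntimeError(f"no transition for event {event} in state {state}")
    for a in actions:
        print(f"  action: {a}")
    return next_state

if __name__ == "__main__":
    s = "I"
    for ev in ["Store", "Data", "Load", "Replacement", "Writeback_Ack"]:
        print(f"{s} --{ev}-->")
        s = step(s, ev)
    print(f"final state: {s}")
</syntaxhighlight>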
  
The following cache coherence protocols are supported (the protocol is selected when gem5 is compiled, via the SCons ''PROTOCOL'' build option):
 
# '''[[MI_example]]''': example protocol, 1-level cache (a detailed walkthrough appears at the end of this page).
# '''[[MESI_Two_Level]]''': single chip, 2-level caches, strictly-inclusive hierarchy.
# '''[[MOESI_CMP_directory]]''': multiple chips, 2-level caches, non-inclusive (neither strictly inclusive nor exclusive) hierarchy (see the notes at the end of this page).
# '''[[MOESI_CMP_token]]''': 2-level caches, token-based coherence.
# '''[[MOESI_hammer]]''': single chip, 2-level private caches, strictly-exclusive hierarchy.
# '''[[Garnet_standalone]]''': protocol used to run the Garnet network in a standalone manner.
# '''[[MESI Three Level]]''': 3-level caches, strictly-inclusive hierarchy.
  
Commonly used notations and data structures in the protocols have been described in detail [[Cache Coherence Protocols|here]].
  
=== Protocol independent memory components ===
# '''Sequencer'''
# '''Cache Memory'''
# '''Replacement Policies'''
# '''Memory Controller'''
In general, the cache-coherence-protocol-independent components comprise the Sequencer, the Cache Memory structure, the [[Replacement_policy|replacement policies]] and the Memory Controller. The Sequencer class is responsible for feeding the memory subsystem (including the caches and the off-chip memory) with load/store/atomic memory requests from the processor. When the memory subsystem completes a request, it also sends the response back to the processor via the Sequencer. There is one Sequencer for each hardware thread (or core) simulated in the system. The Cache Memory models a set-associative cache structure with parameterizable size, associativity and replacement policy; the L1, L2 and L3 caches in the system, where present, are instances of Cache Memory. The replacement policies are kept modular and separate from the Cache Memory, so that different instances of Cache Memory can each use a replacement policy of their choice. The Memory Controller is responsible for simulating and servicing any request that misses in all the on-chip caches of the simulated system. The Memory Controller is currently simple, but it faithfully models DRAM bank contention and DRAM refresh, and it also models a close-page policy for the DRAM row buffer.
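As an illustration of how the Cache Memory stays modular with respect to the replacement policies, here is a small Python sketch of a set-associative structure with a pluggable policy. The class and parameter names are invented for illustration; the real implementations live under src/mem/ruby (see the Directory Structure section below).

<syntaxhighlight lang="python">
# Sketch of a CacheMemory-like set-associative structure with a pluggable
# replacement policy (names invented for illustration).
class LRUPolicy:
    """Tracks recency per set; the victim is the least recently touched way."""
    def __init__(self, assoc):
        self.stamps = [0] * assoc
        self.now = 0
    def touch(self, way):
        self.now += 1
        self.stamps[way] = self.now
    def victim(self):
        return min(range(len(self.stamps)), key=lambda w: self.stamps[w])

class CacheSet:
    def __init__(self, assoc, policy_cls):
        self.tags = [None] * assoc
        self.policy = policy_cls(assoc)
    def lookup(self, tag):
        for way, t in enumerate(self.tags):
            if t == tag:
                self.policy.touch(way)   # hit: update recency state
                return True
        way = self.policy.victim()       # miss: pick a victim per policy
        self.tags[way] = tag             # fill the line
        self.policy.touch(way)
        return False

class Cache:
    def __init__(self, num_sets, assoc, policy_cls=LRUPolicy):
        self.sets = [CacheSet(assoc, policy_cls) for _ in range(num_sets)]
        self.num_sets = num_sets
    def access(self, line_addr):
        return self.sets[line_addr % self.num_sets].lookup(line_addr // self.num_sets)

cache = Cache(num_sets=2, assoc=2)
for addr in [0, 2, 0, 4, 2]:   # prints: miss, miss, hit, miss (4 evicts 2), miss
    print(addr, "hit" if cache.access(addr) else "miss")
</syntaxhighlight>

Because the policy object is injected into each set, two Cache instances can use different policies without either one knowing the other's details, which mirrors the separation described above.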
  
'''''Each component is described in detail [[Coherence-Protocol-Independent Memory Components|here]].'''''
=== Interconnection Network ===
  
The interconnection network connects the various components of the memory hierarchy (caches, memory and DMA controllers) together.
  
[[File:Interconnection_network.jpg|600px|center]]
  
The key components of an interconnection network are:
# '''Topology'''
# '''Routing'''
# '''Flow Control'''
# '''Router Microarchitecture'''
  
'''''More details about the network model implementation are described [[Interconnection Network|here]].'''''
  
The network topology is specified with Python files that describe the routers, the links between them, and the link latencies and bandwidths; the routing tables are programmed by shortest-path graph traversals over the specified topology. Two network implementations are available: a simple model (''SimpleNetwork'') and [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4919636 Garnet], a cycle-accurate, pipelined network model that builds and simulates the specified topology, modeling the router pipeline and the movement of flits across the network subject to the routing algorithm, latency and bandwidth constraints. The [http://www.princeton.edu/~peh/orion.html Orion] power model keeps track of router and link activity in the network, and calculates both static router power and dynamic router and link power as flits move through the network.
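The idea of deriving routing tables from the topology by shortest-path traversal can be sketched in a few lines of plain Python. The router IDs and link latencies below are made up for illustration; the real topology classes are the Python files mentioned above.

<syntaxhighlight lang="python">
# Sketch: derive per-router routing tables from a topology description by
# shortest-path search (illustrative; not gem5's actual implementation).
import heapq

# A 2x2 mesh described as weighted links: (router, router, link latency).
LINKS = [(0, 1, 1), (2, 3, 1), (0, 2, 1), (1, 3, 1)]

def neighbors(links):
    adj = {}
    for a, b, w in links:
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    return adj

def routing_table(src, adj):
    """Dijkstra from src; returns {dest: first hop on a shortest path}."""
    dist, first_hop, heap = {src: 0}, {}, [(0, src, None)]
    while heap:
        d, node, hop = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        if hop is not None:
            first_hop.setdefault(node, hop)
        for nxt, w in adj[node]:
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt, hop if hop is not None else nxt))
    return first_hop

adj = neighbors(LINKS)
for r in sorted(adj):
    print(f"router {r}: {routing_table(r, adj)}")
</syntaxhighlight>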
Alternatively, the interconnection network can be replaced with the external simulator [http://www.atc.unican.es/topaz/ TOPAZ]. This simulator is ready to run within gem5 and adds a significant number of [https://sites.google.com/site/atcgalerna/home-1/publications/files/NOCS-2012_Topaz.pdf?attredirects=0 features] over the original Ruby network simulator, including advanced router microarchitectures, new topologies, precision/performance-adjustable router models, and mechanisms to speed up network simulation. The presentation of the tool (and the reason why it is not included in the gem5 repositories) is [http://thread.gmane.org/gmane.comp.emulators.m5.users/9651 here].
  
== Life of a memory request in Ruby ==
In this section we provide a high-level overview of how a memory request is serviced by Ruby as a whole and which components in Ruby it passes through. For the detailed operation of each component, refer to the previous sections describing the components in isolation. A short code sketch after the list illustrates the overall flow.
  
# A memory request from a core or hardware context of gem5 enters the jurisdiction of Ruby through the '''''RubyPort::recvTiming''''' interface (in src/mem/ruby/system/RubyPort.hh/cc). The number of RubyPort instantiations in the simulated system equals the number of hardware thread contexts or cores (in the case of ''non-multithreaded'' cores). A port on the side of each core is tied to a corresponding RubyPort.
# The memory request arrives as a gem5 packet, and the RubyPort is responsible for converting it to a RubyRequest object that is understood by the various components of Ruby. It also determines whether the request is a PIO access and, if so, steers the packet to the correct PIO port. Finally, once it has generated the corresponding RubyRequest object and ascertained that the request is a ''normal'' memory request (not a PIO access), it passes the request to the '''''Sequencer::makeRequest''''' interface of the Sequencer object attached to the port (the variable ''ruby_port'' holds a pointer to it). Note that the Sequencer class is itself derived from the RubyPort class.
# As mentioned in the section describing Ruby's Sequencer class, there are as many Sequencer objects in a simulated system as there are hardware thread contexts (which is also the number of RubyPort objects in the system), with a one-to-one mapping between Sequencer objects and hardware thread contexts. Once a memory request arrives at '''''Sequencer::makeRequest''''', the Sequencer performs various accounting and resource-allocation steps for the request and finally pushes it into Ruby's coherent cache hierarchy, while accounting for the delay in servicing it. The request is pushed into the cache hierarchy by enqueueing it in the ''mandatory queue'' after accounting for the L1 cache access latency. The ''mandatory queue'' (variable name ''m_mandatory_q_ptr'') effectively acts as the interface between the Sequencer and the SLICC-generated cache coherence code.
# The L1 cache controller (generated by SLICC according to the coherence protocol specification) dequeues the request from the ''mandatory queue'', looks up the cache, makes the necessary coherence state transitions and/or pushes the request to the next level of the cache hierarchy as required. The different controllers and components of the SLICC-generated Ruby code communicate among themselves through instantiations of Ruby's ''MessageBuffer'' class (src/mem/ruby/buffers/MessageBuffer.cc/hh), which can act as ordered or unordered buffers or queues. The delays in servicing the different steps of a memory request are accounted for by scheduling the enqueue and dequeue operations accordingly. If the requested cache block is found in the L1 cache with the required coherence permissions, the request is satisfied and returned immediately. Otherwise the request is pushed to the next level of the cache hierarchy through a ''MessageBuffer''. A request can travel all the way to Ruby's Memory Controller (also called the Directory in many protocols). Once the request is satisfied, it is pushed back up the hierarchy through ''MessageBuffer''s.
# The ''MessageBuffer''s also act as the entry points for coherence messages into the modeled on-chip interconnect. The ''MessageBuffer''s are connected according to the specified interconnect topology, and the coherence messages travel through this on-chip interconnect accordingly.
# Once the requested cache block is available at the L1 cache with the desired coherence permissions, the L1 cache controller informs the corresponding Sequencer object by calling its '''''readCallback''''' or '''''writeCallback''''' method, depending on the type of the request. Note that by the time these methods are called, the latency of servicing the request has been implicitly accounted for.
# The Sequencer then clears the accounting information for the corresponding request and calls the '''''RubyPort::ruby_hit_callback''''' method. This ultimately returns the result of the request to the corresponding port of the core/hardware context in the gem5 frontend.
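The toy sketch below compresses the numbered steps above into code form. It is illustrative Python with invented class shapes; the real interfaces are the C++ methods named above (RubyPort::recvTiming, Sequencer::makeRequest and the callbacks), and a real controller is generated by SLICC.

<syntaxhighlight lang="python">
# Toy sketch of the request path described above (illustrative Python;
# the real code is C++: RubyPort, Sequencer, and SLICC-generated controllers).
from collections import deque

class L1Controller:
    """Stands in for a SLICC-generated cache controller."""
    def __init__(self):
        self.mandatory_q = deque()          # Sequencer -> controller interface
    def wakeup(self, sequencer):
        while self.mandatory_q:
            req = self.mandatory_q.popleft()
            # A real controller would look up the cache, transition coherence
            # states, and possibly miss to the directory through MessageBuffers.
            print(f"  L1: servicing {req}")
            sequencer.read_callback(req)    # step 6: inform the Sequencer

class Sequencer:
    def __init__(self, l1, hit_callback):
        self.l1, self.hit_callback = l1, hit_callback
        self.outstanding = set()
    def make_request(self, req):            # step 3: bookkeeping + enqueue
        self.outstanding.add(req)
        self.l1.mandatory_q.append(req)
    def read_callback(self, req):           # steps 6-7: clear state, notify port
        self.outstanding.discard(req)
        self.hit_callback(req)

class RubyPort:
    def __init__(self, sequencer):
        self.sequencer = sequencer
    def recv_timing(self, packet):          # steps 1-2: packet -> RubyRequest
        self.sequencer.make_request(("RubyRequest", packet))

l1 = L1Controller()
seq = Sequencer(l1, hit_callback=lambda r: print(f"  port: done {r}"))
port = RubyPort(seq)
port.recv_timing(0x1000)
l1.wakeup(seq)
</syntaxhighlight>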
  
== Directory Structure ==
* '''src/mem/'''
** '''protocols''': SLICC specifications for the coherence protocols
** '''slicc''': implementation of the SLICC parser and code generator
** '''ruby'''
*** '''common''': frequently used data structures, e.g. Address (with bit-manipulation methods), histogram, data block
*** '''filters''': various Bloom filters (stale code from GEMS)
*** '''network''': interconnect implementation, sample topology specifications, network power calculations, and the message buffers used for connecting controllers
*** '''profiler''': profiling for cache events and memory controller events
*** '''recorder''': cache warmup and access trace recording
*** '''slicc_interface''': message data structure, various mappings (e.g. address to directory node), utility functions (e.g. conversion between address & int, conversion of an address to a cache line address)
*** '''structures''': protocol-independent memory components – CacheMemory, DirectoryMemory
*** '''system''': glue components – Sequencer, RubyPort, RubySystem
== Example protocol: MI_example ==

MI_example is a simple cache coherence protocol that is used to illustrate protocol specification using SLICC.

=== Related files ===

MI_example-cache.sm (cache controller specification) <br>
MI_example-dir.sm (directory controller specification) <br>
MI_example-dma.sm (DMA controller specification) <br>
MI_example-msg.sm (message type specification) <br>
MI_example.slicc (container file) <br>

=== Cache hierarchy ===

This protocol assumes a 1-level cache hierarchy. The cache is private to each node, and the caches are kept coherent by a directory controller. Since the hierarchy is only 1-level, there is no inclusion/exclusion requirement.

=== Stable states and invariants ===

* '''M''': the cache block has been accessed (read/written) by this node; no other node holds a copy of the block.
* '''I''': the cache block at this node is invalid.

=== Cache controller ===

* Requests, responses, triggers:
** load, instruction fetch and store from the core;
** replacement from self;
** data from the directory controller;
** forwarded requests (interventions) from the directory controller;
** writeback acknowledgements from the directory controller;
** invalidations from the directory controller (on DMA activity).

* Main operation:
** On a '''load/instruction fetch/store''' request from the core:
*** it checks whether the corresponding block is present in the M state; if so, it returns a hit;
*** otherwise, if the block is in the I state, it initiates a GETX request to the directory controller.
** ''Note: this protocol does not differentiate between load and store requests.''
** On a '''replacement''' trigger from self:
*** it evicts the block and issues a writeback request to the directory controller;
*** it waits for an acknowledgement from the directory controller.
** ''Note: writebacks are acknowledged to prevent races.''
** On a '''forwarded request''' from the directory controller:
*** this means that the block was in the M state at this node when the request was generated by some other node;
*** it sends the block directly to the requesting node (cache-to-cache transfer);
*** it evicts the block from this node.
** '''Invalidations''' are handled similarly to replacements.

=== Directory controller ===

* Requests, responses, triggers:
** GETX from the cores, forwarded GETX to the cores;
** data from memory, data to the cores;
** writeback requests from the cores, writeback acknowledgements to the cores;
** DMA read and write requests from the DMA controllers.

* Main operation:
** The directory keeps track of which core has a block in the M state, and designates that core as the owner of the block.
** On a '''GETX''' request from a core:
*** if the block is not present, a memory fetch request is initiated;
*** if the block is already present, the request must have been generated by some other core:
**** in this case, a forwarded request is sent to the current owner;
**** ownership of the block is transferred to the requestor.
** On a '''writeback''' request from a core:
*** if the core is the owner, the data is written to memory and an acknowledgement is sent back to the core;
*** if the core is not the owner, a NACK is sent back:
**** this can happen in a race condition: the core evicted the block while a forwarded request from some other core was on the way, and the directory had already transferred ownership away from the core;
**** the evicting core holds the data until the forwarded request arrives.
** On '''DMA''' accesses (read/write):
*** an invalidation is sent to the owner node (if any); otherwise the data is fetched from memory;
*** this ensures that the most recent data is made available.

=== Other features ===

* MI protocols do not support LL/SC semantics.
* This protocol has no timeout mechanisms.
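The directory behavior above, including the writeback race, can be condensed into a small sketch (plain Python with invented names; the authoritative specification is MI_example-dir.sm):

<syntaxhighlight lang="python">
# Toy sketch of the MI_example directory logic described above.
class Directory:
    def __init__(self):
        self.owner = {}                   # block -> owning core (M state)

    def getx(self, block, requestor):
        if block not in self.owner:
            print(f"fetch {block:#x} from memory, send data to core {requestor}")
        else:
            # Another core owns the block: forward the request to it and
            # transfer ownership to the requestor.
            print(f"forward GETX for {block:#x} to core {self.owner[block]}")
        self.owner[block] = requestor

    def writeback(self, block, core):
        if self.owner.get(block) == core:
            print(f"write {block:#x} back to memory, ack core {core}")
            del self.owner[block]
        else:
            # Race: ownership already moved because a forwarded GETX is in
            # flight; the evicting core must hold the data until it arrives.
            print(f"NACK writeback of {block:#x} from core {core}")

d = Directory()
d.getx(0x40, requestor=0)       # not present: fetch from memory
d.getx(0x40, requestor=1)       # owned by core 0: forward + transfer ownership
d.writeback(0x40, core=0)       # core 0's writeback loses the race: NACK
</syntaxhighlight>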
 
 
 
== Example protocol: MOESI_CMP_directory ==
 
In contrast with the MESI protocol, the MOESI protocol introduces an additional '''Owned''' state. This enables sharing of a block after modification without first writing it back to memory. In that case, exactly one node is the owner while the others are sharers. The owner node is responsible for writing the block back to memory on eviction; sharers may evict the block without a writeback. An overview of the protocol can be found [http://en.wikipedia.org/wiki/MOESI_protocol here].
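The role of the Owned state can be summarized by per-state invariants, shown below as a plain-Python table. This is a rendering of the standard MOESI semantics, not taken from the protocol files themselves.

<syntaxhighlight lang="python">
# Invariants of the five MOESI stable states (standard semantics).
STATES = {
    #         valid        dirty        other copies may exist  must write back on evict
    "M": dict(valid=True,  dirty=True,  shared=False,           writeback=True),
    "O": dict(valid=True,  dirty=True,  shared=True,            writeback=True),
    "E": dict(valid=True,  dirty=False, shared=False,           writeback=False),
    "S": dict(valid=True,  dirty=False, shared=True,            writeback=False),
    "I": dict(valid=False, dirty=False, shared=False,           writeback=False),
}

# The Owned state is exactly the one that lets a dirty block be shared:
assert STATES["O"]["dirty"] and STATES["O"]["shared"]
# and only a node holding dirty data (M or O) must write it back on eviction:
assert all(s["writeback"] == s["dirty"] for s in STATES.values())
</syntaxhighlight>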
 
 
 
=== Related files ===
 
 
 
MOESI_CMP_directory-L1cache.sm (L1 cache controller specification) <br>
MOESI_CMP_directory-L2cache.sm (L2 cache controller specification) <br>
MOESI_CMP_directory-dir.sm (directory controller specification) <br>
MOESI_CMP_directory-dma.sm (DMA controller specification) <br>
MOESI_CMP_directory-msg.sm (message type specification) <br>
MOESI_CMP_directory.slicc (container file) <br>
 
 
 
 
 
