MC Item

The item interface provides communication between the host and the many-core system. The host_request_manager module fetches commands from the host, organized into items (as described in Item interface).

The many-core version of the project defines several commands for host-side communication through the item interface. They are declared as an enum type in the nuplus_message_service.sv header file, located in the src/include/ folder:

typedef enum logic [`HOST_COMMAND_WIDTH - 1 : 0] {
  //COMMAND FOR BOOT
  BOOT_COMMAND         = 0,
  ENABLE_THREAD        = 1,
  GET_CONTROL_REGISTER = 2,
  SET_CONTROL_REGISTER = 3,
  GET_CONSOLE_STATUS   = 4,
  GET_CONSOLE_DATA     = 5,
  WRITE_CONSOLE_DATA   = 6,
  //Core Logger CMD
  CORE_LOG             = 9
} host_message_type_t;

The enum above lists all supported items, although the current boot_manager on the NUPLUS tile implements only two of them: BOOT_COMMAND and ENABLE_THREAD, described below.
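
As a rough illustration of how this enum is used, the first word of an incoming item can be cast to host_message_type_t and then compared against the commands above. The signal names in the sketch below (item_data_in, current_command) are hypothetical and only illustrate the idea, not the actual host_request_manager code:

// Minimal sketch (hypothetical signal names): decode the first word of an
// incoming item into the command enum defined above.
host_message_type_t current_command;
assign current_command = host_message_type_t'( item_data_in[`HOST_COMMAND_WIDTH - 1 : 0] );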

Setting PCs

The BOOT_COMMAND item sets the PC of a given thread in a given tile. After receiving the command, the FSM waits for the ID of the tile where the core is located, then the thread ID, and finally the PC value to set. Once those values have been fetched, the item interface sends a message on the service virtual channel (number 4) in the host_message_t format:

typedef struct packed {                       
   address_t hi_job_pc;                        // PC value to set for the selected thread
   logic hi_job_valid;                         // asserted when the PC and thread ID fields are valid
   thread_id_t hi_job_thread_id;               // target thread within the tile
   logic [`THREAD_NUMB - 1 : 0] hi_thread_en;  // thread enable bitmap (used by ENABLE_THREAD)
   host_message_type_t message;                // command type, from the enum above
} host_message_t;
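
For example, once the tile ID, thread ID, and PC have been fetched from the host, the host_request_manager can fill a host_message_t as sketched below. The register names fetched_pc and fetched_thread_id are hypothetical and only illustrate the packing, not the actual implementation:

// Hypothetical sketch: packing the fetched values into a host_message_t
// before sending it on service virtual channel 4.
host_message_t boot_message;

always_comb begin
   boot_message.message          = BOOT_COMMAND;
   boot_message.hi_job_valid     = 1'b1;
   boot_message.hi_job_pc        = fetched_pc;
   boot_message.hi_job_thread_id = fetched_thread_id;
   boot_message.hi_thread_en     = '0;  // unused for BOOT_COMMAND
end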

The target core receives the message; its boot_manager unmarshals it and forwards the fields on the hi_* interface of the core:

if ( message_from_net.message == BOOT_COMMAND ) begin
   job_valid     <= message_from_net.hi_job_valid;
   job_pc        <= message_from_net.hi_job_pc;
   job_thread_id <= message_from_net.hi_job_thread_id;
   next_state    <= NOTIFY_BOOT;
end

// The latched values drive the hi_* interface of the core:
hi_job_valid     <= job_valid;
hi_job_pc        <= job_pc;
hi_job_thread_id <= job_thread_id;

The boot_manager then sends an ACK back to the host_request_manager, which forwards it to the host so that a new command can be evaluated.
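
The ACK path is not shown in the fragment above. A minimal sketch of the idea, assuming the boot_manager FSM reaches the NOTIFY_BOOT state set earlier, could look like the following; the ack_valid signal and the IDLE state are hypothetical names used only for illustration:

// Hypothetical sketch: in the NOTIFY_BOOT state the boot_manager raises an
// ACK towards the host_request_manager and returns to an idle state.
NOTIFY_BOOT : begin
   ack_valid  <= 1'b1;   // ACK forwarded by the host_request_manager to the host
   next_state <= IDLE;   // ready for a new command
end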

Running threads

The ENABLE_THREAD item loads the user thread mask (a bitmap with one bit per thread) into the hi_thread_en register of the TC module:

always_ff @ ( posedge clk, posedge reset ) begin
  if ( reset ) begin
    hi_thread_en <= 0;
  end else begin
    if ( message_in_valid && message_from_net.message == ENABLE_THREAD )
      hi_thread_en <= message_from_net.hi_thread_en;
  end
end
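
As a rough illustration of the effect of this register, a thread can be considered for scheduling only when its bit in hi_thread_en is set by the host. The sketch below uses hypothetical waiting_threads and eligible_threads masks and is not taken from the actual thread controller:

// Hypothetical sketch: only threads enabled by the host (hi_thread_en)
// are eligible for scheduling.
assign eligible_threads = waiting_threads & hi_thread_en;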