MC Item
The item interface provides communication between the host and the many-core system. The npu_item_interface module on the H2C tile fetches commands from the host, organized into items (as described in Item interface), and forwards them to the destination tile through the NoC. Items addressed to an NPU core are handled by the boot_manager on the destination tile, which interacts with the Thread Controller (TC) and the control registers on the NPU side.
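To fix the terminology used in the rest of this page, the following is a minimal sketch of the boot_manager boundary, reconstructed from the signals that appear in the code excerpts below (host_message_t and the hi_* interface). It is an illustration only, not the actual source: the real module also has NoC-side and ACK-related ports, and its name and port list may differ.

module boot_manager_sketch (
    input  logic                        clk,
    input  logic                        reset,
    // message delivered by the NoC on the service virtual channel
    input  logic                        message_in_valid,
    input  host_message_t               message_from_net,
    // hi_* interface towards the core and the Thread Controller
    output logic                        hi_job_valid,
    output address_t                    hi_job_pc,
    output thread_id_t                  hi_job_thread_id,
    output logic [`THREAD_NUMB - 1 : 0] hi_thread_en
);
    // body omitted: see the excerpts in the following sections
endmodule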
The many-core version of the project defines commands for host-side communication through the item interface. Such commands are defined in an enum type in the npu_message_service.sv header file:
typedef enum logic [`HOST_COMMAND_WIDTH - 1 : 0] {
    //COMMAND FOR BOOT
    BOOT_COMMAND         = 0,
    ENABLE_THREAD        = 1,
    GET_CONTROL_REGISTER = 2,
    SET_CONTROL_REGISTER = 3,
    GET_CONSOLE_STATUS   = 4,
    GET_CONSOLE_DATA     = 5,
    WRITE_CONSOLE_DATA   = 6,
    //Core Logger CMD
    CORE_LOG             = 9
} host_message_type_t;
The enum above defines all supported items, although the current boot_manager on the NPU tile implements only two of them: BOOT_COMMAND and ENABLE_THREAD, described below.
Setting PCs
The BOOT_COMMAND item sets the PC of a given thread in a given tile. After the command, the FSM waits for the ID of the tile where the core is located, then the thread ID, and finally the PC value to set. Once those values have been fetched, the item interface sends a message on the service virtual channel (number 4) in host_message_t format:
typedef struct packed {
    address_t                    hi_job_pc;
    logic                        hi_job_valid;
    thread_id_t                  hi_job_thread_id;
    logic [`THREAD_NUMB - 1 : 0] hi_thread_en;
    host_message_type_t          message;
} host_message_t;
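As an illustration of this step only, the H2C-side logic could pack the fetched values into such a structure and send it towards the destination tile roughly as follows. The names fetched_pc, fetched_thread_id, dest_tile and the send_service_message call are assumptions made for this sketch, not the actual signals or tasks of the npu_item_interface:

host_message_t boot_msg;

boot_msg.message          = BOOT_COMMAND;
boot_msg.hi_job_valid     = 1'b1;
boot_msg.hi_job_pc        = fetched_pc;          // PC value received from the host
boot_msg.hi_job_thread_id = fetched_thread_id;   // thread ID received from the host
boot_msg.hi_thread_en     = '0;                  // not used by BOOT_COMMAND

// forward the packed message to the destination tile on the service virtual channel (4)
send_service_message( dest_tile, 4, boot_msg );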
The target core receives the message, and its boot_manager unmarshals it and forwards it on the hi_* interface of the core:
if ( message_from_net.message == BOOT_COMMAND ) begin
    // latch the boot information carried by the incoming message
    job_valid     <= message_from_net.hi_job_valid;
    job_pc        <= message_from_net.hi_job_pc;
    job_thread_id <= message_from_net.hi_job_thread_id;
    next_state    <= NOTIFY_BOOT;
end
// the latched values drive the hi_* interface towards the core
hi_job_valid     <= job_valid;
hi_job_pc        <= job_pc;
hi_job_thread_id <= job_thread_id;
The boot_manager then sends an ACK back to the host_request_manager, which forwards it to the host so that a new command can be evaluated.
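A minimal sketch of this hand-off, assuming the FSM acknowledges from the NOTIFY_BOOT state reached in the excerpt above; the ack_* signals and the IDLE state are illustrative names for this sketch, not the actual boot_manager code:

NOTIFY_BOOT : begin
    // raise an ACK towards the network so the host_request_manager
    // can report completion to the host and accept the next command
    ack_valid  <= 1'b1;
    ack_type   <= BOOT_COMMAND;   // command being acknowledged
    next_state <= IDLE;
end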
Running threads
The ENABLE_THREAD item loads the user thread mask (a bitmap) into the hi_thread_en register, which is directly connected to the Thread Controller module:
always_ff @ ( posedge clk, posedge reset ) begin
    if ( reset ) begin
        hi_thread_en <= 0;
    end else begin
        if ( message_in_valid && message_from_net.message == ENABLE_THREAD )
            hi_thread_en <= message_from_net.hi_thread_en;
    end
end
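For illustration, and assuming that bit i of the mask corresponds to thread i (an assumption of this sketch, not stated above), a message that starts threads 0 and 2 of the target core would carry a payload like:

host_message_t run_msg;

run_msg              = '0;
run_msg.message      = ENABLE_THREAD;
run_msg.hi_thread_en = 'b101;   // bits 0 and 2 set: enable threads 0 and 2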