Intel Quartus Prime Standard Edition User Guide: Platform Designer
Creating a System with Platform Designer
Platform Designer automatically creates interconnect logic from high-level connectivity that you specify. The interconnect automation eliminates the time-consuming task of specifying system-level HDL connections.
Platform Designer allows you to specify interface requirements and integrate IP components within a graphical representation of the system. The Intel® Quartus® Prime software installation includes the Intel FPGA IP library available from the IP Catalog in Platform Designer.
You can integrate optimized and verified Intel FPGA IP cores into a design to shorten design cycles and maximize performance. Platform Designer also supports integration of third-party IP cores and custom components that you define.
Platform Designer provides support for the following:
- Create and reuse components—define and reuse custom parameterizable components in a Hardware Component Description File (_hw.tcl) that describes and packages IP components.
- Command-line support—optionally use command-line utilities and scripts to perform functions available in the Platform Designer GUI.
- Up to 64-bit addressing.
- Optimization of interconnect and pipelining within the system and auto-adaptation of data widths and burst characteristics.
- Interoperation between standard protocols.
Platform Designer System Design Flow
You can use the Platform Designer GUI to quickly create and customize a Platform Designer system for integration with an Intel® Quartus® Prime project. Alternatively, you can perform many of the functions available in the Platform Designer GUI at the command-line, as Platform Designer Command-Line Utilities describes.
When you create a system in the GUI, Platform Designer creates a .qsys file that represents the system in your Intel® Quartus® Prime software project.
The stages of the Platform Designer system design flow correspond to the following topics in this chapter:
- Starting or Opening a Project in Platform Designer
- Adding IP Components to a System
- Connecting System Components
- Specifying Interconnect Requirements
- Synchronizing System Component Information
- Generating a Platform Designer System
- Simulating a Platform Designer System
- Integrating a Platform Designer System with the Intel Quartus Prime Software
Starting or Opening a Project in Platform Designer
- To start a new Platform Designer project, save the default system that appears when you open Platform Designer (File > Save), or click File > New System, and then save your new project. Platform Designer saves the new project in the Intel® Quartus® Prime project directory. To save your Platform Designer project in a different directory, click File > Save As.
- To open a recent Platform Designer project, click File > Open to browse for the project, or locate a recent project with the File > Recent Projects command.
- To revert the project currently open in Platform Designer to the saved version, click the first item in the Recent Projects list.
Viewing a Platform Designer System
When you select or edit an item in one Platform Designer tab, all other tabs update to reflect your selection or edit. For example, if you select cpu_0 in the Hierarchy tab, the Parameters tab immediately updates to display the cpu_0 parameters.
Click the View menu to interact with the elements of your system in various tabs.
The Platform Designer GUI is fully customizable. You can arrange and display Platform Designer GUI elements that you most commonly use, and then save and reuse useful GUI layouts.
The IP Catalog and Hierarchy tabs display to the left of the main frame by default. The System Contents, Address Map, Interconnect Requirements, and Device Family tabs display in the main frame.
The Messages tab displays in the lower portion of Platform Designer. Double-clicking a message in the Messages tab changes focus to the associated element in the relevant tab to facilitate debugging. Even when the Messages tab is not open in your workspace, error and warning message counts display in the status bar of the Platform Designer window.

Viewing the System Hierarchy
The Hierarchy tab provides the following information and functionality:
- Lists connections between components.
- Lists names of signals in exported interfaces.
- Right-click to connect, edit, add, remove, or duplicate elements in the hierarchy.
- Displays internal connections of Platform Designer subsystems that you include as IP components. By contrast, the System Contents tab displays only the exported interfaces of Platform Designer subsystems.
Expanding the System Hierarchy
Click the + icon to expand any interface in the Hierarchy tab to view sub-components, associated elements, and signals for the interface. The Hierarchy tab displays a unique icon for each element type in the system. For example, ram_master appears selected in both the System Contents and Hierarchy tabs.
Filtering the System Contents
Use the Filters dialog box to control which instances and interfaces the System Contents tab displays. For example, you can click the Filter button to display only instances that include memory-mapped interfaces, or display only instances that connect to a particular Nios® II processor. Conversely, you can temporarily hide clock and reset interfaces to further simplify the display.
Viewing Clock and Reset Domains
Click View > Clock Domains or click View > Reset Domains to display these tabs.
Platform Designer determines clock and reset domains from the clocks and resets associated with each interface. This information displays when you hover over interfaces in your system.
The Clock Domains and Reset Domains tabs also allow you to locate system performance bottlenecks. The tabs indicate connection points where Platform Designer automatically inserts clock-crossing adapters and reset synchronizers during system generation. View the following information on these tabs to create optimal connections between interfaces:
- The number of clock and reset domains in the system
- The interfaces and modules that each clock or reset domain contains
- The locations of clock or reset crossings
- The connection point of automatically inserted clock or reset adapters
- The proper location for manual insertion of a clock or reset adapter
Viewing Clock Domains in a System
- Click View > Clock Domains.
- Select any clock or reset domain in the list to view associated interfaces. The corresponding selection appears in the System Contents tab.
- To highlight clock domains in the System Contents tab, click the Show clock domains in the system table icon at the bottom of the System Contents tab.
Figure 4. Show Clock Domains in the System Table
- To view a single clock domain, or multiple clock domains and their modules and connections, select the clock name or names in the Clock Domains tab. The modules and connections for the selected clock domain or domains highlight in the System Contents tab. Detailed information for the current selection appears in the clock domain details pane.
Figure 5. Clock Domains
Note: If a connection crosses a clock domain, the connection circle appears as a red dot in the System Contents tab.
- To view interfaces that cross clock domains, expand the Clock Domain Crossings icon in the Clock Domains tab, and select each element to view its details in the System Contents tab.
Platform Designer lists the interfaces that cross clock domains under Clock Domain Crossings. As you click through the elements, detailed information appears in the clock domain details pane. Platform Designer also highlights the selection in the System Contents tab.
Viewing Reset Domains in a System
- To open the Reset Domains tab, click View > Reset Domains.
- To show reset domains in the System Contents tab, click the Show reset domains in the system table icon in the System Contents tab.
Figure 6. Show Reset Domains in the System Table
- To view a single reset domain, or multiple reset domains and their modules and connections, click the reset names in the Reset Domains tab.
Platform Designer displays your selection according to the following rules:
- When you select multiple reset domains, the System Contents tab shows interfaces and modules in both reset domains.
- When you select a single reset domain, the other reset domains are grayed out, unless the two domains have interfaces in common.
- Reset interfaces appear black when connected to multiple reset domains.
- Reset interfaces appear gray when they are not connected to all of the selected reset domains.
- If an interface is contained in multiple reset domains, the interface is grayed out.
Detailed information for your selection appears in the reset domain details pane.
Note: Red dots in the Connections column between reset sinks and sources indicate auto insertions by Platform Designer during system generation, for example, a reset synchronizer. Platform Designer decides when to display a red dot with the following protocol, and ends the decision process at the first match:
- Multiple resets fan into a common sink.
- Reset inputs are associated with different clock domains.
- Reset inputs have different synchronicity.
Viewing Avalon Memory-Mapped Domains in a System
Click View > Avalon Memory Mapped Domains to display this tab.
- Filter the System Contents tab to display a single Avalon domain, or multiple domains. Further filter your view with selections in the Filters dialog box.
- To rename an Avalon memory-mapped domain, double-click the domain name. Detailed information for the current selection appears in the Avalon domain details pane.
- To enable and disable the highlighting of the Avalon domains in the System Contents tab, click the domain control tool at the bottom of the System Contents tab.
Figure 8. Avalon Memory Mapped Domains Control Tool
Viewing the System Schematic
The Schematic tab displays a graphical representation of your system. If your selection is a subsystem, you can use the Move to the top of the hierarchy, Move up one level of hierarchy, and Drill into a subsystem to explore its contents buttons to traverse the schematic of a hierarchical system.

Viewing System Assignments and Connections

Customizing the Platform Designer Layout
You can arrange your workspace by dragging and dropping, and then grouping tabs in an order appropriate to your design development, or close or dock tabs that you are not using.
Dock tabs in the main frame as a group, or individually by clicking the tab control in the upper-right corner of the main frame. Tool tips on the upper-right corner of the tab describe possible workspace arrangements, for example, restoring or disconnecting a tab to or from your workspace.
When you save your system, Platform Designer also saves the current workspace configuration. When you re-open a saved system, Platform Designer restores the last saved workspace.
The Reset to System Layout command on the View menu restores the workspace to its default configuration for Platform Designer system design. The Reset to IP Layout command restores the workspace to its default configuration for defining and generating single IP cores.
- Click items on the View menu to display and then optionally dock the tabs. Rearrange the tabs to suit your preferences.
- To save the current Platform Designer window configuration as a custom layout, click View > Custom Layouts > Save. Platform Designer saves your custom layout in your project directory, and adds the layout to the custom layouts list and the layouts.ini file. The layouts.ini file determines the order of layouts in the list.
- Use any of the following methods to revert to another layout:
- To revert the layout to the default system design layout, click View > Reset to System Layout. This layout displays the System Contents , Address Map, Interconnect Requirements, and Messages tabs in the main pane, and the IP Catalog and Hierarchy tabs along the left pane.
- To revert the layout to the default IP layout, click View > Reset to IP Layout. This layout displays the Parameters and Messages tabs in the main pane, and the Details, Block Symbol, and Presets tabs along the right pane.
- To reset your Platform Designer window configuration to a previously saved layout, click View > Custom Layouts, and then select the custom layout.
- Press Ctrl+3 to quickly change the Platform Designer layout.
- To manage your saved custom layouts, click View > Custom Layouts. The Manage Custom Layouts dialog box opens and allows you to apply a variety of functions that facilitate custom layout management. For example, you can import or export a layout from or to a different directory.
Figure 11. Manage Custom Layouts
Adding IP Components to a System
Follow these steps to locate, parameterize, and instantiate an IP component in a Platform Designer system:
- To locate a component by name, type some or all of the component's name in the IP Catalog search box. For example, type memory to locate memory-mapped IP components. You can also find components by category.
Figure 12. Platform Designer IP Catalog
- Double-click any component to launch the component's parameter editor and specify options for the component. For some IP components, you can select and Apply a pre-defined set of parameter values for specific applications from the Presets list.
Figure 13. Parameter Editor
- To complete customization of the IP component, click Finish. The IP component appears in the System Contents tab.
Modifying IP Parameters
To display a component's parameters on the Parameters tab:
- Click View > Parameters.
- Select the component in the System Contents or Hierarchy tabs.
The Parameters tab provides the following functionality:
- Parameters field—adjust the parameters to align with your design requirements, including changing the name of the top-level instance.
- Component Banner—displays the hierarchical path for the component and allows you to enable display of internal names. Below the hierarchical path, the parameter editor shows the HDL entity name and the IP file path for the selected IP component. Right-click in the banner to display internal parameter names for use with scripted flows.
- Read/Write Waveforms—displays the interface timing and the corresponding read and write waveforms.
- Details—displays links to detailed information about the component.
- Parameterization Messages—displays parameter warning and error messages about the IP component.
Changes that you make in the Parameters tab affect your entire system and dynamically update other open tabs in Platform Designer. Any change that you make on the Parameters tab automatically updates the corresponding .ip file that stores the component's parameterization.
If you create your own custom IP components, you can use the Hardware Component Description File (_hw.tcl) to specify configurable parameters.
If you use the ip-deploy or qsys-script commands rather than the Platform Designer GUI, you must use internal parameter names with these parameters.
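For example, the following minimal qsys-script sketch changes a parameter of a PIO instance by its internal parameter name. The script name set_width.tcl, the system file my_system.qsys, and the instance name pio_0 are hypothetical, and the requested qsys API version may differ for your installation:
# set_width.tcl — run with: qsys-script --script=set_width.tcl
package require -exact qsys 16.0
# Load the system, set the internal parameter name (width) on the pio_0 instance, and save.
load_system my_system.qsys
set_instance_parameter_value pio_0 width 16
save_system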
Viewing Component or Parameter Details
To view a component's details:
- Click a parameter for a component in the parameter editor. Platform Designer displays the description of the parameter in the Details tab.
- To return to the complete description for the component, click the header in the Parameters tab.
Viewing a Component's Block Symbol
The Block Symbol tab displays a symbolic representation of any component you select in the Hierarchy or System Contents tabs. The block symbol shows the component's port interfaces and signals. The Show signals option allows you to turn on or off signal graphics.
The Block Symbol tab appears by default in the parameter editor when you add a component to your system. When the Block Symbol tab is open in your workspace, it reflects changes that you make in other tabs.
Applying Preset Parameters for Specific Applications
The Presets tab displays the names of any available preset settings for an IP component. A preset preserves a collection of parameter settings that may be appropriate for a specific protocol or application. Not all IP components include preset parameters. Double-click the preset name to apply the preset parameter values to a component you are defining.
Creating Custom IP Preset Parameter Settings
Follow these steps to save custom preset parameter settings:
- In IP Catalog, double-click any component to launch the parameter editor.
- To search for a specific preset for the initial settings, type a partial preset name in the search box.
- In the Presets tab, click New to specify the Preset name and Preset description.
- Under Select parameters to include in the preset, enable or disable the parameters you want to include in the preset.
- Specify the path for the Preset file that preserves the collection of parameter settings.
Figure 17. Create New Preset
If the file location that you specify is not already in the IP search path, Platform Designer adds the location of the new preset file to the IP search path.
- Click Save.
- To apply the preset to an IP component, click Apply. Preset parameter values that match the current parameter settings appear in bold.
Adding Third-Party IP Components
You can add third-party IP components created by Intel partners to your Platform Designer system. Third-party partner IP components have interfaces that Platform Designer supports, such as Avalon® -MM or AMBA* AXI. Third-party partner IP components can also include timing and placement constraints, software drivers, simulation models, and reference designs.
To locate supported third-party IP components on the Intel web page, follow these steps:
- From the Intel website, navigate to the Find IP page, and then click Find IP on the tool.
- Use the Search box and the End Market, Technology, Devices or Provider filters to locate the IP that you want to use.
- Press Enter.
- Sort the table of results by the Platform Designer Compliant column. You cannot use non-compliant components in Platform Designer.
- Click the IP name to view information, request evaluation, or request download.
- After you download the IP files, add the IP location to the IP search path to add the IP to IP Catalog, as IP Search Path Recursive Search describes.
IP Search Path Recursive Search
The Intel® Quartus® Prime software automatically searches for and identifies IP components in the IP search path. The search is recursive for some directories and descends only to a specific depth for others. A recursive search stops at any directory that contains a _hw.tcl or .ipx file.
In the following list of search locations, ** indicates a recursive descent.
Location | Description |
---|---|
PROJECT_DIR/* | Finds IP components and index files in the Intel® Quartus® Prime project directory. |
PROJECT_DIR/ip/**/* | Finds IP components and index files in any subdirectory of the /ip subdirectory of the Intel® Quartus® Prime project directory. |
IP Search Path Precedence
- Project directory.
- Project database directory.
- Project IP search path specified in IP Search Locations, or with the SEARCH_PATH assignment for the current project revision.
- Global IP search path specified in IP Search Locations, or with the SEARCH_PATH assignment in the quartus2.ini file.
- Quartus software libraries directory, such as <Quartus Installation>\libraries.
IP Component Description Files
The Intel® Quartus® Prime software identifies parameterizable IP components in the IP search path by the following files:
- Component Description File (_hw.tcl)—defines a single IP core.
- IP Index File (.ipx)—each .ipx file indexes a collection of available IP cores. This file specifies the relative path of directories to search for IP cores. In general, .ipx files facilitate faster searches.
Defining the IP Search Path with Index Files
You can specify a search path in the user_components.ipx file, either in Platform Designer (Tools > Options) or in the Intel® Quartus® Prime software (Tools > Options > IP Catalog Search Locations). This method of discovering IP components allows you to add locations independent of the default search path. The user_components.ipx file directs Platform Designer to the location of an IP component or directory to search.
A <path> element in a .ipx file specifies a directory where Platform Designer can search for IP components. A <component> entry specifies the path to a single component. <path> elements allow wildcards in definitions. An asterisk matches any file name. If you use an asterisk as a directory name, it matches any number of subdirectories.
Path Element in an .ipx File
<library>
  <path path="…<user directory>" />
  <path path="…<user directory>" />
  …
  <component … file="…<user directory>" />
  …
</library>
A <component> element in an .ipx file contains several attributes to define a component. If you provide the required details for each component in an .ipx file, the startup time for Platform Designer is less than if Platform Designer must discover the files in a directory.
Component Element in an .ipx File
The example shows two <component> elements. Note that the paths for file names are specified relative to the .ipx file.
<library>
  <component
    name="A Platform Designer Component"
    displayName="Platform Designer FIR Filter Component"
    version="2.1"
    file="./components/qsys_filters/fir_hw.tcl" />
  <component
    name="rgb2cmyk_component"
    displayName="RGB2CMYK Converter(Color Conversion Category!)"
    version="0.9"
    file="./components/qsys_converters/color/rgb2cmyk_hw.tcl" />
</library>
Creating or Opening an IP Core Variant
Follow these steps to define an IP core variant in Platform Designer:
- In Platform Designer, click File > New IP Variant.
- On the IP Variant tab, specify the Quartus project to contain the IP variant.
Figure 18. Platform Designer IP Variant Tab
- Specify any of the following options:
- Revision—optionally select a specific revision of a project.
- Device family—when you define a new project or select None, allows you to specify the target Intel® FPGA device family. Otherwise, this field is not editable and displays the Quartus project's target device family. Click Retrieve Values to populate the fields.
- Device part—when you define a new project or select None, allows you to specify the target Intel® FPGA device part number. Otherwise, this field is not editable and displays the Quartus project's target device part number.
- Specify the IP variant name, or browse for an existing IP variant.
- For Component type, click Select and select the IP component from the IP Catalog.
- Click Create. The IP parameter editor appears. Specify the parameter values that you want for the IP variant.
- To generate the IP variant synthesis and optional simulation files, click Generate HDL, specify Generation Options, and click Generate. Refer to Generation Dialog Box Options for generation options.
Connecting System Components
You connect compatible interfaces of IP components in the System Contents tab. For example, you can connect a memory-mapped master interface to a slave interface, and an interrupt sender interface to an interrupt receiver interface. You can connect any interfaces exported from a Platform Designer system within a parent system.
Platform Designer uses the high-level connectivity you specify to instantiate a suitable HDL fabric to perform the needed adaptation and arbitration between components. Platform Designer generates and includes this interconnect fabric in the RTL system output.
Potential connections between interfaces appear as gray interconnect lines with an open circle icon at the intersection of the potential connection.
To implement a connection, follow these steps:
- Click inside an open connection circle to implement the connection between the interfaces. When you make a connection, Platform Designer changes the connection line to black, and fills the connection circle. Clicking a filled-in circle removes the connection.
- To display the list of current and possible connections for interfaces in the Hierarchy or System Contents tabs, click View > Connections.
Figure 20. Connection Display for Exported Interfaces
- Perform any of the following to modify connections:
- On the Connections tab, use the Connected column to enable or disable any connection. The Clock Crossing, Data Width, and Burst columns provide interconnect information about added adapters that can result in slower fMAX or increased area utilization.
- On the System Contents tab, right-click in the Connection column and disable or enable Allow Connection Editing.
- On the Connections tab, view and make connections for exported interfaces. Double-click an interface in the Export column to view all possible connections in the Connections column as pins. To restore the representation of the connections, and remove the interface from the Export column, click the pin.
Platform Designer 64-Bit Addressing Support
Platform Designer supports up to 64-bit addressing for memory-mapped interfaces. The address parameters appear in the Base and End columns in the System Contents tab, on the Address Map tab, in the parameter editor, and in validation messages. Platform Designer displays as many digits as needed to display the top-most set bit, for example, 12 hex digits for a 48-bit address.
A Platform Designer system can have multiple 64-bit masters, with each master having its own address space. You can share slaves between masters, and masters can map slaves to different addresses. For example, one master can interact with slave 0 at base address 0000_0000_0000, and another master can see the same slave at base address c000_0000_0000.
Intel® Quartus® Prime debugging tools provide access to the state of an addressable system via the Avalon® -MM interconnect. These tools are also 64-bit compatible and operate within a 64-bit address space, including a JTAG to Avalon® master bridge.
Platform Designer supports auto base address assignment for Avalon® -MM components. In the Address Map tab, click Auto Assign Base Address.
Support for Avalon -MM Non-Power of Two Data Widths
Platform Designer issues a validation error if an Avalon® -MM master or slave interface on a multipoint connection is parameterized with a non-power-of-two data width.
Connecting Masters and Slaves
The Address Map tab shows the slaves on the left, the masters across the top, and the address span of the connection in each cell. If there is no connection between a master and a slave, the table cell is empty. Use the Address Map tab to view the individual memory addresses for each connected master.
Platform Designer enables you to design a system where two masters access the same slave at different addresses. If you use this feature, Platform Designer labels the Base and End address columns in the System Contents tab as "mixed" rather than providing the address range.
To create or edit a connection between master and slave IP components:
- In Platform Designer, click the Address Map tab.
- Locate the table cell that represents the connection between the master and slave component pair.
- Either type in a base address, or update the current base address in the cell. The base address of a slave component must be a multiple of the address span of the component. This restriction is a requirement of the Platform Designer interconnect, which provides efficient address decoding logic that allows Platform Designer to achieve the best possible fMAX.
Figure 21. Address Map Tab for Connecting Masters and Slaves
Changing a Conduit to a Reset
- In the IP Catalog search box, locate IOPLL Intel FPGA IP and double-click to add the component to your system.
- In the System Contents tab, select the PLL component.
- Click View > Component Instantiation and open the Component Instantiation tab for the selected component.
- In the Signals & Interfaces tab, select the locked conduit interface.
- Change the Type from Conduit to Reset Input, and the Synchronous edges from Deassert to None.
- Select the locked [1] signal below the locked interface.
- Change the Signal Type from export to reset_n. Change the Direction from output to input.
- Click Apply.
The conduit interface changes to reset for the instantiated PLL component.
Previewing the System Interconnect
To open the System with Platform Designer Interconnect window, click System > Show System With Platform Designer Interconnect.
The System with Platform Designer Interconnect window has the following tabs:
- System Contents—displays the original instances in your system, as well as the inserted interconnect instances. Connections between interfaces are replaced by connections to interconnect where applicable.
- Hierarchy—displays a system hierarchical navigator, expanding the system contents to show modules, interfaces, signals, contents of subsystems, and connections.
- Parameters—displays the parameters for the selected element in the Hierarchy tab.
- Memory-Mapped Interconnect—allows you to select a memory-mapped interconnect module and view its internal command and response networks. You can also insert pipeline stages to achieve timing closure.
The System Contents, Hierarchy, and Parameters tabs are read-only. Edits that you apply on the Memory-Mapped Interconnect tab are automatically reflected on the Interconnect Requirements tab.
The Memory-Mapped Interconnect tab in the System with Platform Designer Interconnect window displays a graphical representation of command and response datapaths in your system. The datapath views give you precise control over pipelining in the interconnect. Platform Designer displays separate figures for the command and response datapaths. You can access the datapaths by clicking their respective tabs in the Memory-Mapped Interconnect tab.
Each node element in a figure represents either a master or slave that communicates over the interconnect, or an interconnect sub-module. Each edge is an abstraction of connectivity between elements, and its direction represents the flow of the commands or responses.
Click Highlight Mode (Path, Successors, Predecessors) to identify edges and datapaths between modules. Turn on Show Pipelinable Locations to add greyed-out registers on edges where pipelining is allowed in the interconnect.
Specifying Interconnect Requirements
The Interconnect Requirements tab allows you to apply system-wide or interface-specific interconnect requirements. Available options in the Setting column vary, depending on the value in the Identifier column. Click the drop-down menu to select a setting and to assign the corresponding value to the setting.
- To create a new Identifier to assign an interconnect requirement, click Add. A new_target row appears for edit.
- Click the new_target cell and select $system to define a system-wide requirement, or select any interface name to specify interconnect requirements for the interface.
- In the same row, click the new_requirement cell and select any of the available requirements, as Interconnect Requirements describes.
- In the same row, click the new_requirement_value cell and specify the requirement value.
Interconnect Requirements
Option | Description |
---|---|
Limit interconnect pipeline stages to | Specifies the maximum number of pipeline stages that Platform Designer can insert in each command and response path to increase the fMAX at the expense of additional latency. You can specify between 0 and 4 pipeline stages, where 0 means that the interconnect has a combinational datapath. This setting is specific for each Platform Designer system or subsystem. |
Clock crossing adapter type | Specifies the default implementation for automatically inserted clock crossing adapters. |
Automate default slave insertion | Directs Platform Designer to automatically insert a default slave for undefined memory region accesses during system generation. |
Enable instrumentation | When you set this option to TRUE, Platform Designer enables debug instrumentation in the Platform Designer interconnect, which then monitors interconnect performance in the system console. |
Burst Adapter Implementation | Allows you to choose the converter type that Platform Designer applies to each burst. |
Enable ECC protection | Specifies the default implementation for ECC protection for memory elements. |
Option | Value | Description |
---|---|---|
Security | | After you establish connections between the masters and slaves, allows you to set the security options, as needed, for each master and slave in your system. |
Secure address ranges | Accepts valid address range. | Allows you to type in any valid address range. |
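In a scripted flow, you can apply equivalent requirements with the set_interconnect_requirement command in a qsys-script Tcl script. The following sketch is illustrative only: the system file name is hypothetical, and the requirement identifier and value strings (qsys_mm.maxAdditionalLatency, qsys_mm.clockCrossingAdapter, HANDSHAKE) are assumptions that may differ in your software version.
package require -exact qsys 16.0
load_system my_system.qsys
# System-wide requirement: limit interconnect pipeline stages (identifier name is an assumption).
set_interconnect_requirement {$system} {qsys_mm.maxAdditionalLatency} {2}
# System-wide requirement: select the clock crossing adapter type (identifier and value names are assumptions).
set_interconnect_requirement {$system} {qsys_mm.clockCrossingAdapter} {HANDSHAKE}
save_system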
Defining Instance Parameters
You can use the Instance Parameters tab to define how the specified values for the instance parameters affect the sub-components in the Platform Designer system. You define an Instance Script that creates queries for the instance parameters, and sets the values of the parameters for the lower-level system components.
When you click Preview Instance, Platform Designer creates a preview of the current Platform Designer system with the specified parameters and instance script and opens the parameter editor. This command allows you to see how an instance of a system appears when you use it in another system. The preview instance does not affect your saved system.
To use instance parameters, the IP components or subsystems in your Platform Designer system must have parameters that can be set when they are instantiated in a higher-level system.
If you create hierarchical Platform Designer systems, each Platform Designer system in the hierarchy can include instance parameters to pass parameter values through multiple levels of hierarchy.
Creating an Instance Parameter Script in Platform Designer
An instance script begins by requiring a specific version of the Platform Designer scripting API:
package require -exact qsys <version>
To use Tcl commands that work with instance parameters in the instance script, you must specify the commands within a Tcl composition callback. In the instance script, you specify the name for the composition callback with the following command:
set_module_property COMPOSITION_CALLBACK <name of callback procedure>
Specify the appropriate Tcl commands inside the Tcl procedure with the following syntax:
proc <name of procedure defined in previous command> {} {
    # Tcl commands to query and set parameters go here
}
Instance Parameter Script Example
In this example, an instance script uses the pio_width parameter to set the width parameter of a parallel I/O (PIO) component. The script combines the get_parameter_value and set_instance_parameter_value commands using brackets.
# Request a specific version of the scripting API
package require -exact qsys 13.1
# Set the name of the procedure to manipulate parameters:
set_module_property COMPOSITION_CALLBACK compose
proc compose {} {
# Get the pio_width parameter value from this Platform Designer system and
# pass the value to the width parameter of the pio_0 instance
set_instance_parameter_value pio_0 width \
[get_parameter_value pio_width]
}
Platform Designer Instance Parameter Script Tcl Commands
get_parameter_value
Description
Returns the current value of a parameter defined previously with the add_parameter command.
Usage
get_parameter_value <parameter>
Returns
The value of the parameter.
Arguments
- parameter—The name of the parameter whose value is being retrieved.
Example
get_parameter_value fifo_width
get_parameters
Description
Returns the names of all the parameters in the component.
Usage
get_parameters
Returns
A list of parameter names.
Arguments
No arguments.
Example
get_parameters
set_instance_parameter_value
Description
Sets the value of a parameter for a child instance. Derived parameters and SYSTEM_INFO parameters for the child instance may not be set using this command.
Usage
set_instance_parameter_value <instance> <parameter> <value>
Returns
No return value.
Arguments
- instance—The name of the child instance.
- parameter—The name of the parameter.
- value—The new parameter value.
Example
set_instance_parameter_value uart_0 baudRate 9600
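The following sketch combines these commands in a composition callback. The system-level parameter fifo_width and the child instance fifo_0 are hypothetical names chosen for illustration; in a real script, the parameter must already be defined with add_parameter and the instance must exist in the system.
package require -exact qsys 13.1
set_module_property COMPOSITION_CALLBACK compose
proc compose {} {
    # get_parameters returns the names of every parameter defined for this system.
    foreach name [get_parameters] {
        if {$name eq "fifo_width"} {
            # Read the system-level parameter value and pass it to the width
            # parameter of the fifo_0 child instance.
            set_instance_parameter_value fifo_0 width [get_parameter_value $name]
        }
    }
}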
Implementing Performance Monitoring
Platform Designer supports performance monitoring only for Avalon-MM interfaces. In your Platform Designer system, you can monitor the performance of no fewer than three, and no more than 15, Avalon-MM interface components at one time.
- Open a system in Platform Designer.
- Click View > Instrumentation.
- To enable performance monitoring, turn on Add debug instrumentation to the Platform Designer Interconnect option. Enabling this option allows the system to interact with the Bus Analyzer Toolkit, accessible from the Intel® Quartus® Prime Tools menu.
- For any interconnect, enable or disable the Add Performance Monitor option.
Figure 24. Enabling Performance Monitoring
Configuring Platform Designer System Security
Platform Designer interconnect supports the Arm* TrustZone* security extension. The Platform Designer Arm* TrustZone* security extension includes secure and non-secure transaction designations, and a protocol for processing between the designations, as Table 6 describes.
The AXI AxPROT protection signal specifies a secure or non-secure transaction. When an AXI master sends a command, the AxPROT signal specifies whether the command is secure or non-secure. When an AXI slave receives a command, the AxPROT signal determines whether the command is secure or non-secure. Determining the security of a transaction while sending or receiving a transaction is a run-time protocol.
AXI masters and slaves can be TrustZone* -aware. All other master and slave interfaces, such as Avalon® -MM interfaces, are non- TrustZone* -aware.
The Avalon® specification does not include a protection signal. Consequently, when an Avalon® master sends a command, there is no embedded security and Platform Designer recognizes the command as non-secure. Similarly, when an Avalon® slave receives a command, the slave always accepts the command and responds.
- To begin creating a secure system, add masters and slaves to your system, as Adding IP Components to a System describes.
- Make connections between the masters and slaves in your system, as Connecting Masters and Slaves describes.
- Click View > Interconnect Requirements. The Interconnect Requirements tab allows you to specify system-wide and interconnect-specific requirements.
- To specify security requirements for an interconnect, click the Add button.
- In the Identifier column, select the interconnect in the new_target cell.
- In the Setting column, select Security.
- In the Value column, select the appropriate Secure, Non-Secure, Secure Ranges, or TrustZone-aware security for the interface. Refer to System Security Options for details of each option.
Figure 25. Security Settings in Interconnect Requirements Tab
- After setting compile-time security options for non- TrustZone* -aware master and slave interfaces, you must identify those masters that require a default slave before generation, as Specifying a Default Slave describes.
System Security Options
Option | Description |
---|---|
Secure | Master sends only secure transactions, and the slave receives only secure transactions. Platform Designer treats transactions from a secure master as secure. Platform Designer blocks non-secure transactions to a secure slave and routes to the default slave. |
Non-Secure | The master sends only non-secure transactions, and the slave receives any transaction, secure or non-secure. Platform Designer treats transactions from a non-secure master as non-secure. Platform Designer allows all transactions, regardless of security status, to reach a non-secure slave. |
Secure Ranges | Applies to only the slave interface. Allows you to specify secure memory regions for a slave. Platform Designer blocks non-secure transactions to secure regions and routes to the default slave. The specified address ranges within the slave's address span are secure, all other address ranges are not. The format is a comma-separated list of inclusive-low and inclusive-high addresses, for example, 0x0:0xfff,0x2000:0x20ff |
TrustZone-aware | TrustZone-aware masters have signals that control the security status of their transactions. TrustZone-aware slaves can accept these signals and handle security independently. The following applies to secure systems that mix secure and non- TrustZone* -aware components: |
Specifying a Default Slave
You can achieve an optimized secure system by partitioning your design and carefully designating secure or non-secure address maps to maintain reliable data. Within the same hierarchy, avoid a design in which a non-secure master initiates transactions to a secure slave, because those transfers are unsuccessful.
A transaction that violates security is rerouted to the default slave, which subsequently responds to the master with an error. The following rules apply to specifying a default slave:
- You can designate any slave as the default slave.
- You can share a default slave between multiple masters.
- Have one default slave for each interconnect domain.
- An interconnect domain is a group of connected memory-mapped masters and slaves that share the same interconnect. The altera_error_response_slave component includes the required TrustZone* features.
- Specify interconnect security settings, as Configuring Platform Designer System Security describes.
- In the System Contents tab, right-click any column and turn on the Security and Default Slave columns.
- In the System Contents tab, turn on the Default Slave option for the slave interface. A master can have only one default slave.
The following table shows how Platform Designer handles each transaction type between TrustZone* -aware and non- TrustZone* -aware masters and slaves:
Transaction Type | TrustZone* -aware Master | Non- TrustZone* -aware Master (Secure) | Non- TrustZone* -aware Master (Non-Secure) |
---|---|---|---|
TrustZone* -aware slave/memory | OK | OK | OK |
Non- TrustZone* -aware slave (secure) | Per-access | OK | Not allowed |
Non- TrustZone* -aware slave (non-secure) | OK | OK | OK |
Non- TrustZone* -aware memory (secure region) | Per-access | OK | Not allowed |
Non- TrustZone* -aware memory (non-secure region) | OK | OK | OK |
Accessing Undefined Memory Regions
Access to an undefined memory region occurs when a transaction from a master targets a memory region unspecified in the slave memory map. To ensure predictable response behavior when this condition occurs, you must specify a default slave, as Specifying a Default Slave describes.
You can designate any memory-mapped slave as a default slave. Have only one default slave for each interconnect domain in your system. Platform Designer then routes undefined memory region accesses to the default slave, which terminates the transaction with an error response.
If you do not specify the default slave, Platform Designer automatically assigns the slave at the lowest address within the memory map for the master that issues the request as the default slave.
Accessing undefined memory regions can occur in the following cases:
- When there are gaps within the accessible memory map region that are within the addressable range of slaves, but are not mapped.
- Accesses by a master to a region that does not belong to any slave that is mapped to the master.
- When a non-secure transaction accesses a secure slave. This applies only to slaves that are secured at compilation time.
- When a read-only slave is accessed with a write command, or a write-only slave is accessed with a read command.
Upgrading Outdated IP Components
When you open a Platform Designer system that contains outdated IP components, Platform Designer automatically attempts to upgrade the IP components if it cannot locate the requested version.
Most Platform Designer IP components support automatic upgrade.
Platform Designer allows you to include a path to older IP components in the IP Search Path, and then use those components even if upgraded versions are available. However, older versions of IP components may not work in newer versions of Platform Designer.
If a Platform Designer system includes IP components outside of the project directory or the directory of the .qsys file, you must add the location of these components to the Platform Designer IP Search Path (Tools > Options).
To upgrade IP cores:
- With the Platform Designer system open, click System > Upgrade IP Cores. Only IP components that are associated with the open Platform Designer system, and that do not support automatic upgrade, appear in the Upgrade IP Cores dialog box.
- In the Upgrade IP Cores dialog box, select one or multiple IP components, and then click Upgrade. A green check mark appears for the IP components that Platform Designer successfully upgrades.
- Generate the Platform Designer system.
Alternatively, you can upgrade IP components with the following command:
qsys-generate --upgrade-ip-cores <qsys_file>
The <qsys_file> variable accepts a path to the .qsys file. You do not need to run this command in the same directory as the .qsys file. Platform Designer reports the start and finish of the command-line upgrade, but does not name the particular IP components upgraded.
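For example, assuming a hypothetical system file named my_system.qsys in the project directory:
qsys-generate --upgrade-ip-cores ./my_system.qsys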
For device migration information, refer to Introduction to Intel® FPGA IP .
Troubleshooting IP or Platform Designer System Upgrade
Upgrade IP Components Field | Description |
---|---|
Status | Displays the "Success" or "Failed" status of each upgrade or migration. Click the status of any upgrade that fails to open the IP Upgrade Report. |
Version | Dynamically updates the version number when upgrade is successful. The text is red when the IP requires upgrade. |
Device Family | Dynamically updates to the new device family when migration is successful. The text is red when the IP core requires upgrade. |
Auto Upgrade | Runs automatic upgrade on all IP cores that support auto upgrade. Also, automatically generates a <Project Directory> /ip_upgrade_port_diff_report report for IP cores or Platform Designer systems that fail upgrade. Review these reports to determine any port differences between the current and previous IP core version. |
- If the current version of the software does not support the IP variant, right-click the component and click Remove IP Component from Project. Replace this IP core or Platform Designer system with the one supported in the current version of the software.
- If the current target device does not support the IP variant, select a supported device family for the project, or replace the IP variant with a suitable replacement that supports your target device.
- If an upgrade or migration fails, click Failed in the Status field to display and review details of the IP Upgrade Report. Click the Release Notes link for the latest known issues about the IP core. Use this information to determine the nature of the upgrade or migration failure and make corrections before upgrade.
- Run Auto Upgrade to automatically generate an IP Ports Diff report for each IP core or Platform Designer system that fails upgrade. Review the reports to determine any port differences between the current and previous IP core version. Click Upgrade in Editor to make specific port changes and regenerate your IP core or Platform Designer system.
- If your IP core or Platform Designer system does not support Auto Upgrade, click Upgrade in Editor to resolve errors and regenerate the component in the parameter editor.
Synchronizing System Component Information
You must synchronize any mismatches between the component instantiation and the component's corresponding .ip file prior to system generation.
- Select the mismatched signal or interface in the System Contents tab, and then click View > System Info. Alternatively, you can double-click the corresponding Component Instantiation Warning in the System Messages tab.
Figure 27. System Info Tab
- View any component mismatches in the System Info tab. Select individual interfaces, signals, or parameters to view the specific value differences in the Component and IP file columns. Value mismatches between the Component Instantiation and the IP file appear in blue. Missing elements appear in green.
- To synchronize the Component Instantiation and IP file (.ip) values in the system, perform one or more of the following:
- Select a specific mismatched parameter, interface, or signal and click >> to synchronize the items.
- Click Sync All to synchronize all values for the current component.
- Click Sync All System Info to synchronize all IP components in the current system at once.
Generating a Platform Designer System
- Open a system in Platform Designer.
- Consider whether to specify a unique generation ID, as Specifying the Generation ID describes.
- Click the Generate HDL button. The Generation dialog box appears.
- Specify options for generation of Synthesis, Simulation, and testbench files, as Generation Dialog Box Options describes.
- Consider whether to specify options for Parallel IP Generation, as Disabling or Enabling Parallel IP Generation describes.
- To start system generation, click Generate.
Note: Platform Designer may add unique suffixes (hashes) to IP component files during generation to ensure the uniqueness of the files. The uniqueness is necessary because the IP component is dynamic: the RTL generates at runtime, according to the input parameters. This methodology ensures no collisions between the multiple variants of the same IP. The hash derives from the parameter values that you specify; a given set of parameter values produces the same hash for each generation.
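You can also run this step from the command line with qsys-generate. The following invocation is a sketch: the system file name is hypothetical, and you should confirm the exact option names for your software version with qsys-generate --help.
qsys-generate my_system.qsys --synthesis=VERILOG --simulation=VERILOG --output-directory=./my_system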
Generation Dialog Box Options
You can specify the following system generation options in the Generation dialog box:
Option | Description |
---|---|
Create HDL design files for synthesis | Allows you to specify Verilog or VHDL file type generation for the system's top-level definition and child instances. Select None to skip generation of synthesis files. |
Create timing and resource estimates for each IP in your system to be used with third-party synthesis tools | Generates a non-functional Verilog Design File (.v) for use by supported third-party EDA synthesis tools. Estimates timing and resource usage for the IP component. The generated netlist file name is <ip_component_name>_syn.v. |
Create Block Symbol File (.bsf) | Generates a Block Symbol File (.bsf) for use in a larger system schematic Block Diagram File (.bdf). |
Generate IP Core Documentation | Generates the IP user guide documentation for the components in your system (when available). |
Create simulation model | Allows you to generate Verilog HDL or VHDL simulation model and simulation script files. Note: ModelSim* - Intel® FPGA Edition supports native, mixed-language (VHDL/Verilog/SystemVerilog) simulation. Therefore, Intel simulation libraries may not be compatible with single-language simulators. If you have a VHDL-only license, some versions of ModelSim® simulators may not support simulation for IPs written in Verilog. As a workaround, you can use ModelSim* - Intel® FPGA Edition, or purchase a mixed-language simulation license from Mentor. |
Path | Specifies the output directory path. |
Specifying the Generation ID
The Generation ID parameter is a unique integer value that derives from the timestamp during Platform Designer system generation. You can optionally modify this value to a value of your choosing to identify the system.
To specify the Generation ID parameter:
- In the Hierarchy tab, select the top-level system.
- Click View > Parameters.
- Under System Identifier, view or edit the value of Generation ID.
Figure 28. Generation ID in Parameters Tab
Files Generated for IP Cores and Platform Designer Systems
The Intel® Quartus® Prime Standard Edition software generates one of the following output file structures for individual IP cores that use one of the legacy parameter editors.
Generating System Testbench Files
You can generate a standard or simple testbench system with BFM or Mentor Verification IP (for AMBA* 3 AXI or AMBA* 4 AXI) components that drive the external interfaces of the system. Platform Designer generates a Verilog HDL or VHDL simulation model for the testbench system to use in the simulation tool.
First generate a testbench system, and then modify the testbench system in Platform Designer before generating the simulation model. Typically, you select only one of the simulation model options.
- Open and configure a system in Platform Designer.
- Click Generate > Generate Testbench System. The Generation dialog box appears.
- Specify options for the testbench system:
Table 9. Testbench Generation Options
Option | Description |
---|---|
Create testbench Platform Designer system | Specifies a simple or standard testbench system. Standard, BFMs for standard Platform Designer Interconnect—Creates a testbench Platform Designer system with BFM IP components attached to exported Avalon and AMBA* 3 AXI or AMBA* 4 AXI interfaces. Includes any simulation partner modules specified by IP components in the system. The testbench generator supports AXI interfaces and can connect AMBA* 3 AXI or AMBA* 4 AXI interfaces to Mentor Graphics AMBA* 3 AXI or AMBA* 4 AXI master/slave BFMs. However, BFMs support address widths only up to 32 bits. Simple, BFMs for clocks and resets—Creates a testbench Platform Designer system with BFM IP components driving only clock and reset interfaces. Includes any simulation partner modules specified by IP components in the system. |
Create testbench simulation model | Specifies Verilog HDL or VHDL simulation model files and simulation scripts for the testbench. Use this option if you do not need to modify the Platform Designer-generated testbench before running the simulation. |
Output directory | Specifies the path for output of generated testbench files. Turn on Clear output to remove any previously generated content from the location. |
Parallel IP Generation | Turn on Use multiple processors for faster IP generation (when available) to generate IP using multiple CPUs when available in your system. |
- Click Generate. The testbench files generate according to your specifications.
- Open the testbench system in Platform Designer. Make changes to the BFMs, as needed, such as changing the instance names and VHDL ID value. For example, you can modify the VHDL ID value in the Avalon Interrupt Source Intel FPGA IP component.
- If you modify a BFM, regenerate the simulation model for the testbench system.
- Compile the system and load the Platform Designer system and testbench into your simulator, and then run the simulation.
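Alternatively, you can generate the testbench system and its simulation model from the command line with qsys-generate. This is a sketch with a hypothetical file name; confirm the option names for your software version with qsys-generate --help.
qsys-generate my_system.qsys --testbench=STANDARD --testbench-simulation=VERILOG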
Platform Designer Testbench Simulation Output Directories
Platform Designer Testbench Files
Platform Designer generates the following testbench files.
File Name or Directory Name | Description |
---|---|
<system>_tb.qsys | The Platform Designer testbench system. |
<system>_tb.v or <system>_tb.vhd | The top‑level testbench file that connects BFMs to the top‑level interfaces of <system>_tb.qsys. |
<system>_tb.spd | Required input file for ip-make-simscript to generate simulation scripts for supported simulators. The .spd file contains a list of files generated for simulation and information about memory that you can initialize. |
<system>.html and <system>_tb.html | A system report that contains connection information, a memory map showing the address of each slave with respect to each master to which it is connected, and parameter assignments. |
<system>_generation.rpt | Platform Designer generation log file. A summary of the messages that Platform Designer issues during testbench system generation. |
<system>.ipx | The IP Index File (.ipx) lists the available IP components, or a reference to other directories to search for IP components. |
<system>.svd | Allows HPS System Debug tools to view the register maps of peripherals connected to HPS within a Platform Designer system. Similarly, during synthesis the .svd files for slave interfaces visible to System Console masters are stored in the .sof file in the debug section. System Console reads this section, which Platform Designer can query for register map information. For system slaves, Platform Designer can access the registers by name. |
mentor/ | Contains a ModelSim® script msim_setup.tcl to set up and run a simulation. |
aldec/ | Contains a Riviera-PRO* script rivierapro_setup.tcl to set up and run a simulation. |
/synopsys/vcs and /synopsys/vcsmx | Contains a shell script vcs_setup.sh to set up and run a VCS* simulation, and a shell script vcsmx_setup.sh and synopsys_sim.setup file to set up and run a VCS* MX simulation. |
/cadence | Contains a shell script ncsim_setup.sh and other setup files to set up and run an NCSIM simulation. |
/submodules | Contains HDL files for the submodules of the Platform Designer testbench system. |
<child IP cores>/ | For each generated child IP core directory, Platform Designer testbench generates /synth and /sim subdirectories. |
Generating Example Designs for IP Components
Use any of the following methods to generate example designs for IP components:
- Double-click the IP component in the Platform Designer IP Catalog or System Contents tab. The parameter editor for the component appears. If available, click the Example Design button in the parameter editor to generate the example design. The Example Design button only appears in the parameter editor if an example is available.
- For some IP components, click Generate > Generate Example Design to access an example design. This command only enables when a design example is available.
The following Platform Designer system example designs demonstrate various design features and flows that you can replicate in your Platform Designer system.
Generating the HPS IP Component System View Description File
The .svd (or CMSIS-SVD) file format is an XML schema specified as part of the Cortex Microcontroller Software Interface Standard (CMSIS) that Arm* provides. The .svd file allows HPS system debug tools (such as the DS-5 Debugger) to view the register maps of peripherals connected to HPS in a Platform Designer system.
Generating Header Files for Master Components
Option | Description |
---|---|
<sopc> | Path to the Platform Designer .sopcinfo file, or the file directory. If you omit this option, the path defaults to the current directory. If you specify a directory path, you must make sure that there is a .sopcinfo file in the directory. |
--separate-masters | Does not combine a module's masters that are in the same address space. |
--output-dir[=<dirname>] | Allows you to specify multiple header files in dirname. The default output directory is '.' |
--single[=<filename>] | Allows you to create a single header file, filename. |
--single-prefix[=<prefix>] | Prefixes macros from a selected single master. |
--module[=<moduleName>] | Specifies the module name when creating a single header file. |
--master[=<masterName>] | Specifies the master name when creating a single header file. |
--format[=<type>] | Specifies the header file format. Default file format is .h. |
--silent | Does not display normal messages. |
--help | Displays help for sopc-create-header-files. |
By default, the sopc-create-header-files command creates multiple header files. There is one header file for the entire system, and one header file for each master group in each module. A master group is a set of masters in a module in the same address space. In general, a module may have multiple master groups. Addresses and available devices are a function of the master group.
Alternatively, you can use the --single option to create one header file for one master group. If there is one CPU module in the Platform Designer system with one master group, the command generates a header file for that CPU's master group. If there are no CPU modules, but there is one module with one master group, the command generates the header file for that module's master group.
You can use the --module and --master options to override these defaults. If your module has multiple master groups, use the --master option to specify the name of a master in the desired master group.
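For example, a command line such as the following (my_system.sopcinfo, cpu_0, and system.h are illustrative names, not outputs of this guide) combines the <sopc>, --single, and --module options from the table above to generate one header file for a single master group:

```
sopc-create-header-files ./my_system.sopcinfo --single=system.h --module=cpu_0
```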
Type | Suffix | Uses | Example |
---|---|---|---|
h | .h | C/C++ header files | #define FOO 12 |
m4 | .m4 | Macro files for m4 | m4_define("FOO", 12) |
sh | .sh | Shell scripts | FOO=12 |
mk | .mk | Makefiles | FOO := 12 |
pm | .pm | Perl scripts | $macros{FOO} = 12; |
Simulating a Platform Designer System
You can use scripts to compile the required device libraries and system design files in the correct order and elaborate or load the top-level system for simulation.
Variable | Description |
---|---|
TOP_LEVEL_NAME | If the testbench Platform Designer system is not the top‑level instance in your simulation environment because you instantiate the Platform Designer testbench within your own top-level simulation file, set the TOP_LEVEL_NAME variable to the top-level hierarchy name. |
QSYS_SIMDIR | If the simulation files generated by Platform Designer are not in the simulation working directory, use the QSYS_SIMDIR variable to specify the directory location of the Platform Designer simulation files. |
QUARTUS_INSTALL_DIR | Points to the Quartus installation directory that contains the device family library. |
Top-Level Simulation HDL File for a Testbench System
The example below shows the pattern_generator_tb generated for a Platform Designer system called pattern_generator. The top.sv file defines the top-level module that instantiates the pattern_generator_tb simulation model, as well as a custom SystemVerilog test program with BFM transactions, called test_program.
```verilog
module top();
  pattern_generator_tb tb();
  test_program       pgm();
endmodule
```
Adding Assertion Monitors for Simulation
You can insert an Avalon® -MM monitor between Avalon® -MM master and slave interfaces. Similarly, you can insert an Avalon® -ST monitor between Avalon® -ST source and sink interfaces.
Simulating Software Running on a Nios II Processor
- Click Generate > Generate Testbench System.
- In the Generation dialog box, select Simple, BFMs for clocks and resets.
- For Create testbench simulation model, select Verilog or VHDL.
- Click Generate.
- Open the Nios® II Software Build Tools for Eclipse.
- Set up an application project and board support package (BSP) for the <system>.sopcinfo file.
- To simulate, right-click the application project in Eclipse, and then click Run as > Nios® II ModelSim® . This command prepares the ModelSim® simulation environment, and compiles and loads the Nios® II software simulation.
- To run the simulation in ModelSim® , type run -all in the ModelSim® transcript window.
- Set the ModelSim® settings and select the Platform Designer Testbench Simulation Package Descriptor (.spd) file, <system>_tb.spd. The .spd file generates with the testbench simulation model for Nios® II designs, and specifies the files you require for Nios® II simulation.
Integrating a Platform Designer System with the Intel Quartus Prime Software
You can choose to include the .qsys file automatically in your Intel® Quartus® Prime project when you generate your Platform Designer system by turning on the Automatically add Intel® Quartus® Prime IP files to all projects option in the Intel® Quartus® Prime software (Tools > Options > IP Settings). If this option is turned off, the Intel® Quartus® Prime software asks you if you want to include the .qsys file in your Intel® Quartus® Prime project after you exit Platform Designer.
If you want file generation to occur as part of the Intel® Quartus® Prime software's compilation, you should include the .qsys file in your Intel® Quartus® Prime project. If you want to manually control file generation outside of the Intel® Quartus® Prime software, you should include the .qip file in your Intel® Quartus® Prime project.
Does Intel® Quartus® Prime Overwrite Platform Designer-Generated Files During Compilation?
Platform Designer supports standard and legacy device generation. Standard device generation refers to generating files for the Intel® Arria® 10 device, and later device families. Legacy device generation refers to generating files for device families prior to the release of the Intel® Arria® 10 device, including MAX 10 devices.
When you integrate your Platform Designer system with the Intel® Quartus® Prime software, if a .qsys file is included as a source file, Platform Designer generates standard device files under <system>/ next to the location of the .qsys file. For legacy devices, if a .qsys file is included as a source file, Platform Designer generates HDL files in the Intel® Quartus® Prime project directory under /db/ip.
For standard devices, Platform Designer-generated files are only overwritten during Intel® Quartus® Prime compilation if the .qip file is removed or missing. For legacy devices, each time you compile your Intel® Quartus® Prime project with a .qsys file, the Platform Designer-generated files are overwritten. Therefore, you should not edit Platform Designer-generated HDL in the /db/ip directory; any edits made to these files are lost and never used as input to the Quartus HDL synthesis engine.
Integrate a Platform Designer System and the Intel Quartus Prime Software With the .qsys File
- In Platform Designer, create and save a Platform Designer system.
- To automatically include the .qsys file in your Intel® Quartus® Prime project during compilation, in the Intel® Quartus® Prime software, select Tools > Options > IP Settings, and turn on Automatically add Intel® Quartus® Prime IP files to all projects.
- If the Automatically add Intel® Quartus® Prime IP files to all projects option is turned off, the Intel® Quartus® Prime software displays a dialog box when you exit Platform Designer, asking whether you want to add the .qsys file to your Intel® Quartus® Prime project. Click Yes to add the .qsys file to your project.
- In the Intel® Quartus® Prime software, select Processing > Start Compilation.
Integrate a Platform Designer System and the Intel Quartus Prime Software With the .qip File
- In Platform Designer, create and save a Platform Designer system.
- In Platform Designer, click Generate HDL.
- In the Intel® Quartus® Prime software, select Assignments > Settings > Files.
- On the Files page, use the controls to locate your .qip file, and then add it to your Intel® Quartus® Prime project.
- In the Intel® Quartus® Prime software, select Processing > Start Compilation.
Managing Hierarchical Platform Designer Systems
All hierarchical Platform Designer systems appear in the IP Catalog under Project > System. You select the system from the IP Catalog to reuse the system across multiple designs. In a team-based hierarchical design flow, you can divide large designs into subsystems and allow team members to develop subsystems simultaneously.
Adding a Subsystem to a Platform Designer System
- Create a Platform Designer system to use as the subsystem.
- Open a Platform Designer system to contain the subsystem.
- On the System Contents tab, use any of the following methods to add the subsystem:
  - Right-click anywhere in the System Contents and click Add a new subsystem to the current system.
  - Click the Add a new subsystem to the current system button on the toolbar.
  - Press Ctrl+Shift+N.
- In the Confirm New System Name dialog box, confirm or specify the new system file name and click OK. The system appears as a new subsystem in the System Contents.
Figure 32. Add a Subsystem to a Platform Designer Design
Viewing and Traversing Subsystem Contents
- Open a Platform Designer system that contains a subsystem.
- Use any of the following methods to view the subsystem contents:
  - Double-click a subsystem in the Hierarchy tab. The subsystem opens in the System Contents tab.
  - Right-click a system in the Hierarchy, System Contents, or Schematic tabs, and then select Drill into subsystem.
  - Press Ctrl+Shift+D in the System Contents tab.
- Use any of the following System Contents or Schematic tab toolbar buttons to traverse the system and subsystems:
Table 14. System Contents and Schematic Tab Navigation Buttons
Button | Description |
---|---|
Move to the top of the hierarchy | Navigates to the top-level (parent) .qsys file for the system. |
Move up one level of hierarchy | Navigates up one hierarchy level from the current selection. |
Drill into a subsystem to explore its contents | Opens the subsystem you select in the System Contents tab. |
Note: In the System Contents tab, you can press Ctrl+Shift+U to navigate up one level, and Ctrl+Shift+D to drill into a system.
Editing a Subsystem
- Open a Platform Designer system that contains a subsystem.
- In the System Contents or Schematic tabs, use the Move Up, Move Down, Move to Top, and Move to Bottom toolbar buttons to navigate to the system level you want to edit. Platform Designer updates to reflect your selection.
- To edit a system, double-click the system in the Hierarchy tab. The system opens and is available for edit in all Platform Designer views.
- In the System Contents tab, you can rename any element, add, remove, or duplicate connections, and export interfaces, as appropriate.
Note: Changes to a subsystem affect all instances. Platform Designer identifies unsaved changes to a subsystem with an asterisk next to the subsystem in the Hierarchy tab.
Changing a Component's Hierarchy Level
You can lower the hierarchical level of a component, even into its own subsystem, which can simplify the top-level system view. You can also raise the level of a component or subsystem to share the component or subsystem between two unique subsystems. Management of hierarchy levels facilitates system optimization and can reduce complex connectivity in your subsystems.
- Open a Platform Designer system that contains a subsystem.
- In the System Contents tab, to group and change the hierarchy level of multiple components that share a system-level component, multi-select the components, right-click, and then click Push down into new subsystem. Platform Designer pushes the components into their own subsystem and re-establishes the exported signals and connectivity in the new location.
- In the System Contents tab, to pull a component up out of a subsystem, select the component, and then click Pull up. Platform Designer pulls the component up out of the subsystem and re-establishes the exported signals and connectivity in the new location.
Saving a Subsystem
Follow these steps to save a subsystem:
- Open a Platform Designer system that contains a subsystem.
- Click File > Save to save your Platform Designer design.
- In the Confirm New System Filenames dialog box, click OK to accept the subsystem file names.
Note: If you have not yet saved your top-level system, or multiple subsystems, you can type a new name, and then press Enter to move to the next unnamed system.
- In the Confirm New System Filenames dialog box, to edit the name of a subsystem, click the subsystem, and then type the new name.
Exporting a System as an IP Component
- Open a Platform Designer system.
- Click File > Export System as hw.tcl Component.
Hierarchical System Using Instance Parameters Example
Follow the steps below to create a system that contains an on-chip memory IP component with instance parameters, and the instantiating higher-level Platform Designer system. With your completed system, you can vary the values of the instance parameters to review their effect within the On-Chip Memory component.
Create the Memory System
- In Platform Designer, click File > New System.
- Right-click clk_0, and then click Remove.
- In the IP Catalog search box, type on-chip to locate the On-Chip Memory (RAM or ROM) component.
- Double-click to add the On-Chip Memory component to your system. The parameter editor opens. When you click Finish, Platform Designer adds the component to your system with default selections.
- Rename the On-Chip Memory component to onchip_memory_0.
- In the System Contents tab, for the clk1 element (onchip_memory_0), double-click the Export column.
- In the System Contents tab, for the s1 element (onchip_memory_0), double-click the Export column.
- In the System Contents tab, for the reset1 element (onchip_memory_0), double-click the Export column.
- Click File > Save to save your Platform Designer system as memory_system.qsys.
Figure 35. On-Chip Memory Component System and Instance Parameters (memory_system.qsys)
Add Platform Designer Instance Parameters
- In the memory_system.qsys system, click View > Instance Parameters.
- Click Add Parameter.
- In the Name and Display Name columns, rename the new_parameter_0 parameter to component_data_width.
- For component_data_width, select Integer for Type, and 8 as the Default Value.
- Click Add Parameter.
- In the Name and Display Name columns, rename the new_parameter_0 parameter to component_memory_size.
- For component_memory_size, select Integer for Type, and 1024 as the Default Value.
Figure 36. Platform Designer Instance Parameters Tab
- In the Instance Script section, type the commands that control how Platform Designer passes parameters to an instance from the higher-level system. For example, in the script below, the onchip_memory_0 instance receives its dataWidth and memorySize parameter values from the instance parameters that you define.

```tcl
# request a specific version of the scripting API
package require -exact qsys 15.0

# Set the name of the procedure to manipulate parameters
set_module_property COMPOSITION_CALLBACK compose

proc compose {} {
    # manipulate parameters in here
    set_instance_parameter_value onchip_memory_0 dataWidth [get_parameter_value component_data_width]
    set_instance_parameter_value onchip_memory_0 memorySize [get_parameter_value component_memory_size]
    set value [get_instance_parameter_value onchip_memory_0 dataWidth]
    send_message info "Value of onchip memory ram data width is $value"
}
```
- Click Preview Instance to open the parameter editor GUI. Preview Instance allows you to see how an instance of a system appears when you use it in another system.
Figure 37. Preview an Instance in the Parameter Editor
- Click File > Save.
Create a Platform Designer Instantiating Memory System
- In Platform Designer, click File > New System.
- Right-click clk_0, and then click Remove.
- In the IP Catalog, under System, double-click memory_system. The parameter editor opens. When you click Finish, Platform Designer adds the component to your system.
- In the System Contents tab, for each element under system_0, double-click the Export column.
- Click File > Save to save your Platform Designer system as instantiating_memory_system.qsys.
Figure 38. Instantiating Memory System (instantiating_memory_system.qsys)
Apply Instance Parameters at a Higher-Level Platform Designer System and Pass the Parameters to the Instantiated Lower-Level System
- In the instantiating_memory_system.qsys system, in the Hierarchy tab, click and expand system_0 (memory_system.qsys).
- Click View > Parameters. The instance parameters for memory_system.qsys appear in the parameter editor.
Figure 39. memory_system.qsys Instance Parameters Displayed in the Parameter Editor
- On the Parameters tab, change the value of component_data_width to 16, and component_memory_size to 2048.
- In the Hierarchy tab, under system_0 (memory_system.qsys), click onchip_memory_0. When you select onchip_memory_0, the new parameter values for Data width and Total memory size are displayed.
Figure 40. Changing the Values of Instance Parameters
Creating a System with Platform Designer Revision History
The following revision history applies to this chapter:
Document Version | Intel® Quartus® Prime Version | Changes |
---|---|---|
2019.05.14 | 18.1.0 | |
2018.12.15 | 18.1.0 | |
2018.09.24 | 18.1.0 | |
2017.11.06 | 17.1.0 | |
2016.05.03 | 16.0.0 | |
2015.11.02 | 15.1.0 | |
2015.05.04 | 15.0.0 | |
December 2014 | 14.1.0 | |
August 2014 | 14.0a10.0 | |
June 2014 | 14.0.0 | |
November 2013 | 13.1.0 | |
May 2013 | 13.0.0 | |
November 2012 | 12.1.0 | |
June 2012 | 12.0.0 | |
November 2011 | 11.1.0 | |
May 2011 | 11.0.0 | |
December 2010 | 10.1.0 | Initial release. |
Optimizing Platform Designer System Performance
The foundation of any system is the interconnect logic that connects hardware blocks or components. Creating interconnect logic is time consuming and prone to errors, and existing interconnect logic is difficult to modify when design requirements change. The Platform Designer system integration tool addresses these issues and provides an automatically generated and optimized interconnect designed to satisfy the system requirements.
Platform Designer supports Avalon® , AMBA* 3 AXI (version 1.0), AMBA* 4 AXI (version 2.0), AMBA* 4 AXI-Lite (version 2.0), AMBA* 4 AXI-Stream (version 1.0), and AMBA* 3 APB (version 1.0) interface specifications.
Designing with Avalon and AXI Interfaces
Avalon® Streaming ( Avalon® -ST) links connect point-to-point, unidirectional interfaces and are typically used in data stream applications. Each pair of components is connected without any requirement to arbitrate between the data source and sink.
Because Platform Designer supports multiplexed memory-mapped and streaming connections, you can implement systems that use multiplexed logic for control and streaming for data in a single design.
Designing Streaming Components
For example, if the component’s Avalon® -ST output or source of streaming data is back-pressured because the ready signal is deasserted, then the component must back-pressure its input or sink interface to avoid overflow.
You can use a FIFO to back-pressure internally on the output side of the component so that the input can accept more data even if the output is back-pressured. Then, you can use the FIFO almost full flag to back-pressure the sink interface or input data when the FIFO has only enough space to satisfy the internal latency. You can drive the data valid signal of the output or source interface with the FIFO not empty flag when that data is available.
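A minimal sketch of this scheme follows, with a small internal FIFO standing in for whatever buffering your component uses; the module name, depth, and almost-full threshold are illustrative assumptions, not a specific Intel FPGA IP.

```verilog
// Avalon-ST pass-through stage that absorbs output backpressure with an
// internal FIFO. Backpressure the sink via an almost-full threshold, and
// drive the source valid from "FIFO not empty".
module st_backpressure_fifo #(
  parameter DATA_WIDTH = 32,
  parameter DEPTH      = 16,   // illustrative FIFO depth
  parameter THRESHOLD  = 12    // almost-full level that backpressures the sink
) (
  input  wire                  clk,
  input  wire                  reset,
  // Avalon-ST sink (input)
  input  wire [DATA_WIDTH-1:0] in_data,
  input  wire                  in_valid,
  output wire                  in_ready,
  // Avalon-ST source (output)
  output wire [DATA_WIDTH-1:0] out_data,
  output wire                  out_valid,
  input  wire                  out_ready
);
  reg [DATA_WIDTH-1:0]      mem [0:DEPTH-1];
  reg [$clog2(DEPTH)-1:0]   wr_ptr, rd_ptr;
  reg [$clog2(DEPTH+1)-1:0] used;             // number of words stored

  wire wr = in_valid  && in_ready;
  wire rd = out_valid && out_ready;

  // Backpressure the input before the FIFO is completely full so that
  // in-flight data still has room to land.
  assign in_ready  = (used < THRESHOLD);
  // Data is valid on the source whenever the FIFO is not empty.
  assign out_valid = (used != 0);
  assign out_data  = mem[rd_ptr];

  always @(posedge clk) begin
    if (reset) begin
      wr_ptr <= 0; rd_ptr <= 0; used <= 0;
    end else begin
      if (wr) begin mem[wr_ptr] <= in_data; wr_ptr <= wr_ptr + 1'b1; end
      if (rd) rd_ptr <= rd_ptr + 1'b1;
      used <= used + (wr ? 1'b1 : 1'b0) - (rd ? 1'b1 : 1'b0);
    end
  end
endmodule
```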
Designing Memory-Mapped Components
When designing with memory-mapped components, you can implement any component that contains multiple registers mapped to memory locations, for example, a set of four output registers to support software read back from logic. Components that implement read and write memory-mapped transactions require three main building blocks: an address decoder, a register file, and a read multiplexer.
The decoder enables the appropriate 32-bit or 64-bit register for writes. For reads, the address bits drive the multiplexer selection bits. The read signal registers the data from the multiplexer, adding a pipeline stage so that the component can achieve a higher clock frequency.
This slave component has four write wait states and one read wait state. Alternatively, if you want high throughput, you may set both the read and write wait states to zero, and then specify a read latency of one, because the component also supports pipelined reads.
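The following is a minimal sketch of such a component, assuming four 32-bit registers, zero wait states, and a fixed read latency of one cycle (the higher-throughput variant mentioned above); the module and signal names are illustrative rather than generated by Platform Designer.

```verilog
// Memory-mapped slave with four 32-bit registers: an address decoder for
// writes, a register file, and a registered read multiplexer that adds one
// pipeline stage (fixed read latency of 1).
module regfile_mm_slave (
  input  wire        clk,
  input  wire        reset,
  input  wire [1:0]  address,
  input  wire        write,
  input  wire [31:0] writedata,
  input  wire        read,
  output reg  [31:0] readdata
);
  reg [31:0] regs [0:3];
  integer i;

  always @(posedge clk) begin
    if (reset) begin
      for (i = 0; i < 4; i = i + 1)
        regs[i] <= 32'd0;
      readdata <= 32'd0;
    end else begin
      // Address decoder enables the selected register for writes.
      if (write)
        regs[address] <= writedata;
      // Registered read multiplexer: address bits select, read registers data.
      if (read)
        readdata <= regs[address];
    end
  end
endmodule
```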
Using Hierarchy in Systems
Hierarchy can simplify verification control of slaves connected to each master in a memory-mapped system. Before you implement subsystems in your design, you should plan the system hierarchical blocks at the top-level, using the following guidelines:
- Plan shared resources—Determine the best location for shared resources in the system hierarchy. For example, if two subsystems share resources, add the components that use those resources to a higher-level system for easy access.
- Plan shared address space between subsystems—Planning the address space ensures you can set appropriate sizes for bridges between subsystems.
-
Plan how much
latency you may need to add to your system—When you add an
Avalon®
-MM
Pipeline Bridge between subsystems, you may add latency to the overall system. You
can reduce the added latency by parameterizing the bridge with zero cycles of
latency, and by turning off the pipeline command and response signals. Figure 42. Avalon® -MM Pipeline Bridge
In this example, two Nios® II processor subsystems share resources for message passing. Bridges in each subsystem export the Nios® II data master to the top-level system that includes the mutex (mutual exclusion component) and shared memory component (which could be another on-chip RAM, or a controller for an off-chip RAM device).
You can also design systems that process multiple data channels by instantiating the same subsystem for each channel. This approach is easier to maintain than a larger, non-hierarchical system. Additionally, such systems are easier to scale because you can calculate the required resources as a multiple of the subsystem requirements.
Using Concurrency in Memory-Mapped Systems
Implementing Concurrency With Multiple Masters
Implementing concurrency requires multiple masters in a Platform Designer system. Systems that include a processor contain at least two master interfaces because the processors include separate instruction and data masters. You can categorize master components as follows:
- General purpose processors, such as Nios® II processors
- DMA (direct memory access) engines
- Communication interfaces, such as PCI Express
Implementing Concurrency With Multiple Slaves
You can create multiple slave interfaces for a particular function to increase concurrency in your design.
In this example, there are two channel processing systems. In the first, four masters must arbitrate for the single slave interface of the channel processor. In the second, each master drives a dedicated slave interface, allowing all master interfaces to simultaneously access the slave interfaces of the component. Arbitration is not necessary when there is a single master and slave interface.
Implementing Concurrency with DMA Engines
In some systems, you can use DMA engines to increase throughput. You can use a DMA engine to transfer blocks of data between interfaces, which then frees the CPU from doing this task. A DMA engine transfers data between a programmed start and end address without intervention, and the data throughput is dictated by the components connected to the DMA. Factors that affect data throughput include data width and clock frequency.
In this example, the system can sustain more concurrent read and write operations by including more DMA engines. Accesses to the read and write buffers in the top system are split between two DMA engines, as shown in the Dual DMA Channels at the bottom of the figure.
The DMA engine operates with Avalon® -MM write and read masters. An AXI DMA typically has only one master, because in AXI, the write and read channels on the master are independent and can process transactions simultaneously.
Inserting Pipeline Stages to Increase System Frequency
Platform Designer provides the Limit interconnect pipeline stages to option on the Interconnect Requirements tab to automatically add pipeline stages to the Platform Designer interconnect when you generate a system.
The Limit interconnect pipeline stages to parameter in the Interconnect Requirements tab allows you to define the maximum Avalon® -ST pipeline stages that Platform Designer can insert during generation. You can specify between 0 and 4 pipeline stages, where 0 means that the interconnect has a combinational datapath. You can specify a unique interconnect pipeline stage value for each subsystem.
For more information, refer to Interconnect Pipelining.
Using Bridges
An Avalon® bridge has an Avalon® -MM slave interface and an Avalon® -MM master interface. You can have many components connected to the bridge slave interface, or many components connected to the bridge master interface. You can also have a single component connected to a single bridge slave or master interface.
You can configure the data width of the bridge, which can affect how Platform Designer generates bus sizing logic in the interconnect. Both interfaces support Avalon® -MM pipelined transfers with variable latency, and can also support configurable burst lengths.
Transfers to the bridge slave interface are propagated to the master interface, which connects to components downstream from the bridge. Bridges can provide more control over interconnect pipelining than the Limit interconnect pipeline stages to option.
Using Bridges to Increase System Frequency
Inserting Pipeline Bridges
The Avalon® -MM pipeline bridge component integrates into any Platform Designer system. The pipeline bridge options can increase logic utilization and read latency. The change in topology may also reduce concurrency if multiple masters arbitrate for the bridge. You can use the Avalon® -MM pipeline bridge to control topology without adding a pipeline stage. A pipeline bridge that does not add a pipeline stage is optimal in some latency-sensitive applications. For example, a CPU may benefit from minimal latency when accessing memory.
Implementing Command Pipelining (Master-to-Slave)
The arbitration logic for the slave interface must multiplex the address, writedata, and burstcount signals. The multiplexer width increases proportionally with the number of masters connecting to a single slave interface. The increased multiplexer width may become a timing critical path in the system. If a single pipeline bridge does not provide enough pipelining, you can instantiate multiple instances of the bridge in a tree structure to increase the pipelining and further reduce the width of the multiplexer at the slave interface.
Implementing Response Pipelining (Slave-to-Master)
The interconnect inserts a multiplexer for every read datapath back to the master. As the number of slaves supporting read transfers connecting to the master increases, the width of the read data multiplexer also increases. If the performance increase is insufficient with one bridge, you can use multiple bridges in a tree structure to improve fMAX.
Using Clock Crossing Bridges
The clock crossing bridge contains a pair of clock crossing FIFOs, which isolate the master and slave interfaces in separate, asynchronous clock domains. Transfers to the slave interface are propagated to the master interface.
When you use a FIFO clock crossing bridge for the clock domain crossing, you add data buffering. Buffering allows pipelined read masters to post multiple reads to the bridge, even if the slaves downstream from the bridge do not support pipelined transfers.
You can also use a clock crossing bridge to place high and low frequency components in separate clock domains. If you limit the fast clock domain to the portion of your design that requires high performance, you may achieve a higher fMAX for this portion of the design. For example, the majority of processor peripherals in embedded designs do not need to operate at high frequencies, therefore, you do not need to use a high-frequency clock for these components. When you compile a design with the Intel® Quartus® Prime software, compilation may take more time when the clock frequency requirements are difficult to meet because the Fitter needs more time to place registers to achieve the required fMAX. To reduce the amount of effort that the Fitter uses on low priority and low performance components, you can place these behind a clock crossing bridge operating at a lower frequency, allowing the Fitter to increase the effort placed on the higher priority and higher frequency datapaths.
Using Bridges to Minimize Design Logic
Avoiding Speed Optimizations That Increase Logic
You can add an additional pipeline stage with a pipeline bridge between masters and slaves to reduce the amount of combinational logic between registers, which can increase system performance. If you can increase the fMAX of your design logic, you may be able to turn off the Intel® Quartus® Prime software optimization settings, such as the Perform register duplication setting. Register duplication creates duplicate registers in two or more physical locations in the FPGA to reduce register-to-register delays. You may also want to choose Speed for the optimization method, which typically results in higher logic utilization due to logic duplication. By making use of the registers or FIFOs available in the bridges, you can increase the design speed and avoid needless logic duplication or speed optimizations, thereby reducing the logic utilization of the design.
Limiting Concurrency
The amount of logic generated for the interconnect often increases as the system becomes larger because Platform Designer creates arbitration logic for every slave interface that is shared by multiple master interfaces. Platform Designer inserts multiplexer logic between master interfaces that connect to multiple slave interfaces if both support read datapaths.
Most embedded processor designs contain components that are either incapable of supporting high data throughput, or do not need to be accessed frequently. These components can contain master or slave interfaces. Because the interconnect supports concurrent accesses, you may want to limit concurrency by inserting bridges into the datapath to limit the amount of arbitration and multiplexer logic generated.
For example, if a system contains three master and three slave interfaces that are interconnected, Platform Designer generates three arbiters and three multiplexers for the read datapath. If these masters do not require a significant amount of simultaneous throughput, you can reduce the resources that your design consumes by connecting the three masters to a pipeline bridge. The bridge controls the three slave interfaces and reduces the interconnect into a bus structure. Platform Designer creates one arbitration block between the bridge and the three masters, and a single read datapath multiplexer between the bridge and three slaves, and prevents concurrency. This implementation is similar to a standard bus architecture.
You should not use this method for high throughput datapaths to ensure that you do not limit overall system performance.
Using Bridges to Minimize Adapter Logic
Platform Designer creates burst adapters when the maximum burst length of the master is greater than the master burst length of the slave. The adapter logic creates extra logic resources, which can be substantial when your system contains master interfaces connected to many components that do not share the same characteristics. By placing bridges in your design, you can reduce the amount of adapter logic that Platform Designer generates.
Determining Effective Placement of Bridges
To determine the effective placement of a bridge, you should initially analyze each master in your system to determine if the connected slave devices support different bursting capabilities or operate in a different clock domain. The maximum burstcount of a component is visible as the burstcount signal in the HDL file of the component. The maximum burst length is 2^(width(burstcount) - 1); therefore, if the burstcount width is four bits, the maximum burst length is eight. If no burstcount signal is present, the component does not support bursting or has a burst length of 1.
To determine if the system requires a clock crossing adapter between the master and slave interfaces, check the Clock column for the master and slave interfaces. If the clock is different for the master and slave interfaces, Platform Designer inserts a clock crossing adapter between them. To avoid creating multiple adapters, you can place the components containing slave interfaces behind a bridge so that Platform Designer creates a single adapter. By placing multiple components with the same burst or clock characteristics behind a bridge, you limit concurrency and the number of adapters.
You can also use a bridge to separate AXI and Avalon domains to minimize burst adaptation logic. For example, if there are multiple Avalon slaves that are connected to an AXI master, you can consider inserting a bridge to access the adaptation logic once before the bridge, instead of once per slave. This implementation results in latency, and you would also lose concurrency between reads and writes.
Changing the Response Buffer Depth
When you use automatic clock-crossing adapters, Platform Designer determines the required depth of FIFO buffering based on the slave properties. If a slave has a high Maximum Pending Reads parameter, the resulting deep response buffer FIFO that Platform Designer inserts between the master and slave can consume a lot of device resources. To control the response FIFO depth, you can use a clock crossing bridge and manually adjust its FIFO depth to trade off throughput with smaller memory utilization.
For example, if you have masters that cannot saturate the slave, you do not need response buffering. Using a bridge reduces the FIFO memory depth and reduces the Maximum Pending Reads available from the slave.
Considering the Effects of Using Bridges
Before you use pipeline or clock crossing bridges in a design, you should carefully consider their effects. Bridges can have any combination of consequences on your design, which could be positive or negative. Benchmarking your system before and after inserting bridges can help you determine the impact to the design.
Increased Latency
Adding a bridge to a design has an effect on the read latency between the master and the slave. Depending on the system requirements and the type of master and slave, this latency increase may not be acceptable in your design.
Acceptable Latency Increase
For a pipeline bridge, Platform Designer adds a cycle of latency for each pipeline option that is enabled. The buffering in the clock crossing bridge also adds latency. If you use a pipelined or burst master that posts many read transfers, the increase in latency does not impact performance significantly because the latency increase is very small compared to the length of the data transfer.
For example, if you use a pipelined read master such as a DMA controller to read data from a component with a fixed read latency of four clock cycles, but only perform a single word transfer, the overhead is three clock cycles out of the total of four. This is true when there is no additional pipeline latency in the interconnect. The read throughput is only 25%.
However, if 100 words of data are transferred without interruptions, the overhead is three cycles out of the total of 103 clock cycles. This corresponds to a read efficiency of approximately 97% when there is no additional pipeline latency in the interconnect. Adding a pipeline bridge to this read path adds two extra clock cycles of latency, so the transfer requires 105 cycles to complete, corresponding to an efficiency of approximately 95%. Although the efficiency decreases by about 2%, adding the bridge may increase the fMAX by 5%; if the clock frequency can be increased, the overall throughput improves. As the number of words transferred increases, the efficiency approaches 100%, whether or not a pipeline bridge is present.
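As a rough check on these numbers, for a transfer of N words through a read path with a total read latency of L clock cycles and no other stalls:

\[
\text{read efficiency} \approx \frac{N}{N + L - 1}
\qquad\text{e.g.}\qquad
\frac{1}{1+3} = 25\%,\quad
\frac{100}{100+3} \approx 97\%,\quad
\frac{100}{100+5} \approx 95\%
\]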
Unacceptable Latency Increase
Processors are sensitive to high latency read times and typically retrieve data for use in calculations that cannot proceed until the data arrives. Before adding a bridge to the datapath of a processor instruction or data master, determine whether the clock frequency increase justifies the added latency.
A Nios® II processor instruction master has a cache memory with a read latency of four cycles and returns eight sequential words of data for each read. At 100 MHz, the first read takes 40 ns to complete. Each successive word takes 10 ns, so that eight reads complete in 110 ns.
Adding a clock crossing bridge allows the memory to operate at 125 MHz. However, this increase in frequency is negated by the increase in latency: if the clock crossing bridge adds six clock cycles of latency at 100 MHz, and the memory continues to operate with a read latency of four clock cycles, then the first read from memory takes 100 ns, and each successive word takes 10 ns, because reads arrive at the frequency of the processor, which is 100 MHz. In total, eight reads complete after 170 ns. Although the memory operates at a higher clock frequency, the frequency at which the master operates limits the throughput.
Limited Concurrency
Placing a bridge between multiple master and slave interfaces limits the number of concurrent transfers your system can initiate. This limitation is the same when connecting multiple master interfaces to a single slave interface. The slave interface of the bridge is shared by all the masters and, as a result, Platform Designer creates arbitration logic. If the components placed behind a bridge are infrequently accessed, this concurrency limitation may be acceptable.
Bridges can have a negative impact on system performance if you use them inappropriately. For example, if multiple memories are used by several masters, you should not place the memory components behind a bridge. The bridge limits memory performance by preventing concurrent memory accesses. Placing multiple memory components behind a bridge can cause the separate slave interfaces to appear as one large memory to the masters accessing the bridge; all masters must access the same slave interface.
For example, a memory subsystem with one bridge that acts as a single slave interface for the Avalon® -MM Nios® II and DMA masters results in a bottleneck architecture; the bridge acts as a bottleneck between the two masters and the memories.
If the fMAX of your memory interfaces is low and you want to use a pipeline bridge between subsystems, you can place each memory behind its own bridge, which increases the fMAX of the system without sacrificing concurrency.
Address Space Translation
The slave interface of a pipeline or clock crossing bridge has a base address and address span. You can set the base address, or allow Platform Designer to set it automatically. The address of the slave interface is the base offset address of all the components connected to the bridge. The address of components connected to the bridge is the sum of the base offset and the address of that component.
The master interface of the bridge drives only the address bits that represent the offset from the base address of the bridge slave interface. Any time a master accesses a slave through a bridge, both addresses must be added together, otherwise the transfer fails. The Address Map tab displays the addresses of the slaves connected to each master and includes address translations caused by system bridges.
In this example, the Nios® II processor connects to a bridge located at base address 0x1000, a slave connects to the bridge master interface at an offset of 0x20, and the processor performs a write transfer to the fourth 32-bit or 64-bit word within the slave. Nios® II drives the address 0x102C to interconnect, which is within the address range of the bridge. The bridge master interface drives 0x2C, which is within the address range of the slave, and the transfer completes.
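As a quick check of the translation (taking the 32-bit word case, where the fourth word sits at byte offset 0xC within the slave):

\[
\mathtt{0x1000} + \mathtt{0x20} + \mathtt{0xC} = \mathtt{0x102C},
\qquad
\mathtt{0x102C} - \mathtt{0x1000} = \mathtt{0x2C}
\]

which is the offset that the bridge master interface drives to the slave.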
Address Coherency
To simplify the system design, all masters should access slaves at the same location. In many systems, a processor passes buffer locations to other mastering components, such as a DMA controller. If the processor and DMA controller do not access the slave at the same location, Platform Designer must compensate for the differences.
A Nios® II processor and DMA controller access a slave interface located at address 0x20. The processor connects directly to the slave interface. The DMA controller connects to a pipeline bridge located at address 0x1000, which then connects to the slave interface. Because the DMA controller accesses the pipeline bridge first, it must drive 0x1020 to access the first location of the slave interface. Because the processor accesses the slave from a different location, you must maintain two base addresses for the slave device.
To avoid the requirement for two addresses, you can add an additional bridge to the system, set its base address to 0x1000, and then disable all the pipelining options in the second bridge so that the bridge has minimal impact on system timing and resource utilization. Because this second bridge has the same base address as the original bridge, the processor and DMA controller access the slave interface with the same address range.
Increasing Transfer Throughput
Increasing the transfer efficiency of the master and slave interfaces in your system increases the throughput of your design. Designs with strict cost or power requirements benefit from increasing the transfer efficiency because you can then use less expensive, lower frequency devices. Designs requiring high performance also benefit from increased transfer efficiency because increased efficiency improves the performance of frequency–limited hardware.
Throughput is the number of symbols (such as bytes) of data that Platform Designer can transfer in a given clock cycle. Read latency is the number of clock cycles between the address and data phase of a transaction. For example, a read latency of two means that the data is valid two cycles after the address is posted. If the master must wait for one request to finish before the next begins, such as with a processor, then the read latency is very important to the overall throughput.
You can measure throughput and latency in simulation by observing the waveforms, or using the verification IP monitors.
Using Pipelined Transfers
Pipelined transfers increase the read efficiency by allowing a master to post multiple reads before data from an earlier read returns. Masters that support pipelined transfers post transfers continuously, relying on the readdatavalid signal to indicate valid data. Slaves support pipelined transfers by including the readdatavalid signal or operating with a fixed read latency.
AXI masters declare how many outstanding writes and reads they can issue with the writeIssuingCapability and readIssuingCapability parameters. In the same way, a slave can declare how many reads it can accept with the readAcceptanceCapability parameter. AXI masters with a read issuing capability greater than one are pipelined in the same way as Avalon® masters that use the readdatavalid signal.
Using the Maximum Pending Reads Parameter
If you create a custom component with a slave interface supporting variable-latency reads, you must specify the Maximum Pending Reads parameter in the Component Editor. Platform Designer uses this parameter to generate the appropriate interconnect and represent the maximum number of read transfers that your pipelined slave component can process. If the number of reads presented to the slave interface exceeds the Maximum Pending Reads parameter, then the slave interface must assert waitrequest.
Optimizing the value of the Maximum Pending Reads parameter requires an understanding of the latencies of your custom components. This parameter should be based on the component’s highest read latency for the various logic paths inside the component. For example, if your pipelined component has two modes, one requiring two clock cycles and the other five, set the Maximum Pending Reads parameter to 5 to allow your component to pipeline five transfers and eliminate dead cycles after the initial five-cycle latency.
You can also determine the correct value for the Maximum Pending Reads parameter by monitoring the number of reads that are pending during system simulation or while running the hardware. To use this method, set the parameter to a high value and use a master that issues read requests on every clock. You can use a DMA for this task if the data is written to a location that does not frequently assert waitrequest. If you implement this method, you can observe your component with a logic analyzer or built-in monitoring hardware.
Choosing the correct value for the Maximum Pending Reads parameter of your custom pipelined read component is important. If you underestimate the parameter value, you may cause a master interface to stall with a waitrequest until the slave responds to an earlier read request and frees a FIFO position.
The Maximum Pending Reads parameter controls the depth of the response FIFO inserted into the interconnect for each master connected to the slave. This FIFO does not use significant hardware resources. Overestimating the Maximum Pending Reads parameter results in a slight increase in hardware utilization. For these reasons, if you are not sure of the optimal value, you should overestimate this value.
If your system includes a bridge, you must set the Maximum Pending Reads parameter on the bridge as well. To allow maximum throughput, this value should be equal to or greater than the Maximum Pending Reads value for the connected slave that has the highest value. You can limit the maximum pending reads of a slave and reduce the buffer depth by reducing the parameter value on the bridge if the high throughput is not required. If you do not know the Maximum Pending Reads value for all the slave components, you can monitor the number of reads that are pending during system simulation or while running the hardware. To use this method, set the Maximum Pending Reads parameter to a high value and use a master that issues read requests on every clock, such as a DMA. Then, reduce the number of maximum pending reads of the bridge until the bridge reduces the performance of any masters accessing the bridge.
Arbitration Shares and Bursts
You can adjust the arbitration process by assigning a larger number of shares to masters that need greater throughput. The larger the arbitration share, the more transfers are allocated to the master to access a slave. The master gets uninterrupted access to the slave for its number of shares, as long as the master is reading or writing.
If a master cannot post a transfer, and other masters are waiting to gain access to a particular slave, the arbiter grants access to another master. This mechanism prevents a master from wasting arbitration cycles if it cannot post back-to-back transfers. A bursting transaction contains multiple beats (or words) of data, starting from a single address. Bursts allow a master to maintain access to a slave for more than a single word transfer. If a bursting master posts a write transfer with a burst length of eight, it is guaranteed arbitration for eight write cycles.
You can assign arbitration shares to an Avalon® -MM bursting master and to AXI masters (which are always considered bursting masters). Each share consists of one burst transaction (such as a multi-cycle write), and allows a master to complete a number of bursts before arbitration switches to the next master.
Differences Between Arbitration Shares and Bursts
The following three key characteristics distinguish arbitration shares and bursts:
- Arbitration Lock
- Sequential Addressing
- Burst Adapters
Arbitration Lock
When a master posts a burst transfer, the arbitration is locked for that master; consequently, the bursting master should be capable of sustaining transfers for the duration of the locked period. If, after the fourth write, the master deasserts the write signal ( Avalon® -MM write or AXI wvalid) for fifty cycles, all other masters continue to wait for access during this stalled period.
To avoid wasted bandwidth, your master designs should wait until a full burst transfer is ready before requesting access to a slave device. Alternatively, you can avoid wasted bandwidth by posting burstcounts equal to the amount of data that is ready. For example, if you create a custom bursting write master with a maximum burstcount of eight, but only three words of data are ready, you can present a burstcount of three. This strategy does not result in optimal use of the system band width if the slave is capable of handling a larger burst; however, this strategy prevents stalling and allows access for other masters in the system.
Sequential Addressing
An Avalon® -MM burst transfer includes a base address and a burstcount, which represents the number of words of data that are transferred, starting from the base address and incrementing sequentially. Burst transfers are common for processors, DMAs, and buffer processing accelerators; however, sometimes a master must access non-sequential addresses. Consequently, a bursting master must set the burstcount to the number of sequential addresses, and then reset the burstcount for the next location.
The arbitration share algorithm has no restrictions on addresses; therefore, your custom master can update the address it presents to the interconnect for every read or write transaction.
Burst Adapters
Platform Designer allows you to create systems that mix bursting and non-bursting master and slave interfaces. This design strategy allows you to connect bursting master and slave interfaces that support different maximum burst lengths, with Platform Designer generating burst adapters when appropriate.
Platform Designer inserts a burst adapter whenever a master interface burst length exceeds the burst length of the slave interface, or if the master issues a burst type that the slave cannot support. For example, if you connect an AXI master to an Avalon® slave, a burst adapter is inserted. Platform Designer assigns non-bursting masters and slave interfaces a burst length of one. The burst adapter divides long bursts into shorter bursts. As a result, the burst adapter adds logic to the address and burstcount paths between the master and slave interfaces.
Choosing Avalon -MM Interface Types
To avoid inefficient Avalon® -MM transfers, custom master or slave interfaces must use the appropriate simple, pipelined, or burst interfaces.
Simple Avalon -MM Interfaces
Simple interface transfers do not support pipelining or bursting for reads or writes; consequently, their performance is limited. Simple interfaces are appropriate for transfers between masters and infrequently used slave interfaces. In Platform Designer, the PIO, UART, and Timer include slave interfaces that use simple transfers.
Pipelined Avalon -MM Interfaces
Pipelined read transfers allow a pipelined master interface to start multiple read transfers in succession without waiting for prior transfers to complete. Pipelined transfers allow master-slave pairs to achieve higher throughput, even though the slave port may require one or more cycles of latency to return data for each transfer.
In many systems, read throughput becomes inadequate if simple reads are used and pipelined transfers can increase throughput. If you define a component with a fixed read latency, Platform Designer automatically provides the pipelining logic necessary to support pipelined reads. You can use fixed latency pipelining as the default design starting point for slave interfaces. If your slave interface has a variable latency response time, use the readdatavalid signal to indicate when valid data is available. The interconnect implements read response FIFO buffering to handle the maximum number of pending read requests.
To use components that support pipelined read transfers, and to use a pipelined system interconnect efficiently, your system must contain pipelined masters. You can use pipelined masters as the default starting point for new master components. Use the readdatavalid signal for these master interfaces.
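The following is a minimal sketch of a pipelined read slave that uses readdatavalid, with a fixed two-stage pipeline standing in for a variable-latency response; the names and the internal storage array are illustrative.

```verilog
// Pipelined read slave: accepts a new read every cycle (waitrequest tied low)
// and returns data two cycles later, flagged by readdatavalid. A real
// component would derive readdatavalid from its own datapath.
module pipelined_read_slave (
  input  wire        clk,
  input  wire        reset,
  input  wire [7:0]  address,
  input  wire        read,
  output wire        waitrequest,
  output reg  [31:0] readdata,
  output reg         readdatavalid
);
  reg [31:0] mem [0:255];   // stand-in storage; contents unspecified here
  reg [7:0]  addr_d1;
  reg        read_d1;

  assign waitrequest = 1'b0;   // always ready to accept another read

  always @(posedge clk) begin
    if (reset) begin
      read_d1       <= 1'b0;
      readdatavalid <= 1'b0;
    end else begin
      // Stage 1: capture the accepted read command.
      read_d1 <= read;
      addr_d1 <= address;
      // Stage 2: return data and assert readdatavalid.
      readdata      <= mem[addr_d1];
      readdatavalid <= read_d1;
    end
  end
endmodule
```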
Because masters and slaves sometimes have mismatched pipeline latency, the interconnect contains logic to reconcile the differences.
Master | Slave | Pipeline Management Logic Structure |
---|---|---|
No pipeline | No pipeline | Platform Designer interconnect does not instantiate logic to handle pipeline latency. |
No pipeline | Pipelined with fixed or variable latency | Platform Designer interconnect forces the master to wait through any slave-side latency cycles. This master-slave pair gains no benefits from pipelining, because the master waits for each transfer to complete before beginning a new transfer. However, while the master is waiting, the slave can accept transfers from a different master. |
Pipelined | No pipeline | Platform Designer interconnect carries out the transfer as if neither master nor slave were pipelined, causing the master to wait until the slave returns data. An example of a non-pipeline slave is an asynchronous off-chip interface. |
Pipelined | Pipelined with fixed latency | Platform Designer interconnect allows the master to capture data at the exact clock cycle when data from the slave is valid, to enable maximum throughput. An example of a fixed latency slave is an on-chip memory. |
Pipelined | Pipelined with variable latency | The slave asserts a signal when its readdata is valid, and the master captures the data. The master-slave pair can achieve maximum throughput if the slave has variable latency. Examples of variable latency slaves include SDRAM and FIFO memories. |
Burst Avalon -MM Interfaces
Burst transfers are commonly used for latent memories such as SDRAM and off-chip communication interfaces, such as PCI Express. To use a burst-capable slave interface efficiently, you must connect to a bursting master. Components that require bursting to operate efficiently typically have an overhead penalty associated with short bursts or non-bursting transfers.
You can use a burst-capable slave interface if you know that your component requires sequential transfers to operate efficiently. Because SDRAM memories incur a penalty when switching banks or rows, performance improves when SDRAM memories are accessed sequentially with bursts.
Architectures that use the same signals to transfer address and data also benefit from bursting. Whenever an address is transferred over shared address and data signals, the throughput of the data transfer is reduced. Because the address phase adds overhead, using large bursts increases the throughput of the connection.
Avalon -MM Burst Master Example
The master performs word accesses and writes to sequential memory locations. When go is asserted, the start_address and transfer_length are registered. On the next clock cycle, the control logic asserts burst_begin, which synchronizes the internal control signals in addition to the master_address and master_burstcount presented to the interconnect. The timing of these two signals is important because during bursting write transfers byteenable and burstcount must be held constant for the entire burst.
To avoid inefficient writes, the master posts a burst when enough data is buffered in the FIFO. To maximize the burst efficiency, the master should stall only when a slave asserts waitrequest. In this example, the FIFO’s used signal tracks the number of words of data that are stored in the FIFO and determines when enough data has been buffered.
The internal address register increments after every word transfer, and the length register decrements after every word transfer. The address presented to the interconnect remains constant throughout the burst. Because a transfer is not guaranteed to complete on burst boundaries, additional logic is necessary to recognize the completion of short bursts and complete the transfer.
Reducing Logic Utilization
Minimizing Interconnect Logic to Reduce Logic Utilization
In Platform Designer, you can reduce the amount of interconnect logic required in the system by changing the connections between masters and slaves.
Creating Dedicated Master and Slave Connections to Minimize Interconnect Logic
You can create a system where a master interface connects to a single slave interface. This configuration eliminates address decoding, arbitration, and return data multiplexing, which simplifies the interconnect. Dedicated master-to-slave connections attain the same clock frequencies as Avalon® -ST connections.
Typically, these one-to-one connections include an Avalon memory-mapped bridge or hardware accelerator. For example, if you insert a pipeline bridge between a slave and all other master interfaces, the logic between the bridge master and slave interface is reduced to wires. If a hardware accelerator connects only to a dedicated memory, no system interconnect logic is generated between the master and slave pair.
Removing Unnecessary Connections to Minimize Interconnect Logic
The number of connections between master and slave interfaces affects the fMAX of your system. Every master interface that you connect to a slave interface increases the width of the multiplexer. As the multiplexer width increases, so does the depth and width of the logic that implements the multiplexer in the FPGA. To improve system performance, connect masters and slaves only when necessary.
When you connect a master interface to many slave interfaces, the multiplexer for the read data signal grows. Avalon typically uses a readdata signal. AXI read data signals add a response status and last indicator to the read response channel using rdata, rresp, and rlast. Additionally, bridges help control the depth of multiplexers.
Simplifying Address Decode Logic
If address decode logic is in the critical path, you may be able to change the address map to simplify the decode logic. Experiment with different address maps, including a one-hot encoding, to see if results improve.
Minimizing Arbitration Logic by Consolidating Multiple Interfaces
As the number of components in a design increases, the amount of logic required to implement the interconnect also increases. The number of arbitration blocks increases for every slave interface that is shared by multiple master interfaces. The width of the read data multiplexer increases as the number of slave interfaces supporting read transfers increases on a per master interface basis. For these reasons, consider implementing multiple blocks of logic as a single interface to reduce interconnect logic utilization.
Logic Consolidation Trade-Offs
You should consider the following trade-offs before making modifications to your system or interfaces:
- Consider the impact on concurrency that results when you consolidate components. When a system has four master components and four slave interfaces, it can initiate four concurrent accesses. If you consolidate the four slave interfaces into a single interface, then the four masters must compete for access. Consequently, you should only combine low priority interfaces such as low speed parallel I/O devices if the combination does not impact the performance.
- Determine whether consolidation introduces new decode and multiplexing logic for the slave interface that the interconnect previously included. If an interface contains multiple read and write address locations, the interface already contains the necessary decode and multiplexing logic. When you consolidate interfaces, you typically reuse the decoder and multiplexer blocks already present in one of the original interfaces; however, combining interfaces may simply move the decode and multiplexer logic, rather than eliminate duplication.
- Consider whether consolidating interfaces makes the design complicated. If so, you should not consolidate interfaces.
Consolidating Interfaces
In this example, we have a system with a mix of components, each having different burst capabilities: a Nios® II/e core, a Nios® II/f core, and an external processor, which off-loads some processing tasks to the Nios® II/f core.
The Nios® II/f core supports a maximum burst size of eight. The external processor interface supports a maximum burst length of 64. The Nios® II/e core does not support bursting. The memory in the system is SDRAM with an Avalon® maximum burst length of two.
Platform Designer automatically inserts burst adapters to compensate for burst length mismatches. The adapters reduce bursts to a single transfer, or the length of two transfers. For the external processor interface connecting to DDR SDRAM, a burst of 64 words is divided into 32 burst transfers, each with a burst length of two. When you generate a system, Platform Designer inserts burst adapters based on maximum burstcount values; consequently, the interconnect logic includes burst adapters between master and slave pairs that do not require bursting, if the master is capable of bursts.
In this example, Platform Designer inserts a burst adapter between the Nios® II processors and the timer, system ID, and PIO peripherals. These components do not support bursting, and the Nios® II processor performs only single word read and write accesses to these components.
To reduce the number of adapters, you can add pipeline bridges. The pipeline bridge, between the Nios® II/f core and the peripherals that do not support bursts, eliminates three burst adapters from the previous example. A second pipeline bridge between the Nios® II/f core and the DDR SDRAM, with its maximum burst size set to eight, eliminates another burst adapter, as shown below.
Reducing Logic Utilization With Multiple Clock Domains
You specify clock domains in Platform Designer on the System Contents tab. Clock sources can be driven by external input signals to Platform Designer, or by PLLs inside Platform Designer. Clock domains are differentiated based on the name of the clock. You can create multiple asynchronous clocks with the same frequency.
Platform Designer generates Clock Domain Crossing (CDC) logic that hides the details of interfacing components operating in different clock domains. The interconnect supports the memory-mapped protocol with each port independently, and therefore masters do not need to incorporate clock adapters in order to interface to slaves on a different domain. Platform Designer interconnect logic propagates transfers across clock domain boundaries automatically.
Clock-domain adapters provide the following benefits:
- Allows component interfaces to operate at different clock frequencies.
- Eliminates the need to design CDC hardware.
- Allows each memory-mapped port to operate in only one clock domain, which reduces design complexity of components.
- Enables masters to access any slave without communication with the slave clock domain.
- Allows you to focus performance optimization efforts on components that require fast clock speed.
A clock domain adapter consists of two finite state machines (FSM), one in each clock domain, that use a hand-shaking protocol to propagate transfer control signals (read_request, write_request, and the master waitrequest signals) across the clock boundary.
This example illustrates a clock domain adapter between one master and one slave. The synchronizer blocks use multiple stages of flipflops to eliminate the propagation of meta-stable events on the control signals that enter the handshake FSMs. The CDC logic works with any clock ratio.
The typical sequence of events for a transfer across the CDC logic is as follows:
- The master asserts address, data, and control signals.
- The master handshake FSM captures the control signals and immediately forces the master to wait. The FSM uses only the control signals, not address and data. For example, the master simply holds the address signal constant until the slave side has safely captured it.
- The master handshake FSM initiates a transfer request to the slave handshake FSM.
- The transfer request is synchronized to the slave clock domain.
- The slave handshake FSM processes the request, performing the requested transfer with the slave.
- When the slave transfer completes, the slave handshake FSM sends an acknowledge back to the master handshake FSM. The acknowledge is synchronized back to the master clock domain.
- The master handshake FSM completes the transaction by releasing the master from the wait condition.
Transfers proceed as normal on the slave and the master side, without a special protocol to handle crossing clock domains. From the perspective of a slave, there is nothing different about a transfer initiated by a master in a different clock domain. From the perspective of a master, a transfer across clock domains simply requires extra clock cycles. Similar to other transfer delay cases (for example, arbitration delay or wait states on the slave side), Platform Designer forces the master to wait until the transfer terminates. As a result, pipelined master ports do not benefit from pipelining when performing transfers to a different clock domain.
Platform Designer automatically determines where to insert CDC logic based on the system and the connections between components, and places CDC logic to maintain the highest transfer rate for all components. Platform Designer evaluates the need for CDC logic for each master and slave pair independently, and generates CDC logic wherever necessary.
Duration of Transfers Crossing Clock Domains
CDC logic extends the duration of master transfers across clock domain boundaries. In the worst case, which is for reads, each transfer is extended by five master clock cycles and five slave clock cycles. Assuming the default value of 2 for the master domain synchronizer length and the slave domain synchronizer length, the components of this delay are the following:
- Four additional master clock cycles, due to the master-side clock synchronizer.
- Four additional slave clock cycles, due to the slave-side clock synchronizer.
- One additional clock in each direction, due to potential metastable events as the control signals cross clock domains.
Reducing Power Consumption
Reducing Power Consumption With Multiple Clock Domains
When you use multiple clock domains, you should put non-critical logic in the slower clock domain. Platform Designer automatically reconciles data crossing over asynchronous clock domains by inserting clock crossing logic (handshake or FIFO).
You can use clock crossing in Platform Designer to reduce the clock frequency of the logic that does not require a high frequency clock, which allows you to reduce power consumption. You can use either handshaking clock crossing bridges or handshaking clock crossing adapters to separate clock domains.
You can use the clock crossing bridge to connect master interfaces operating at a higher frequency to slave interfaces running at a lower frequency. Only connect low throughput or low priority components to a clock crossing bridge that operates at a reduced clock frequency. The following are examples of low throughput or low priority components:
- PIOs
- UARTs (JTAG or RS-232)
- System identification (SysID)
- Timers
- PLL (instantiated within Platform Designer)
- Serial peripheral interface (SPI)
- EPCS controller
- Tristate bridge and the components connected to the bridge
By reducing the clock frequency of the components connected to the bridge, you reduce the dynamic power consumption of the design. Dynamic power is a function of toggle rates, and decreasing the clock frequency decreases the toggle rate.
Platform Designer automatically inserts clock crossing adapters between master and slave interfaces that operate at different clock frequencies. You can choose the type of clock crossing adapter in the Platform Designer Project Settings tab. Adapters do not appear in the Connections column because you do not insert them. The following clock crossing adapter types are available in Platform Designer:
- Handshake—Uses a simple handshaking protocol to propagate transfer control signals and responses across the clock boundary. This adapter uses fewer hardware resources because each transfer is safely propagated to the target domain before the next transfer begins. The Handshake adapter is appropriate for systems with low throughput requirements.
- FIFO—Uses dual-clock FIFOs for synchronization. The latency of the FIFO adapter is approximately two clock cycles more than the handshake clock crossing component, but the FIFO-based adapter can sustain higher throughput because it supports multiple transactions simultaneously. The FIFO adapter requires more resources, and is appropriate for memory-mapped transfers requiring high throughput across clock domains.
- Auto—Platform Designer specifies the appropriate FIFO adapter for bursting links and the Handshake adapter for all other links.
Because the clock crossing bridge uses FIFOs to implement the clock crossing logic, it buffers transfers and data. Clock crossing adapters are not pipelined, so each transaction blocks until it completes. Blocking transactions may lower the throughput substantially; consequently, if you want to reduce power consumption without limiting the throughput significantly, you should use the clock crossing bridge or the FIFO clock crossing adapter. However, if the design requires single read transfers, a clock crossing adapter is preferable because the latency is lower.
The clock crossing bridge requires few logic resources other than on-chip memory. The number of on-chip memory blocks used is proportional to the address span, data width, buffering depth, and bursting capabilities of the bridge. The clock crossing adapter does not use on-chip memory and requires a moderate number of logic resources. The address span, data width, and the bursting capabilities of the clock crossing adapter determine the resource utilization of the device.
When you decide to use a clock crossing bridge or clock crossing adapter, you must consider the effects of throughput and memory utilization in the design. If on-chip memory resources are limited, you may be forced to choose the clock crossing adapter. Using the clock crossing bridge to reduce the power of a single component may not justify using more resources. However, if you can place all of the low priority components behind a single clock crossing bridge, you may reduce power consumption in the design.
Reducing Power Consumption by Minimizing Toggle Rates
A Platform Designer system consumes power whenever logic transitions between on and off states. When the state is held constant between clock edges, no charging or discharging occurs. You can use the following design methodologies to reduce the toggle rates of your design:
- Registering component boundaries
- Using clock enable signals
- Inserting bridges
Platform Designer interconnect is purely combinational when no adapters or bridges are present and there is no interconnect pipelining. When a slave interface is not selected by a master, various signals may toggle and propagate into the component. By registering the boundary of your component at the master or slave interface, you can minimize the toggling of the interconnect and your component. In addition, registering boundaries can improve operating frequency. When you register the signals at the interface level, you must ensure that the component continues to operate within the interface standard specification.
Avalon® -MM waitrequest is a difficult signal to synchronize when you add registers to your component. The waitrequest signal must be asserted during the same clock cycle that a master asserts read or write in order to prolong the transfer. Otherwise, a master interface can read the waitrequest signal too early and post additional reads and writes prematurely.
There is no direct AXI equivalent for waitrequest and burstcount, though the AMBA Protocol Specification implies that the AXI ready signal cannot depend combinatorially on the AXI valid signal. Therefore, Platform Designer typically buffers AXI component boundaries for the ready signal.
For slave interfaces, the interconnect manages the begintransfer signal, which is asserted during the first clock cycle of any read or write transfer. If the waitrequest is one clock cycle late, you can logically OR the waitrequest and the begintransfer signals to form a new waitrequest signal that is properly synchronized. Alternatively, the component can assert waitrequest before it is selected, guaranteeing that the waitrequest is already asserted during the first clock cycle of a transfer.
Using Clock Enables
You can use clock enables to hold logic in a steady state, and you can use the write and read signals as clock enables for slave components. Even if you add registers to your component boundaries, the interface can still toggle without the use of clock enables. You can also use the clock enable to disable combinational portions of the component.
For example, you can use an active high clock enable to mask the inputs into the combinational logic to prevent it from toggling when the component is inactive. Before preventing inactive logic from toggling, you must determine if the masking causes the circuit to function differently. If masking causes a functional failure, it may be possible to use a register stage to hold the combinational logic constant between clock cycles.
Inserting Bridges
You can use bridges to reduce toggle rates, if you do not want to modify the component by using boundary registers or clock enables. A bridge acts as a repeater where transfers to the slave interface are repeated on the master interface. If the bridge is not accessed, the components connected to its master interface are also not accessed. The master interface of the bridge remains idle until a master accesses the bridge slave interface.
Bridges can also reduce the toggle rates of signals that are inputs to other master interfaces. These signals are typically readdata, readdatavalid, and waitrequest. Slave interfaces that support read accesses drive the readdata, readdatavalid, and waitrequest signals. A bridge inserts either a register or clock crossing FIFO between the slave interface and the master to reduce the toggle rate of the master input signals.
Reducing Power Consumption by Disabling Logic
There are typically two types of low power modes: volatile and non-volatile. A volatile low power mode holds the component in a reset state. When the logic is reactivated, the previous operational state is lost. A non-volatile low power mode restores the previous operational state. You can use either software-controlled or hardware-controlled sleep modes to disable a component in order to reduce power consumption.
Software-Controlled Sleep Mode
To design a component that supports software-controlled sleep mode, create a single memory-mapped location that enables and disables logic by writing a zero or one. You can use the register’s output as a clock enable or reset, depending on whether the component has non-volatile requirements. The slave interface must remain active during sleep mode so that the enable bit is set when the component needs to be activated.
If multiple masters can access a component that supports sleep mode, you can use the mutex core to provide mutually exclusive accesses to your component. You can also build in the logic to re-enable the component on the very first access by any master in your system. If the component requires multiple clock cycles to re-activate, then it must assert a wait request to prolong the transfer as it exits sleep mode.
Hardware-Controlled Sleep Mode
Alternatively, you can implement a timer in your component that automatically causes the component to enter a sleep mode based on a timeout value specified in clock cycles between read or write accesses. Each access resets the timer to the timeout value. Each cycle with no accesses decrements the timeout value by one. If the counter reaches zero, the hardware enters sleep mode until the next access.
This example provides a schematic for the hardware-controlled sleep mode. If restoring the component to an active state takes a long time, use a long timeout value so that the component is not continuously entering and exiting sleep mode. The slave interface must remain functional while the rest of the component is in sleep mode. When the component exits sleep mode, the component must assert the waitrequest signal until it is ready for read or write accesses.
Reset Polarity and Synchronization in Platform Designer
You can view the polarity status of a reset signal by selecting the signal in the Hierarchy tab, and then view its expanded definition in the open Parameters and Block Symbol tabs. When you generate your component, Platform Designer interconnect automatically inverts polarities as needed.
Platform Designer supports the following synchronization options for reset interfaces:
- None—There is no synchronization on this reset.
- Both—The reset is synchronously asserted and deasserted with respect to the input clock.
- Deassert—The reset is asserted asynchronously and deasserted synchronously with respect to the input clock.
You can combine multiple reset sources to reset a particular component.
When you generate your component, Platform Designer inserts adapters to synchronize or invert resets if there are mismatches in polarity or synchronization between the source and destination. You can view inserted adapters on the Memory-Mapped Interconnect tab with the System > Show System with Platform Designer Interconnect command.
Optimizing Platform Designer System Performance Design Examples
Avalon Pipelined Read Master Example
For a high throughput system using the Avalon® -MM standard, you can design a pipelined read master that allows a system to issue multiple read requests before data returns. Pipelined read masters hide the latency of read operations by posting reads as frequently as every clock cycle. You can use this type of master when the address logic is not dependent on the data returning.
Avalon Pipelined Read Master Example Design Requirements
You must carefully design the logic for the control and datapaths of pipelined read masters. The control logic must extend a read cycle whenever the waitrequest signal is asserted. This logic must also control the master address, byteenable, and read signals. To achieve maximum throughput, pipelined read masters should post reads continuously while waitrequest is deasserted. While read is asserted, the address presented to the interconnect is stored.
The datapath logic includes the readdata and readdatavalid signals. If your master can accept data on every clock cycle, you can register the data with the readdatavalid as an enable bit. If your master cannot process a continuous stream of read data, it must buffer the data in a FIFO. The control logic must stop issuing reads when the FIFO reaches a predetermined fill level to prevent FIFO overflow.
Expected Throughput Improvement
The throughput improvement that you can achieve with a pipelined read master is typically directly proportional to the pipeline depth of the interconnect and the slave interface. For example, if the total latency is two cycles, you can double the throughput by inserting a pipelined read master, assuming the slave interface also supports pipeline transfers. If either the master or slave does not support pipelined read transfers, then the interconnect asserts waitrequest until the transfer completes. You can also gain throughput when there are some cycles of overhead before a read response.
Where reads are not pipelined, the throughput is reduced. When both the master and slave interfaces support pipelined read transfers, data flows in a continuous stream after the initial latency. You can use a pipelined read master that stores data in a FIFO to implement a custom DMA, hardware accelerator, or off-chip communication interface.
This example shows a pipelined read master that stores data in a FIFO. The master performs word accesses that are word-aligned and reads from sequential memory addresses. The transfer length is a multiple of the word size.
When the go bit is asserted, the master registers the start_address and transfer_length signals. The master begins issuing reads continuously on the next clock cycle until the length register reaches zero. In this example, the word size is four bytes so that the address always increments by four, and the length decrements by four. The read signal remains asserted unless the FIFO fills to a predetermined level. The address register increments and the length register decrements if the length has not reached 0 and a read is posted.
The master posts a read transfer every time the read signal is asserted and waitrequest is deasserted. The master issues reads until the entire buffer has been read or waitrequest is asserted. An optional tracking block controls the done bit. When the length register reaches zero, some reads may still be outstanding. The tracking logic prevents assertion of done until the last read completes, and monitors the number of reads posted to the interconnect so that it does not exceed the space remaining in the readdata FIFO. This example includes a counter that tracks the outstanding reads as follows:
- If a read is posted and readdatavalid is deasserted, the counter increments.
- If a read is not posted and readdatavalid is asserted, the counter decrements.
When the length register and the tracking logic counter reach zero, all the reads have completed and the done bit is asserted. The done bit is important if a second master overwrites the memory locations that the pipelined read master accesses. This bit guarantees that the reads have completed before the original data is overwritten.
Multiplexer Examples
You can combine adapters with streaming components to create datapaths whose input and output streams have different properties. The following examples demonstrate datapaths in which the output stream exhibits higher performance than the input stream.
The diagram below illustrates a datapath that uses the dual clock version of the on-chip FIFO memory to boost the frequency of input data from 100 MHz to 110 MHz by sampling two input streams at different rates. The on-chip FIFO memory has an input clock frequency of 100 MHz, and an output clock frequency of 110 MHz. The channel multiplexer runs at 110 MHz and samples one input stream 27.3 percent of the time, and the second 72.7 percent of the time. You must know the typical and maximum input channel utilizations before attempting this type of design. For example, if the first channel hits 50% utilization, the output stream exceeds 100% utilization.
The diagram below illustrates a datapath that uses a data format adapter and Avalon® -ST channel multiplexer to merge the 8-bit 100 MHz input from two streaming data sources into a single 16-bit 100 MHz streaming output. This example shows an output with double the throughput of each interface with a corresponding doubling of the data width.
The diagram below illustrates a datapath that uses the dual clock version of the on-chip FIFO memory and Avalon® -ST channel multiplexer to merge the 100 MHz input from two streaming data sources into a single 200 MHz streaming output. This example shows an output with double the throughput of each interface with a corresponding doubling of the clock frequency.
Optimizing Platform Designer System Performance Revision History
The following revision history applies to this chapter:
Document Version | Intel® Quartus® Prime Version | Changes |
---|---|---|
2018.09.24 | 18.1.0 | Initial release in Intel Quartus Prime Standard Edition User Guide. |
2017.11.06 | 17.1.0 | |
2015.11.02 | 15.1.0 | |
2015.05.04 | 15.0.0 | Multiplexer Examples, rearranged description text for the figures. |
May 2013 | 13.0.0 | AMBA APB support. |
November 2012 | 12.1.0 | AMBA AXI4 support. |
June 2012 | 12.0.0 | AMBA AXI3 support. |
November 2011 | 11.1.0 | New document release. |
Platform Designer Interconnect
Platform Designer supports Avalon® , AMBA* 3 AXI (version 1.0), AMBA* 4 AXI (version 2.0), AMBA* 4 AXI-Lite (version 2.0), AMBA* 4 AXI-Stream (version 1.0), and AMBA* 3 APB (version 1.0) interface specifications.
The video AMBA* AXI and Intel Avalon® Interoperation Using Platform Designer describes seamless integration of IP components using the AMBA* AXI and the Intel Avalon® interfaces.
Memory-Mapped Interfaces
Platform Designer interconnect transmits memory-mapped transactions between masters and slaves in packets. The command network transports read and write packets from master interfaces to slave interfaces. The response network transports response packets from slave interfaces to master interfaces.
For each component interface, Platform Designer interconnect manages memory-mapped transfers and interacts with signals on the connected interface. Master and slave interfaces can implement different signals based on interface parameterizations, and Platform Designer interconnect provides any necessary adaptation between them. In the path between master and slaves, Platform Designer interconnect may introduce registers for timing synchronization, finite state machines for event sequencing, or nothing at all, depending on the services required by the interfaces.
Platform Designer interconnect supports the following implementation scenarios:
- Any number of components with master and slave interfaces. The master‑to‑slave relationship can be one‑to‑one, one‑to‑many, many‑to‑one, or many‑to‑many.
- Masters and slaves of different data widths.
- Masters and slaves operating in different clock domains.
- IP components with different interface properties and signals. Platform Designer adapts the component interfaces so that interfaces with the following differences can be connected:
- Avalon® and AXI interfaces that use active‑high and active‑low signaling. AXI signals are active high, except for the reset signal.
- Interfaces with different burst characteristics.
- Interfaces with different latencies.
- Interfaces with different data widths.
- Interfaces with different optional interface signals.
Note: Since interface connections between AMBA* 3 AXI and AMBA* 4 AXI declare a fixed set of signals with variable latency, there is no need for adapting between active-low and active-high signaling, burst characteristics, different latencies, or port signatures. Adaptation might be necessary between Avalon® interfaces.
In this example, there are two components mastering the system, a processor and a DMA controller, each with two master interfaces. The masters connect through the Platform Designer interconnect to slaves in the Platform Designer system.
The dark blue blocks represent interconnect components. The dark gray boxes indicate items outside of the Platform Designer system and the Intel® Quartus® Prime software design, and show how to export component interfaces and how to connect these interfaces to external devices.
Platform Designer Packet Format
The Platform Designer packet format supports Avalon® , AXI, and APB transactions. Memory-mapped transactions between masters and slaves are encapsulated in Platform Designer packets. For Avalon® systems without AXI or APB interfaces, some fields are ignored or removed.
Fields in the Platform Designer Packet Format
Command | Description |
---|---|
Address | Specifies the byte address for the lowest byte in the current cycle. There are no restrictions on address alignment. |
Size | Encodes the run-time size of the transaction. In conjunction with address, this field describes the segment of the payload that contains valid data for a beat within the packet. |
Address Sideband | Carries “address” sideband signals. The interconnect passes this field from master to slave. This field is valid for each beat in a packet, even though it is only produced and consumed by an address cycle. Up to 8-bit sideband signals are supported for both read and write address channels. |
Cache | Carries the AXI cache signals. |
Transaction (Exclusive) | Indicates whether the transaction has exclusive access. |
Transaction (Posted) | Used to indicate non-posted writes (writes that require responses). |
Data | For command packets, carries the data to be written. For read response packets, carries the data that has been read. |
Byteenable | Specifies which symbols are valid. AXI can issue or accept any byteenable pattern. For compatibility with Avalon® , Intel recommends that you use the following legal values for 32-bit data transactions between Avalon® masters and slaves: |
Source_ID | The ID of the master or slave that initiated the command or response. |
Destination_ID | The ID of the master or slave to which the command or response is directed. |
Response | Carries the AXI response signals. |
Thread ID | Carries the AXI transaction ID values. |
Byte count | The number of bytes remaining in the transaction, including this beat. Number of bytes requested by the packet. |
Burstwrap | The burstwrap value specifies the wrapping behavior of the current burst. The burstwrap value is of the form 2^<n> - 1. The following types are defined: For Avalon® masters, Platform Designer adaptation logic sets a hardwired value for the burstwrap field, according to the declared master burst properties. For example, for a master that declares sequential bursting, the burstwrap field is set to all ones. Similarly, masters that declare burst have their burstwrap field set to the appropriate constant value. AXI masters choose their burst type at run-time, depending on the value of the AW or ARBURST signal. The interconnect calculates the burstwrap value at run-time for AXI masters. |
Protection | Access level protection. When the lowest bit is 0, the packet has normal access. When the lowest bit is 1, the packet has privileged access. For Avalon® -MM interfaces, this field maps directly to the privileged access signal, which allows a memory-mapped master to write to an on‑chip memory ROM instance. The other bits in this field support AXI secure accesses and use the same encoding, as described in the AXI specification. |
QoS | QoS (Quality of Service Signaling) is a 4-bit field that is part of the AMBA* 4 AXI interface and carries QoS information for the packet from the AXI master to the AXI slave. Transactions from AMBA* 3 AXI and Avalon® masters have the default value 4'b0000, which indicates that they are not participating in the QoS scheme. QoS values are dropped for slaves that do not support QoS. |
Data sideband | Carries data sideband signals for the packet. On a write command, the data sideband directly maps to WUSER. On a read response, the data sideband directly maps to RUSER. On a write response, the data sideband directly maps to BUSER. |
Transaction Types for Memory-Mapped Interfaces
Bit | Name | Definition |
---|---|---|
0 | PKT_TRANS_READ | When asserted, indicates a read transaction. |
1 | PKT_TRANS_COMPRESSED_READ | For read transactions, specifies whether the read command can be expressed in a single cycle (all byteenables asserted on every cycle). |
2 | PKT_TRANS_WRITE | When asserted, indicates a write transaction. |
3 | PKT_TRANS_POSTED | When asserted, no response is required. |
4 | PKT_TRANS_LOCK | When asserted, indicates arbitration is locked. Applies to write packets. |
Platform Designer Transformations
Interconnect Domains
Using One Domain with Width Adaptation
Using Two Separate Domains
Master Network Interfaces
Avalon -MM Master Agent
Avalon -MM Master Translator
The Avalon® -MM Master translator performs the following functions:
- Translates active-low signaling to active-high signaling
- Inserts wait states to prevent an Avalon® ‑MM master from reading invalid data
- Translates word and symbol addresses
- Translates word and symbol burst counts
- Manages re-timing and re-sequencing bursts
- Removes unnecessary address bits
AXI Master Agent
AXI Translator
The AXI translator is inserted for both AMBA* 4 AXI masters and slaves and performs the following functions:
- Matches ID widths between the master and slave in 1x1 systems.
- Drives default values as defined in the AMBA* Protocol Specifications for missing signals.
- Performs lock transaction bit conversion when an AMBA* 3 AXI master connects to an AMBA* 4 AXI slave in 1x1 systems.
APB Master Agent
APB Slave Agent
APB Translator
The APB translator is inserted for both the master and slave and performs the following functions:
- Sets the response value default to OKAY if the APB slave does not have a pslverr signal.
- Turns on or off additional signals for the APB debug interface, which is used with the HPS (Intel SoC Hard Processor System).
AHB Slave Agent
Memory-Mapped Router
Memory-Mapped Traffic Limiter
Slave Network Interfaces
Avalon -MM Slave Translator
An Avalon® -MM Slave Translator performs the following functions:
- Drives the beginbursttransfer and byteenable signals.
- Supports Avalon® -MM slaves that operate using fixed timing, or slaves that use the readdatavalid signal to identify valid data.
- Translates the read, write, and chipselect signals into the representation that the Avalon® ‑ST slave response network uses.
- Converts active low signals to active high signals.
- Translates word and symbol addresses and burstcounts.
- Handles burstcount timing and sequencing.
- Removes unnecessary address bits.
AXI Translator
The AXI translator is inserted for both AMBA* 4 AXI master and slave, and performs the following functions:
- Matches ID widths between master and slave in 1x1 systems.
- Drives default values as defined in the AMBA* Protocol Specifications for missing signals.
- Performs lock transaction bit conversion when an AMBA* 3 AXI master connects to an AMBA* 4 AXI slave in 1x1 systems.
Wait State Insertion
Avalon -MM Slave Agent
AXI Slave Agent
Arbitration
Round-Robin Arbitration
In a fairness-based arbitration protocol, each master has an integer value of transfer shares with respect to a slave. One share represents permission to perform one transfer. The default arbitration scheme is equal share round-robin that grants equal, sequential access to all requesting masters. You can change the arbitration scheme to weighted round-robin by specifying a relative number of arbitration shares to the masters that access a given slave. AXI slaves have separate arbitration for their independent read and write channels, and the Arbitration Shares setting affects both the read and write arbitration. To display arbitration settings, right-click an instance on the System Contents tab, and then click Show Arbitration Shares.
Fairness-Based Shares
Round-Robin Scheduling
Fixed Priority Arbitration
You can selectively apply fixed priority arbitration to any slave in a Platform Designer system. You can design Platform Designer systems where a subset of slaves use the default round-robin arbitration, and other slaves use fixed priority arbitration. Fixed priority arbitration uses a fixed priority algorithm to grant access to a slave amongst its connected masters.
To set up fixed priority arbitration, you must first designate a fixed priority slave in your Platform Designer system in the Interconnect Requirements tab. You can then assign an arbitration priority number for each master connected to a fixed priority slave in the System Contents tab, where the highest numeric value receives the highest priority. When multiple masters request access to a fixed priority arbitrated slave, the arbiter gives the master with the highest priority first access to the slave.
For example, when a fixed priority slave receives requests from three masters on the same cycle, the arbiter grants the master with highest assigned priority first access to the slave, and backpressures the other two masters.
Designate a Platform Designer Slave to Use Fixed Priority Arbitration
- In Platform Designer, navigate to the Interconnect Requirements tab.
- Click Add to add a new requirement.
- In the Identifier column, select the slave for fixed priority arbitration.
- In the Setting column, select qsys mm.arbitrationScheme.
- In the Value column, select fixed-priority.
- Navigate to the System Contents tab.
- In the System Contents tab, right-click the designated fixed priority slave, and then select Show Arbitration Shares.
- For each master connected to the fixed priority arbitration slave, type a numerical arbitration priority in the box that appears in place of the connection circle.
- Right-click the designated fixed priority slave and uncheck Show Arbitration Shares to return to the connection circles.
Fixed Priority Arbitration with AXI Masters and Avalon -MM Slaves
Since AXI masters have separate read and write channels, each channel appears as two separate masters to the Avalon® -MM slave. To support fairness between the AXI master’s read and write channels, the instantiated round-robin intermediary multiplexer arbitrates between simultaneous read and write commands from the AXI master to the fixed-priority Avalon® -MM slave.
When an AXI master is connected to a fixed priority AXI slave, the master’s read and write channels are directly connected to the AXI slave’s fixed-priority multiplexers. In this case, there is one multiplexer for the read command, and one multiplexer for the write command and therefore an intermediary multiplexer is not required.
The red circles indicate placement of the intermediary multiplexer between the AXI master and Avalon® -MM slave due to the separate read and write channels of the AXI master.
Memory-Mapped Arbiter
If you specify a value greater than zero for the Limit interconnect pipeline stages to parameter, the output of the Arbiter is registered. Registering this output reduces the amount of combinational logic between the master and the interconnect, increasing the fMAX of the system.
Datapath Multiplexing Logic
Width Adaptation
Memory-Mapped Width Adapter
The memory-mapped width adapter accepts packets on its sink interface with one data width and produces output packets on its source interface with a different data width. The ratio of the narrower data width to the wider data width must be a power of two, such as 1:4, 1:8, and 1:16. The ratio of the wider data width to the narrower width must also be a power of two, such as 4:1, 8:1, and 16:1. These output packets may have a different size if the input size exceeds the output data bus width, or if data packing is enabled.
When the width adapter converts from narrow data to wide data, each input beat's data and byte enables are copied to the appropriate segment of the wider output data and byte enables signals.
AXI Wide-to-Narrow Adaptation
Burst Type | Behavior |
---|---|
Incrementing | If the transaction size is less than or equal to the output width, the burst is unmodified. Otherwise, it is converted to an incrementing burst with a larger length and size equal to the output width. If the resulting burst is unsuitable for the slave, the burst is converted to multiple sequential bursts of the largest allowable lengths. For example, for a 2:1 downsizing ratio, an INCR9 burst is converted into INCR16 + INCR2 bursts. This is true if the maximum burstcount a slave can accept is 16, which is the case for AMBA* 3 AXI slaves. Avalon® slaves have a maximum burstcount of 64. |
Wrapping | If the transaction size is less than or equal to the output width, the burst is unmodified. Otherwise, it is converted to a wrapping burst with a larger length, with a size equal to the output width. If the resulting burst is unsuitable for the slave, the burst is converted to multiple sequential bursts of the largest allowable lengths, respecting wrap boundaries. For example, for a 2:1 downsizing ratio, a WRAP16 burst is converted into two or three INCR bursts, depending on the address. |
Fixed | If the transaction size is less than or equal to the output width, the burst is unmodified. Otherwise, it is converted into repeated sequential bursts over the same addresses. For example, for a 2:1 downsizing ratio, a FIXED single burst is converted into an INCR2 burst. |
AXI Narrow-to-Wide Adaptation
Burst Type | Behavior |
---|---|
Incrementing | The burst (and its response) passes through unmodified. Data and write strobes are placed in the correct output segment. |
Wrapping | The burst (and its response) passes through unmodified. |
Fixed | The burst (and its response) passes through unmodified. |
Burst Adapter
The maximum burst length for each interface is a property of the interface and is independent of other interfaces in the system. Therefore, a specific master may be capable of initiating a burst longer than a slave’s maximum supported burst length. In this case, the burst adapter translates the large master burst into smaller bursts, or into individual slave transfers if the slave does not support bursting. Until the master completes the burst, arbiter logic prevents other masters from accessing the target slave. For example, if a master initiates a burst of 16 transfers to a slave with maximum burst length of 8, the burst adapter initiates 2 bursts of length 8 to the slave.
Avalon® -MM and AXI burst transactions allow a master uninterrupted access to a slave for a specified number of transfers. The master specifies the number of transfers when it initiates the burst. Once a burst begins between a master and slave, arbiter logic is locked until the burst completes. For burst masters, the length of the burst is the number of cycles that the master has access to the slave, and the selected arbitration shares have no effect.
Avalon® -MM masters always issue addresses that are aligned to the size of the transfer. However, when Platform Designer uses a narrow-to-wide width adaptation, the resulting address may be unaligned. For unaligned addresses, the burst adapter issues the maximum sized bursts with appropriate byte enables. This brings the burst-in-progress up to an aligned slave address. Then, it completes the burst on aligned addresses.
The burst adapter supports variable wrap or sequential burst types to accommodate different properties of memory-mapped masters. Some bursting masters can issue more than one burst type.
Burst adaptation is available for Avalon® to Avalon® , Avalon® to AXI, AXI to Avalon® , and AXI to AXI connections. For information about AXI-to-AXI adaptation, refer to AXI Wide-to-Narrow Adaptation.
Burst Adapter Implementation Options
To access the implementation options, you must select the Burst adapter implementation setting for the $system identifier.
- Generic converter (slower, lower area)—Default. Controls all burst conversions with a single converter that can adapt incoming burst types. This results in an adapter that has lower fMAX, but smaller area.
- Per-burst-type converter (faster, higher area)—Controls incoming bursts with a specific converter, depending on the burst type. This results in an adapter that has higher fMAX, but higher area. This setting is useful when you have AXI masters or slaves and you want a higher fMAX.
Burst Adaptation: AXI to Avalon
Burst Type | Behavior |
---|---|
Incrementing | Sequential slave: Bursts that exceed slave_max_burst_length are converted to multiple sequential bursts of a length less than or equal to the slave_max_burst_length. Otherwise, the burst is unconverted. For example, for an Avalon® slave with a maximum burst length of 4, an INCR7 burst is converted to INCR4 + INCR3. Wrapping slave: Bursts that exceed the slave_max_burst_length are converted to multiple sequential bursts of length less than or equal to the slave_max_burst_length. Bursts that exceed the wrapping boundary are converted to multiple sequential bursts that respect the slave's wrapping boundary. |
Wrapping | Sequential slave: A WRAP burst is converted to multiple sequential bursts. The sequential bursts are less than or equal to the max_burst_length and respect the transaction's wrapping boundary. Wrapping slave: If the WRAP transaction's boundary matches the slave's boundary, then the burst passes through. Otherwise, the burst is converted to sequential bursts that respect both the transaction and slave wrap boundaries. |
Fixed | Fixed bursts are converted to sequential bursts of length 1 that repeatedly access the same address. |
Narrow | All narrow-sized bursts are broken into multiple bursts of length 1. |
Burst Adaptation: Avalon to AXI
Burst Type | Definition |
---|---|
Sequential | Bursts of length greater than 16 are converted to multiple INCR bursts of a length less than or equal to 16. Bursts of length less than or equal to 16 are not converted. |
Wrapping | Only Avalon® masters with alwaysBurstMaxBurst = true are supported. The WRAP burst is passed through if the length is less than or equal to 16. Otherwise, it is converted to two or more INCR bursts that respect the transaction's wrap boundary. |
GENERIC_CONVERTER | Controls all burst conversions with a single converter that adapts all incoming burst types, resulting in an adapter that has smaller area, but lower fMAX. |
Waitrequest Allowance Adapter
The Waitrequest Allowance adapter provides the following features:
- The adapter is used in the memory-mapped domain and operates with signals on the memory-mapped interface.
- Signal widths and all properties other than waitrequestAllowance are identical on master and slave interfaces.
- The adapter does not modify any command properties such as data width, burst type, or burst count.
- The adapter is inserted by the Platform Designer interconnect software when a master and slave with different waitrequestAllowance properties are connected.
When the slave has waitrequestAllowance = <n>, the master must deassert its read or write signals after <n> transfers when waitrequest is asserted.
Master (m) / Slave (n) waitrequestAllowance | Adaptation Required | Description | Adapter Function |
---|---|---|---|
m = n | No | The master waitrequestAllowance is equal to the slave's waitrequestAllowance. | All signals are passed through. |
m = 0; n > 0 | Yes | The master cannot send when waitrequest=1, but holds the value on the bus. This would result in the slave receiving multiple copies. Requires adaptation to prevent. | The adapter deasserts valid when input waitrequest is asserted. |
m < n; m != 0 | No | The master can send <m> transfers after waitrequest is asserted. The slave receives fewer than <n> transfers, which is acceptable. | All signals are passed through. |
m > n; n = 0 | Yes | The slave cannot accept transfers when waitrequest is asserted. Transfers sent when waitrequest=1 can be lost. Prevention requires adaptation in the form of transfer buffering. | If the input waitrequest is asserted, the adapter buffers the input data. |
m > n; n > 0 | Yes | The slave cannot accept more than <n> transfers after waitrequest is asserted, however the master can send up to <m> transfers. Transfers (<m> – <n>) can be lost. Prevention requires adaptation in the form of transfer buffering. | The adapter buffers the input data. |
Read and Write Responses
Platform Designer merges write responses if a write is converted (burst adapted) into multiple bursts. Platform Designer requires read response merging for a downsized (wide-to-narrow width adapted) read.
When responses are merged, the interconnect returns the response with the highest precedence, in the following order: DECERR > SLVERR > OKAY > EXOKAY.
Adaptation between a master with write responses and a slave without write responses can be costly, especially if there are multiple slaves, or if the slave supports bursts. To minimize the cost of logic between slaves, consider placing the slaves that do not have write responses behind a bridge so that the write response adaptation logic cost is only incurred once, at the bridge’s slave interface.
The following table describes what happens when there is a mismatch in response support between the master and slave.
 | Slave with Response | Slave Without Response |
---|---|---|
Master with Response | Interconnect delivers the response from the slave to the master. Response merging or duplication may be necessary for bus sizing. | Interconnect delivers an OKAY response to the master. |
Master without Response | Master ignores responses from the slave. | No need for responses. Master, slave, and interconnect operate without response support. |
If there is a bridge between the master and the endpoint slave, and the responses must come from the endpoint slave, ensure that the bridge passes the appropriate response signals through from the endpoint slave to the master.
If the bridge does not support responses, then the responses are generated by the interconnect at the slave interface of the bridge, and responses from the endpoint slave are ignored.
For the response case where the transaction violates security settings or uses an illegal address, the interconnect routes the transactions to the default slave. For information about Platform Designer system security, refer to Manage System Security. For information about specifying a default slave, refer to Error Response Slave in Platform Designer System Design Components.
Platform Designer Address Decoding
Address decoding logic simplifies component design in the following ways:
- The interconnect selects a slave whenever it is being addressed by a master. Slave components do not need to decode the address to determine when they are selected.
- Slave addresses are properly aligned to the slave interface.
- Changing the system memory map does not involve manually editing HDL.
Platform Designer controls the base addresses with the Base setting of active components on the System Contents tab. The base address of a slave component must be a multiple of the address span of the component. This restriction is part of the Platform Designer interconnect to allow the address decoding logic to be efficient, and to achieve the best possible fMAX.
Avalon Streaming Interfaces
In this example, there are the following connection pairs:
- Data source in the Rx Interface transfers data to the data sink in the FIFO.
- Data source in the FIFO transfers data to the Tx Interface data sink.
The memory-mapped interface allows a processor to access the data source, FIFO, or data sink to provide system control. If your source and sink interfaces have different formats, for example, a 32-bit source and an 8-bit sink, Platform Designer automatically inserts the necessary adapters. You can view the adapters on the System Contents tab by clicking System > Show System with Platform Designer Interconnect.
The IP Catalog includes Avalon® -ST components that you can use to create datapaths, including datapaths whose input and output streams have different properties. Generated systems that include memory-mapped master and slave components may also use these Avalon® -ST components because Platform Designer generation creates interconnect with a structure similar to a network topology, as described in Platform Designer Transformations. The following sections introduce the Avalon® -ST components.
Avalon -ST Adapters
After generation, you can view the inserted adapters selecting System > Show System With Platform Designer Interconnect. For each mismatched source-sink pair, Platform Designer inserts an Avalon® -ST Adapter. The adapter instantiates the necessary adaptation logic as sub-components. You can review the logic for each adapter instantiation in the Hierarchy view by expanding each adapter's source and sink interface and comparing the relevant ports. For example, to determine why a channel adapter is inserted, expand the channel adapter's sink and source interfaces and review the channel port properties for each interface.
You can turn off the auto-inserted adapters feature by adding the qsys_enable_avalon_streaming_transform=off command to the quartus.ini file. When you turn off the auto-inserted adapters feature, if mismatched interfaces are detected during system generation, Platform Designer does not insert adapters and reports the mismatched interfaces with a validation error message.
Avalon -ST Adapter
Avalon -ST Adapter Parameters Common to Source and Sink Interfaces
Parameter Name | Description |
---|---|
Symbol Width | Width of a single symbol in bits. |
Use Packet | Indicates whether the source and sink interfaces connected to the adapter's source and sink interfaces include the startofpacket and endofpacket signals, and the optional empty signal. |
Avalon -ST Adapter Upstream Source Interface Parameters
Parameter Name | Description |
---|---|
Source Data Width | Controls the data width of the source interface data port. |
Source Top Channel | Maximum number of output channels allowed. |
Source Channel Port Width | Sets the bit width of the source interface channel port. If set to 0, there is no channel port on the sink interface. |
Source Error Port Width | Sets the bit width of the source interface error port. If set to 0, there is no error port on the sink interface. |
Source Error Descriptors | A list of strings that describe the error conditions for each bit of the source interface error signal. |
Source Uses Empty Port | Indicates whether the source interface includes the empty port, and whether the sink interface should also include the empty port. |
Source Empty Port Width | Indicates the bit width of the source interface empty port, and sets the bit width of the sink interface empty port. |
Source Uses Valid Port | Indicates whether the source interface connected to the sink interface uses the valid port, and if set, configures the sink interface to use the valid port. |
Source Uses Ready Port | Indicates whether the sink interface uses the ready port, and if set, configures the source interface to use the ready port. |
Source Ready Latency | Specifies what ready latency to expect from the source interface connected to the adapter's sink interface. |
Avalon -ST Adapter Downstream Sink Interface Parameters
Parameter Name | Description |
---|---|
Sink Data Width | Indicates the bit width of the data port on the sink interface connected to the source interface. |
Sink Top Channel | Maximum number of output channels allowed. |
Sink Channel Port Width | Indicates the bit width of the channel port on the sink interface connected to the source interface. |
Sink Error Port Width | Indicates the bit width of the error port on the sink interface connected to the adapter's source interface. If set to zero, there is no error port on the source interface. |
Sink Error Descriptors | A list of strings that describe the error conditions for each bit of the error port on the sink interface connected to the source interface. |
Sink Uses Empty Port | Indicates whether the sink interface connected to the source interface uses the empty port, and whether the source interface should also use the empty port. |
Sink Empty Port Width | Indicates the bit width of the empty port on the sink interface connected to the source interface, and configures a corresponding empty port on the source interface. |
Sink Uses Valid Port | Indicates whether the sink interface connected to the source interface uses the valid port, and if set, configures the source interface to use the valid port. |
Sink Uses Ready Port | Indicates whether the ready port on the sink interface is connected to the source interface, and if set, configures the sink interface to use the ready port. |
Sink Ready Latency | Specifies what ready latency to expect from the source interface connected to the sink interface. |
Channel Adapter
Condition | Description of Adapter Logic |
---|---|
The source uses channels, but the sink does not. | Platform Designer gives a warning at generation time. The adapter provides a simulation error and signals an error for data from any channel other than 0. |
The sink uses channels, but the source does not. | Platform Designer gives a warning at generation time, and the channel inputs to the sink are all tied to a logical 0. |
The source and sink both support channels, and the source's maximum channel number is less than the sink's maximum channel number. | The source's channel is connected to the sink's channel unchanged. If the sink's channel signal has more bits, the higher bits are tied to a logical 0. |
The source and sink both support channels, but the source's maximum channel number is greater than the sink's maximum channel number. | The source's channel is connected to the sink's channel unchanged. If the source's channel signal has more bits, the higher bits are left unconnected. Platform Designer gives a warning that channel information may be lost. The adapter provides a simulation error message and an error indication if the channel value from the source is greater than the sink's maximum channel number. In addition, the valid signal to the sink is deasserted so that the sink never sees data for out-of-range channels. |
Avalon -ST Channel Adapter Input Interface Parameters
Parameter Name | Description |
---|---|
Channel Signal Width (bits) | Width of the input channel signal in bits. |
Max Channel | Maximum number of input channels allowed. |
Avalon -ST Channel Adapter Output Interface Parameters
Parameter Name | Description |
---|---|
Channel Signal Width (bits) | Width of the output channel signal in bits. |
Max Channel | Maximum number of output channels allowed. |
Avalon -ST Channel Adapter Common to Input and Output Interface Parameters
Parameter Name | Description |
---|---|
Data Bits Per Symbol | Number of bits for each symbol in a transfer. |
Include Packet Support | When the Avalon® -ST Channel adapter supports packets, the startofpacket, endofpacket, and optional empty signals are included on its sink and source interfaces. |
Include Empty Signal | Indicates whether an empty signal is required. |
Data Symbols Per Beat | Number of symbols per transfer. |
Support Backpressure with the ready signal | Indicates whether a ready signal is required. |
Ready Latency | Specifies the ready latency to expect from the sink connected to the module's source interface. |
Error Signal Width (bits) | Bit width of the error signal. |
Error Signal Description | A list of strings that describes what each bit of the error signal represents. |
Data Format Adapter
Condition | Description of Adapter Logic |
---|---|
The source and sink's bits per symbol parameters are different. | The connection cannot be made. |
The source and sink have a different number of symbols per beat. | The adapter converts the source's width to the sink's width. If the adaptation is from a wider to a narrower interface, a beat of data at the input corresponds to multiple beats of data at the output. If the input error signal is asserted for a single beat, it is asserted on output for multiple beats. If the adaptation is from a narrower to a wider interface, multiple input beats are required to fill a single output beat, and the output error is the logical OR of the input error signals. |
The source uses the empty signal, but the sink does not use the empty signal. | Platform Designer cannot make the connection. |
Avalon -ST Data Format Adapter Input Interface Parameters
Parameter Name | Description |
---|---|
Data Symbols Per Beat | Number of symbols per transfer. |
Include Empty Signal | Indicates whether an empty signal is required. |
Avalon -ST Data Format Adapter Output Interface Parameters
Parameter Name | Description |
---|---|
Data Symbols Per Beat | Number of symbols per transfer. |
Include Empty Signal | Indicates whether an empty signal is required. |
Avalon -ST Data Format Adapter Common to Input and Output Interface Parameters
Parameter Name | Description |
---|---|
Data Bits Per Symbol | Number of bits for each symbol in a transfer. |
Include Packet Support | When the Avalon® -ST Data Format adapter supports packets, Platform Designer uses the startofpacket, endofpacket, and empty signals. |
Channel Signal Width (bits) | Width of the output channel signal in bits. |
Max Channel | Maximum number of channels allowed. |
Ready Latency | Specifies the ready latency to expect from the sink connected to the module's source interface. |
Error Signal Width (bits) | Width of the error signal output in bits. |
Error Signal Description | A list of strings that describes what each bit of the error signal represents. |
Error Adapter
Avalon -ST Error Adapter Input Interface Parameters
Parameter Name | Description |
---|---|
Error Signal Width (bits) | The width of the error signal. Valid values are 0–256 bits. Type 0 if the error signal is not used. |
Error Signal Description | The description for each of the error bits. If scripting, separate the description fields by commas. For a successful connection, the description strings of the error bits in the source and sink must match and are case sensitive. |
Avalon -ST Error Adapter Output Interface Parameters
Parameter Name | Description |
---|---|
Error Signal Width (bits) | The width of the error signal. Valid values are 0–256 bits. Type 0 if you do not need to send error values. |
Error Signal Description | The description for each of the error bits. Separate the description fields by commas. For a successful connection, the description of the error bits in the source and sink must match, and are case sensitive. |
Avalon -ST Error Adapter Common to Input and Output Interface Parameters
Parameter Name | Description |
---|---|
Support Backpressure with the ready signal | Turn on this option to add the backpressure functionality to the interface. |
Ready Latency | When the ready signal is used, the value for ready_latency indicates the number of cycles between when the ready signal is asserted and when valid data is driven. |
Channel Signal Width (bits) | The width of the channel signal. A channel width of 4 allows up to 16 channels. The maximum width of the channel signal is eight bits. Set to 0 if channels are not used. |
Max Channel | The maximum number of channels that the interface supports. Valid values are 0–255. |
Data Bits Per Symbol | Number of bits per symbol. |
Data Symbols Per Beat | Number of symbols per active transfer. |
Include Packet Support | Turn on this option if the connected interfaces support a packet protocol, including the startofpacket, endofpacket, and empty signals. |
Include Empty Signal | Turn this option on if the cycle that includes the endofpacket signal can include empty symbols. This signal is not necessary if the number of symbols per beat is 1. |
Timing Adapter
Condition | Adaptation |
---|---|
The source has ready, but the sink does not. | In this case, the source can respond to backpressure, but the sink never needs to apply it. The ready input to the source interface is connected directly to logical 1. |
The source does not have ready, but the sink does. | The sink may apply backpressure, but the source is unable to respond to it. There is no logic that the adapter can insert that prevents data loss when the source asserts valid but the sink is not ready. The adapter provides simulation-time error messages if data is lost. The user is presented with a warning, and the connection is allowed. |
The source and sink both support backpressure, but the sink's ready latency is greater than the source's. | The source responds to ready assertion or deassertion faster than the sink requires. A number of pipeline stages equal to the difference in ready latency is inserted in the ready path from the sink back to the source, causing the source and the sink to see the same cycles as ready cycles. |
The source and sink both support backpressure, but the sink's ready latency is less than the source's. | The source cannot respond to ready assertion or deassertion in time to satisfy the sink. A FIFO whose depth is equal to the difference in ready latency is inserted to compensate for the source's inability to respond in time. |
Avalon -ST Timing Adapter Input Interface Parameters
Parameter Name | Description |
---|---|
Support Backpressure with the ready signal | Indicates whether a ready signal is required. |
Ready Latency | Specifies the ready latency to expect from the sink connected to the module's source interface. |
Include Valid Signal | Indicates whether the sink interface requires a valid signal. |
Avalon -ST Timing Adapter Output Interface Parameters
Parameter Name | Description |
---|---|
Support Backpressure with the ready signal | Indicates whether a ready signal is required. |
Ready Latency | Specifies the ready latency to expect from the sink connected to the module's source interface. |
Include Valid Signal | Indicates whether the sink interface requires a valid signal. |
Avalon -ST Timing Adapter Common to Input and Output Interface Parameters
Parameter Name | Description |
---|---|
Data Bits Per Symbol | Number of bits for each symbol in a transfer. |
Include Packet Support | Turn on this option if the connected interfaces support a packet protocol, including the startofpacket, endofpacket, and empty signals. |
Include Empty Signal | Turn this option on if the cycle that includes the endofpacket signal can include empty symbols. This signal is not necessary if the number of symbols per beat is 1. |
Data Symbols Per Beat | Number of symbols per active transfer. |
Channel Signal Width (bits) | Width of the output channel signal in bits. |
Max Channel | Maximum number of output channels allowed. |
Error Signal Width (bits) | Width of the output error signal in bits. |
Error Signal Description | A list of strings that describes errors. |
Interrupt Interfaces
You can define the interrupt sender and interrupt receiver interfaces as asynchronous, with no associated clock or reset interfaces. In that case, the receiver performs its own synchronization internally, and Platform Designer does not insert interrupt synchronizers for such receivers.
For clock crossing adaptation on interrupts, Platform Designer inserts a synchronizer that is clocked by the interrupt end point's interface clock whenever the corresponding interrupt start point has no associated clock or uses a different clock than the end point. Platform Designer inserts the adapter if there is any kind of mismatch between the start and end points, but does not insert the adapter if the interrupt receiver has no associated clock.
Individual Requests IRQ Scheme
Assigning IRQs in Platform Designer
IRQ Bridge
- set_interface_property <sender port> bridgesToReceiver <receiver port> — The <sender port> of the IP generates a signal that is received on the IP's <receiver port>. Sender ports are single bits. Receiver ports can be multiple bits. Platform Designer requires the bridgedReceiverOffset property to identify which <receiver port> bit the <sender port> drives.
- set_interface_property <sender port> bridgedReceiverOffset <port number> — Indicates the bit position <port number> of the receiver port that the <sender port> drives, as the sketch after this list illustrates.
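For illustration, the following _hw.tcl fragment is a minimal sketch that bridges a one-bit sender onto bit 3 of a receiver; the interface names sender_irq and receiver_irq are placeholders.
# Hypothetical interface names; the property names follow the list above.
set_interface_property sender_irq bridgesToReceiver receiver_irq
set_interface_property sender_irq bridgedReceiverOffset 3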
IRQ Mapper
By default, the interrupt sender connected to the receiver0 interface of the IRQ Mapper has the highest priority, and each subsequent receiver interface has successively lower priority. You can modify the interrupt priority of each IRQ wire by editing the IRQ priority number in the IRQ column in Platform Designer. The modified priority is reflected in the IRQ_MAP parameter of the auto-inserted IRQ Mapper.

IRQ Clock Crosser
Clock Interfaces
The Clock Source parameters allow you to set the following options:
- Clock frequency—The frequency of the output clock from this clock source.
- Clock frequency is known—When turned on, the clock frequency is known. When turned off, the frequency is set from outside the system. Note: If turned off, system generation may fail because the components do not receive the necessary clock information. For best results, turn this option on before system generation.
- Reset synchronous edges—Select one of the following options:
- None—The reset is asserted and deasserted asynchronously. You can use this setting if you have internal synchronization circuitry that matches the reset required for the IP in the system.
- Both—The reset is asserted and deasserted synchronously.
- Deassert—The reset is deasserted synchronously and asserted asynchronously.
For more information about synchronous design practices, refer to Recommended Design Practices.
HSSI (High Speed Serial Interface) Clock Interfaces
HSSI Serial Clock Interface
HSSI Serial Clock Source
You can instantiate the HSSI Serial Clock Source interface in the _hw.tcl file as:
add_interface <name> hssi_serial_clock start
You can connect the HSSI Serial Clock Source to multiple HSSI Serial Clock Sinks because the HSSI Serial Clock Source supports multiple fan-outs. This interface has a single clk port role limited to a 1-bit width, and a clockRate parameter, which is the frequency of the clock driven by the HSSI Serial Clock Source interface.
An unconnected and unexported HSSI Serial Source is valid and does not generate error messages.
Name | Direction | Width | Description |
---|---|---|---|
clk | Output | 1 bit | A single bit wide port role, which provides synchronization for internal logic. |

Name | Type | Default | Derived | Description |
---|---|---|---|---|
clockRate | long | 0 | No | The frequency of the clock driven by the HSSI Serial Clock Source interface. |
HSSI Serial Clock Sink
You can instantiate the HSSI Serial Clock Sink interface in the _hw.tcl file as:
add_interface <name> hssi_serial_clock end
You can connect the HSSI Serial Clock Sink interface to a single HSSI Serial Clock Source interface; you cannot connect it to multiple sources. This interface has a single clk port role limited to a 1-bit width, and a clockRate parameter, which is the frequency of the clock driven by the HSSI Serial Clock Source interface.
An unconnected and unexported HSSI Serial Sink is invalid and generates error messages.
Name | Direction | Width | Description |
---|---|---|---|
clk | Input | 1 bit | A single bit wide port role, which provides synchronization for internal logic. |

Name | Type | Default | Derived | Description |
---|---|---|---|---|
clockRate | long | 0 | No | The frequency of the clock driven by the HSSI Serial Clock Source interface. When you specify a clockRate greater than 0, this interface can be driven only at that rate. |
HSSI Serial Clock Connection
A valid HSSI Serial Clock Connection exists when all of the following criteria are satisfied; otherwise, Platform Designer generates error messages and prohibits the connection.
- The starting connection point is an HSSI Serial Clock Source with a single port role clk and maximum 1 bit in width. The direction of the starting port is Output.
- The ending connection point is an HSSI Serial Clock Sink with a single port role clk, and maximum 1 bit in width. The direction of the ending port is Input.
- If the clockRate parameter of the HSSI Serial Clock Sink is greater than 0, the connection is valid only if the clockRate of the HSSI Serial Clock Source is the same as the clockRate of the HSSI Serial Clock Sink.
HSSI Serial Clock Example
HSSI Serial Clock Interface Example
The following example shows how to declare the HSSI Serial Clock interfaces in an _hw.tcl file.
package require -exact qsys 14.0

set_module_property name hssi_serial_component
set_module_property ELABORATION_CALLBACK elaborate

add_fileset QUARTUS_SYNTH QUARTUS_SYNTH generate
add_fileset SIM_VERILOG SIM_VERILOG generate
add_fileset SIM_VHDL SIM_VHDL generate

set_fileset_property QUARTUS_SYNTH TOP_LEVEL "hssi_serial_component"
set_fileset_property SIM_VERILOG TOP_LEVEL "hssi_serial_component"
set_fileset_property SIM_VHDL TOP_LEVEL "hssi_serial_component"

proc elaborate {} {
    # declaring HSSI Serial Clock Source
    add_interface my_clock_start hssi_serial_clock start
    set_interface_property my_clock_start ENABLED true
    add_interface_port my_clock_start hssi_serial_clock_port_out clk Output 1

    # declaring HSSI Serial Clock Sink
    add_interface my_clock_end hssi_serial_clock end
    set_interface_property my_clock_end ENABLED true
    add_interface_port my_clock_end hssi_serial_clock_port_in clk Input 1
}

proc generate { output_name } {
    add_fileset_file hssi_serial_component.v VERILOG PATH "hssi_serial_component.v"
}
HSSI Serial Clock Instantiated in a Composed Component
If you use the components in a hierarchy, for example, instantiated in a composed component, you can declare the connections as illustrated in this example.
add_instance myinst1 hssi_serial_component
add_instance myinst2 hssi_serial_component
# add connection from source of myinst1 to sink of myinst2
add_connection myinst1.my_clock_start myinst2.my_clock_end hssi_serial_clock
# add connection from source of myinst2 to sink of myinst1
add_connection myinst2.my_clock_start myinst1.my_clock_end hssi_serial_clock
HSSI Bonded Clock Interface
HSSI Bonded Clock Source
You can instantiate the HSSI Bonded Clock Source interface in the _hw.tcl file as:
add_interface <name> hssi_bonded_clock start
You can connect the HSSI Bonded Clock Source to multiple HSSI Bonded Clock Sinks because the HSSI Bonded Clock Source supports multiple fan-outs. This interface has a single clk port role limited to a width range of 1 to 1024 bits. The HSSI Bonded Clock Source interface has two parameters: clockRate and serializationFactor. clockRate is the frequency of the clock driven by the HSSI Bonded Clock Source interface, and serializationFactor is the parallel data width that operates the HSSI TX serializer. The serialization factor determines the required frequency and phases of the individual clocks within the HSSI Bonded Clock interface.
An unconnected and unexported HSSI Bonded Source is valid, and does not generate error messages.
Name | Direction | Width | Description |
---|---|---|---|
clk | Output | 1 to 24 bits | A multiple bit wide port role which provides synchronization for internal logic. |
Name | Type | Default | Derived | Description |
---|---|---|---|---|
clockRate | long | 0 | No | The frequency of the clock driven by the HSSI Bonded Clock Source interface. |
serializationFactor | long | 0 | No | The serialization factor is the parallel data width that operates the HSSI TX serializer. The serialization factor determines the necessary frequency and phases of the individual clocks within the HSSI Bonded Clock interface. |
HSSI Bonded Clock Sink
You can instantiate the HSSI Bonded Clock Sink interface in the _hw.tcl file as:
add_interface <name> hssi_bonded_clock end
You can connect the HSSI Bonded Clock Sink interface to a single HSSI Bonded Clock Source interface; you cannot connect it to multiple sources. This interface has a single clk port role limited to a width range of 1 to 1024 bits. The HSSI Bonded Clock Sink interface has two parameters: clockRate and serializationFactor. clockRate is the frequency of the clock driven by the HSSI Bonded Clock Source interface, and the serialization factor is the parallel data width that operates the HSSI TX serializer. The serialization factor determines the required frequency and phases of the individual clocks within the HSSI Bonded Clock interface.
An unconnected and unexported HSSI Bonded Sink is invalid and generates error messages.
Name | Direction | Width | Description |
---|---|---|---|
clk | Input | 1 to 24 bits | A multiple bit wide port role which provides synchronization for internal logic. |
Name | Type | Default | Derived | Description |
---|---|---|---|---|
clockRate | long | 0 | No | The frequency of the clock driven by the HSSI Bonded Clock Source interface. |
serializationFactor | long | 0 | No | The serialization factor is the parallel data width that operates the HSSI TX serializer. The serialization factor determines the necessary frequency and phases of the individual clocks within the HSSI Bonded Clock interface. |
HSSI Bonded Clock Connection
A valid HSSI Bonded Clock Connection exists when all of the following criteria are satisfied; otherwise, Platform Designer generates error messages and prohibits the connection.
- The starting connection point is an HSSI Bonded Clock Source with a single port role clk with a width range of 1 to 24 bits. The direction of the starting port is Output.
- The ending connection point is an HSSI Bonded Clock Sink with a single port role clk with a width range of 1 to 24 bits. The direction of the ending port is Input.
- The width of the starting connection point clk must be the same as the width of the ending connection point.
- If the clockRate parameter of the HSSI Bonded Clock Sink is greater than 0, the connection is valid only if the clockRate of the HSSI Bonded Clock Source is the same as the clockRate of the HSSI Bonded Clock Sink.
- If the serializationFactor parameter of the HSSI Bonded Clock Sink is greater than 0, Platform Designer generates a warning if the serializationFactor of the HSSI Bonded Clock Source is not the same as the serializationFactor of the HSSI Bonded Clock Sink.
HSSI Bonded Clock Example
HSSI Bonded Clock Interface Example
The following example shows how to declare the HSSI Bonded Clock interfaces in an _hw.tcl file.
package require -exact qsys 14.0

set_module_property name hssi_bonded_component
set_module_property ELABORATION_CALLBACK elaborate

add_fileset synthesis QUARTUS_SYNTH generate
add_fileset verilog_simulation SIM_VERILOG generate

set_fileset_property synthesis TOP_LEVEL "hssi_bonded_component"
set_fileset_property verilog_simulation TOP_LEVEL "hssi_bonded_component"

proc elaborate {} {
    add_interface my_clock_start hssi_bonded_clock start
    set_interface_property my_clock_start ENABLED true
    add_interface_port my_clock_start hssi_bonded_clock_port_out clk Output 1024

    add_interface my_clock_end hssi_bonded_clock end
    set_interface_property my_clock_end ENABLED true
    add_interface_port my_clock_end hssi_bonded_clock_port_in clk Input 1024
}

proc generate { output_name } {
    add_fileset_file hssi_bonded_component.v VERILOG PATH "hssi_bonded_component.v"
}
HSSI Bonded Clock Instantiated in a Composed Component
If you use the components in a hierarchy, for example, instantiated in a composed component, you can declare the connections as illustrated in this example.
add_instance myinst1 hssi_bonded_component
add_instance myinst2 hssi_bonded_component
# add connection from source of myinst1 to sink of myinst2
add_connection myinst1.my_clock_start myinst2.my_clock_end hssi_bonded_clock
# add connection from source of myinst2 to sink of myinst1
add_connection myinst2.my_clock_start myinst1.my_clock_end hssi_bonded_clock
Reset Interfaces
You can choose to create a single global reset domain by selecting Create Global Reset Network on the System menu. If your design requires more than one reset domain, you can implement your own reset logic and connectivity. The IP Catalog includes a reset controller, reset sequencer, and a reset bridge to implement the reset functionality. You can also design your own reset logic.
Single Global Reset Signal Implemented by Platform Designer
The Platform Designer interconnect asserts the system-wide reset under the following conditions:
- The global reset input to the Platform Designer system is asserted.
- Any component asserts its resetrequest signal.
Reset Controller
The Reset Controller has the following parameters that you can specify to customize its behavior (a scripting sketch follows this list):
- Number of inputs—Indicates the number of individual reset interfaces that the controller ORs to create a single reset output.
- Output reset synchronous edges—Specifies the level of synchronization. You can select one of the following options:
- None—The reset is asserted and deasserted asynchronously. You can use this setting if you have designed internal synchronization circuitry that matches the reset style required for the IP in the system.
- Both—The reset is asserted and deasserted synchronously.
- Deassert—The reset is deasserted synchronously and asserted asynchronously.
- Synchronization depth—Specifies the number of register stages the synchronizer uses to eliminate the propagation of metastable events.
- Reset request—Enables reset request generation, which is an early signal that is asserted before reset assertion. The reset request is used by blocks that require protection from asynchronous inputs, for example, M20K blocks.
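The following qsys-script fragment is a minimal sketch of setting these options when instantiating a Reset Controller. The component name altera_reset_controller and the parameter names are assumptions based on typical generated scripts; verify them against your IP version in the parameter editor.
# Hypothetical instance; confirm component and parameter names for your IP version.
add_instance reset_controller_0 altera_reset_controller
# Two reset inputs are ORed into a single reset output.
set_instance_parameter_value reset_controller_0 NUM_RESET_INPUTS 2
# Deassert synchronously, assert asynchronously.
set_instance_parameter_value reset_controller_0 OUTPUT_RESET_SYNC_EDGES deassert
# Two register stages in the synchronizer.
set_instance_parameter_value reset_controller_0 SYNC_DEPTH 2
# Generate the early reset request output.
set_instance_parameter_value reset_controller_0 RESET_REQUEST_PRESENT 1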
Platform Designer automatically inserts reset synchronizers under the following conditions:
- More than one reset source is connected to a reset sink
- There is a mismatch between the reset source’s synchronous edges and the reset sinks’ synchronous edges
Reset Bridge
The Reset Bridge parameters describe the incoming reset and include the following options:
- Active low reset—When turned on, reset is asserted low.
- Synchronous edges—Specifies the level of synchronization and includes the following options:
- None—The reset is asserted and deasserted asynchronously. Use this setting if you have internal synchronization circuitry.
- Both—The reset is asserted and deasserted synchronously.
- Deassert—The reset is deasserted synchronously, and asserted asynchronously.
- Number of reset outputs—The number of reset interfaces that are exported.
Reset Sequencer
The Parameter Editor displays the expected assertion and deassertion sequences based on the current settings. You can connect multiple reset sources to the reset sequencer, and then connect the outputs of the Reset Sequencer to components in the system.
Reset Sequencer Parameters
Parameter | Description |
---|---|
Number of reset outputs | Sets the number of output resets to be sequenced, which is the number of output reset signals defined in the component with a range of 2 to 10. |
Number of reset inputs | Sets the number of input reset signals to be sequenced, which is the number of input reset signals defined in the component with a range of 1 to 10. |
Minimum reset assertion time | Specifies the minimum assertion cycles between the assertion of the last sequenced reset, and the deassertion of the first sequenced reset. The range is 0 to 1023. |
Enable Reset Sequencer CSR | Enables CSR functionality of the Reset Sequencer through an Avalon® interface. |
reset_out# | Lists the reset output signals. Set the parameters in the other columns for each reset signal in the table. |
ASRT Seq# | Determines the order of reset assertion. Enter the values 1, 2, 3, etc. to specify the required non-overlapping assertion order. This value determines the ASRT_REMAP value in the component HDL. |
ASRT Cycle# | Number of cycles to wait before assertion of the reset. The value set here corresponds to the ASRT_DELAY value in the component HDL. The range is 0 to 1023. |
DSRT Seq# | Determines the order of reset deassertion. Enter the values 1, 2, 3, etc. to specify the required non-overlapping deassertion order. This value determines the DSRT_REMAP value in the component HDL. |
DSRT Cycle#/Deglitch# | Number of cycles to wait before deasserting or deglitching the reset. If the USE_DSRT_QUAL parameter is set to 0, this value specifies the number of cycles to wait before deasserting the reset. If USE_DSRT_QUAL is set to 1, this value specifies the number of cycles to deglitch the input reset_dsrt_qual signal. This value determines either the DSRT_DELAY or the DSRT_QUALCNT value in the component HDL, depending on the USE_DSRT_QUAL parameter setting. The range is 0 to 1023. |
USE_DSRT_QUAL | If you set USE_DSRT_QUAL to 1, the deassertion sequence waits for an external input signal for sequence qualification instead of waiting for a fixed delay count. To use a fixed delay count for deassertion, set this parameter to 0. |
Reset Sequencer Timing Diagrams


Reset Sequencer CSR Registers
The Reset Sequencer's CSR registers provide the following functionality:
- Support reset logging:
  - Ability to identify which reset is asserted.
  - Ability to determine whether any reset is currently active.
- Support software-triggered resets:
  - Ability to generate a reset by writing to the register.
  - Ability to disable the assertion or deassertion sequence.
- Support software-sequenced reset:
  - Ability for the software to fully control the assertion/deassertion sequence by writing to registers and stepping through the sequence.
- Support reset override:
  - Ability to assert a specific component reset through software.
The following table summarizes the CSR map; a System Console access sketch follows the table.
Register | Offset | Width | Reset Value | Description |
---|---|---|---|---|
Status Register | 0x00 | 32 | 0x0 | The Status register reports which reset sources triggered a reset and the current state of the Reset Sequencer. |
Interrupt Enable Register | 0x04 | 32 | 0x0 | The Interrupt Enable register bits enable events triggering the IRQ of the reset sequencer. |
Control Register | 0x08 | 32 | 0x0 | The Control register allows you to control the Reset Sequencer. |
Software Sequenced Reset Assert Control Register | 0x0C | 32 | 0x3FF | You can program the Software Sequenced Reset Assert control register to control the reset assertion sequence. |
Software Sequenced Reset Deassert Control Register | 0x10 | 32 | 0x3FF | You can program the Software Sequenced Reset Deassert register to control the reset deassertion sequence. |
Software Direct Controlled Resets | 0x14 | 32 | 0x0 | You can write a bit to 1 to assert the reset_outN signal, and to 0 to deassert the reset_outN signal. |
Software Reset Masking | 0x18 | 32 | 0x0 | Masking off (writing 1 to) a reset_outN Reset Mask Enable bit prevents the corresponding reset from being asserted. Writing 0 to a Reset Mask Enable bit allows assertion of reset_outN. |
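As a sketch of how software might exercise these registers, the following System Console Tcl assumes the Reset Sequencer CSR is reachable through a JTAG-to-Avalon master bridge and that the CSR base lands at offset 0x0 of that master's address map; adjust the service path and base offset for your system.
# Open the first JTAG master service found in the design (assumes one exists).
set m [lindex [get_service_paths master] 0]
open_service master $m
# Initiate the hardware-sequenced warm reset: Control register (offset 0x08), bit 0.
master_write_32 $m 0x08 0x1
# Read the Status register (offset 0x00) to see which resets were triggered...
set status [master_read_32 $m 0x00 1]
# ...and clear the reported bits by writing 1 to them (RW1C behavior).
master_write_32 $m 0x00 $status
close_service master $m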
Reset Sequencer Status Register
You can clear bits by writing 1 to the bit location. The Reset Sequencer ignores attempts to write bits with a value of 0. If the sequencer is reset (power-on-reset), all bits are cleared, except the power-on-reset bit.
Bit | Attribute | Default | Description |
---|---|---|---|
31 | RO | 0 | Reset Active—Indicates that the sequencer is currently active in reset sequence (assertion or deassertion). |
30 | RW1C | 0 | Reset Asserted and waiting for SW to proceed—Set when there is an active reset assertion, and the next sequence is waiting for the software to proceed. Only valid when the Enable SW sequenced reset assert option is turned on. |
29 | RW1C | 0 | Reset Deasserted and waiting for SW to proceed—Set when there is an active reset deassertion, and the next sequence is waiting for the software to proceed. Only valid when the Enable SW sequenced reset deassert option is turned on. |
28:26 | Reserved. | ||
25:16 | RW1C | 0 | Reset deassertion input qualification signal reset_dsrt_qual[9:0] status—Indicates that the reset deassertion input qualification signal is set. This bit is set on detection of assertion of the signal. |
15:12 | Reserved. | ||
11 | RW1C | 0 | reset_in9 was triggered—Indicates that reset_in9 triggered the reset. Software clears this bit by writing 1 to this location. |
10 | RW1C | 0 | reset_in8 was triggered—Indicates that reset_in8 triggered the reset. Software clears this bit by writing 1 to this location. |
9 | RW1C | 0 | reset_in7 was triggered—Indicates that reset_in7 triggered the reset. Software clears this bit by writing 1 to this location. |
8 | RW1C | 0 | reset_in6 was triggered—Indicates that reset_in6 triggered the reset. Software clears this bit by writing 1 to this location. |
7 | RW1C | 0 | reset_in5 was triggered—Indicates that reset_in5 triggered the reset. Software clears this bit by writing 1 to this location. |
6 | RW1C | 0 | reset_in4 was triggered—Indicates that reset_in4 triggered the reset. Software clears this bit by writing 1 to this location. |
5 | RW1C | 0 | reset_in3 was triggered—Indicates that reset_in3 triggered the reset. Software clears this bit by writing 1 to this location. |
4 | RW1C | 0 | reset_in2 was triggered—Indicates that reset_in2 triggered the reset. Software clears this bit by writing 1 to this location. |
3 | RW1C | 0 | reset_in1 was triggered—Indicates that reset_in1 triggered the reset. Software clears this bit by writing 1 to this location. |
2 | RW1C | 0 | reset_in0 was triggered—Indicates that reset_in0 triggered the reset. Software clears this bit by writing 1 to this location. |
1 | RW1C | 0 | Software-triggered reset—Indicates that software set the software-triggered reset bit, triggering a reset. |
0 | RW1C | 0 | Power-on-reset was triggered—Asserted whenever the reset to the sequencer is triggered. This bit is NOT reset when the sequencer is reset. Software clears this bit by writing 1 to this location. |
Reset Sequencer Interrupt Enable Register
Bit | Attribute | Default | Description |
---|---|---|---|
31 | Reserved. | ||
30 | RW | 0 | Interrupt on Reset Asserted and waiting for SW to proceed enable. When set, the IRQ is set when the sequencer is waiting for the software to proceed in an assertion sequence. |
29 | RW | 0 | Interrupt on Reset Deasserted and waiting for SW to proceed enable. When set, the IRQ is set when the sequencer is waiting for the software to proceed in a deassertion sequence. |
28:26 | Reserved. | ||
25:16 | RW | 0 | Interrupt on Reset deassertion input qualification signal reset_dsrt_qual_[9:0] status— When set, the IRQ is set when the reset_dsrt_qual[9:0] status bit (per bit enable) is set. |
15:12 | Reserved. | ||
11 | RW | 0 | Interrupt on reset_in9 Enable—When set, the IRQ is set when the reset_in9 trigger status bit is set. |
10 | RW | 0 | Interrupt on reset_in8 Enable—When set, the IRQ is set when the reset_in8 trigger status bit is set. |
9 | RW | 0 | Interrupt on reset_in7 Enable—When set, the IRQ is set when the reset_in7 trigger status bit is set. |
8 | RW | 0 | Interrupt on reset_in6 Enable—When set, the IRQ is set when the reset_in6 trigger status bit is set. |
7 | RW | 0 | Interrupt on reset_in5 Enable—When set, the IRQ is set when the reset_in5 trigger status bit is set. |
6 | RW | 0 | Interrupt on reset_in4 Enable—When set, the IRQ is set when the reset_in4 trigger status bit is set. |
5 | RW | 0 | Interrupt on reset_in3 Enable—When set, the IRQ is set when the reset_in3 trigger status bit is set. |
4 | RW | 0 | Interrupt on reset_in2 Enable—When set, the IRQ is set when the reset_in2 trigger status bit is set. |
3 | RW | 0 | Interrupt on reset_in1 Enable—When set, the IRQ is set when the reset_in1 trigger status bit is set. |
2 | RW | 0 | Interrupt on reset_in0 Enable—When set, the IRQ is set when the reset_in0 trigger status bit is set. |
1 | RW | 0 | Interrupt on Software triggered reset Enable—When set, the IRQ is set when the software triggered reset status bit is set. |
0 | RW | 0 | Interrupt on Power-On-Reset Enable—When set, the IRQ is set when the power-on-reset status bit is set. |
Reset Sequencer Control Register
Bit | Attribute | Default | Description |
---|---|---|---|
31:3 | Reserved. | ||
2 | RW | 0 | Enable SW sequenced reset assert—Enable a software sequenced reset assert sequence. Timer delays and input qualification are ignored, and only the software can sequence the assert. |
1 | RW | 0 | Enable SW sequenced reset deassert—Enable a software sequenced reset deassert sequence. Timer delays and input qualification are ignored, and only the software can sequence the deassert. |
0 | WO | 0 | Initiate Reset Sequence—To trigger the hardware-sequenced warm reset, software writes this bit to 1 a single time. Software should verify that Reset Active is 0 before setting this bit; this bit always reads 0. To monitor the sequence, verify that Reset Active is asserted, and then subsequently deasserted. |
Reset Sequencer Software Sequenced Reset Assert Control Register
When the corresponding enable bit is set, the sequencer stops when the desired reset asserts, and then sets the Reset Asserted and waiting for SW to proceed bit. The Reset Sequencer proceeds only after the Reset Asserted and waiting for SW to proceed bit is cleared.
Bit | Attribute | Default | Description |
---|---|---|---|
31:10 | Reserved. | ||
9:0 | RW | 0x3FF | Per-reset SW sequenced reset assert enable—This is a per-bit enable for SW-sequenced reset assert. If bitN of this register is set, the sequencer sets bit 30 of the Status register when resetN is asserted. It then waits for bit 30 of the Status register to clear before proceeding with the sequence. By default, all bits are enabled (fully SW sequenced). |
Reset Sequencer Software Sequenced Reset Deassert Control Register
When the corresponding enable bit is set, the sequencer stops when the desired reset deasserts, and then sets the Reset Deasserted and waiting for SW to proceed bit. The Reset Sequencer proceeds only after the Reset Deasserted and waiting for SW to proceed bit is cleared.
Bit | Attribute | Default | Description |
---|---|---|---|
31:10 | Reserved. | ||
9:0 | RW | 0x3FF | Per-reset SW sequenced reset deassert enable—This is a per-bit enable for SW-sequenced reset deassert. If bitN of this register is set, the sequencer sets bit 29 of the Status register when resetN is deasserted. It then waits for bit 29 of the Status register to clear before proceeding with the sequence. By default, all bits are enabled (fully SW sequenced). |
Reset Sequencer Software Direct Controlled Resets
Bit | Attribute | Default | Description |
---|---|---|---|
31:26 | Reserved. | ||
25:16 | WO | 0 | Reset Overwrite Trigger Enable—Per-bit trigger that causes the corresponding Reset Overwrite Value to take effect. |
15:10 | Reserved. | ||
9:0 | WO | 0 | reset_outN Reset Overwrite Value—This is a per-bit control of the reset_out bit. Software can use this to force the reset to a specific value. A value of 1 sets reset_out; a value of 0 clears reset_out. A write to this register takes effect only if the corresponding trigger bit in this register is set. |
Reset Sequencer Software Reset Masking
Bit | Attribute | Default | Description |
---|---|---|---|
31:10 | Reserved. | ||
9:0 | RW | 0 | reset_outN "Reset Mask Enable"—This is a per-bit control to mask off the reset_outN bit. Software Reset Masking prevents the reset bit from being asserted during a reset assertion sequence. If reset_out is already asserted, it does not deassert the reset. |
Reset Sequencer Software Flows
Reset Sequencer (Software-Triggered) Flow
Reset Assert Flow
The following flow sequence occurs for a Reset Assert Flow:
- A reset is triggered either by the software, or when input resets to the Reset Sequencer are asserted.
- The IRQ is asserted if the IRQ is enabled.
- Software reads the Status register to determine which reset was triggered.
Reset Deassert Flow
The following flow sequence occurs for a Reset Deassert Flow:
- When a reset source is deasserted, or when the reset assert sequence has completed without pending resets asserted, the deassertion flow is initiated.
- The IRQ is asserted if the IRQ is enabled.
- Software reads the Status Register to determine which reset was triggered.
Reset Assert (Software Sequenced) Flow
Reset Deassert (Software Sequenced) Flow
The sequence and flow are similar to the Reset Assert (SW Sequenced) flow, except that this flow uses the reset deassert registers and bits instead of the reset assert registers and bits.
Conduits
The PCI Express-to-Ethernet example in Creating a System with Platform Designer shows the use of a conduit interface for export. You can declare an associated clock interface for conduit interfaces with the associatedClock property, in the same way as for memory-mapped interfaces; a sketch follows the list below.
To connect two conduit interfaces inside Platform Designer, the following conditions must be met:
- The interfaces must match exactly with the same signal roles and widths.
- The interfaces must have opposite directions.
- Clocked conduit connections must have matching associatedClocks on each of their endpoint interfaces.
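The following _hw.tcl fragment is a minimal sketch of one side of such a connection: a conduit with an associated clock. The interface name, port name, role, and width are placeholders; the conduit on the other side of the connection must declare the same role and width with the opposite direction.
# Hypothetical conduit carrying an 8-bit "data" role, associated with the clock interface "clk".
add_interface sensor_conduit conduit end
set_interface_property sensor_conduit associatedClock clk
add_interface_port sensor_conduit sensor_data data Output 8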
Interconnect Pipelining
Pipeline stages increase a design's fMAX by reducing the combinational logic depth, at the cost of additional latency and logic.
The Limit interconnect pipeline stages to option in the Interconnect Requirements tab allows you to define the maximum number of Avalon® -ST pipeline stages that Platform Designer can insert during generation. You can specify between 0 and 4 pipeline stages, where 0 means that the interconnect has a combinational datapath. Choosing 3 or 4 pipeline stages may significantly increase the logic utilization of the system.
Platform Designer adds additional latency once on the command path, and once on the response path.
This setting is specific for each Platform Designer system or subsystem, so you can specify a unique interconnect pipeline stage value for each subsystem.
The insertion of pipeline stages depends upon the existence of certain interconnect components. For example, single-slave systems do not have multiplexers; therefore, multiplexer pipelining does not occur. In the extreme case of a single-master to single-slave system, no pipelining occurs, regardless of the value of the Limit interconnect pipeline stages to option.
You can manually adjust the number of pipeline stages in the Platform Designer Memory-Mapped Interconnect tab.
Manually Control Pipelining in the Platform Designer Interconnect
Access the Memory-Mapped Interconnect tab by clicking System > Show System With Platform Designer Interconnect.
- In the Intel® Quartus® Prime software, compile the design and run timing analysis.
- From the timing analysis output, identify the critical path through the interconnect and determine the approximate mid-point.
- In Platform Designer, click System > Show System With Platform Designer Interconnect.
- In the Memory-Mapped Interconnect tab, select the interconnect module that contains the critical path. You can determine the name of the module from the hierarchical node names in the timing report.
- Click Show Pipelinable Locations. Platform Designer displays all possible pipeline locations in the interconnect. Right-click a possible pipeline location to insert or remove a pipeline stage.
- Locate the possible pipeline location that is closest to the mid-point of the critical path. The names of the blocks in the memory-mapped interconnect tab correspond to the module instance names in the timing report.
- Right-click the location where you want to insert a pipeline, and then click Insert Pipeline.
- Regenerate the Platform Designer system, recompile the design, and then rerun timing analysis.
- If necessary, repeat the manual pipelining process until the design meets the timing requirements.
Manual pipelining has the following limitations:
- If you make changes to the original system's connectivity after manually pipelining an interconnect, the inserted pipelines may become invalid. Platform Designer displays warning messages when you generate the system if invalid pipeline stages are detected. You can remove invalid pipeline stages with the Remove Stale Pipelines option in the Memory-Mapped Interconnect tab. Do not make changes to the system's connectivity after manual pipeline insertion.
- Review manually-inserted pipelines when upgrading to newer versions of Platform Designer. Manually-inserted pipelines in one version of Platform Designer may not be valid in a future version.
Error Correction Coding (ECC) in Platform Designer Interconnect
As transistors become smaller, computer hardware is more susceptible to data corruption from Single Event Upsets (SEUs), which increase the Failure in Time (FIT) rate of computer systems. SEU events without error notification can leave the system stuck in an unknown response state and further increase the FIT rate.
Before writing data to the memory device, the ECC logic encodes the data bus with a Hamming code. Then, the ECC logic decodes and performs error checking on the data output.
When you enable ECC, Platform Designer interconnect sends uncorrectable errors arising from memory as DECODEERROR (DECERR) on the Avalon® response bus.
AMBA 3 AXI Protocol Specification Support (version 1.0)
Channels
Read and Write Address Channels
Most signals are allowed. However, the following limitations are present in Platform Designer 14.0:
- Supports 64-bit addressing.
- ID width limited to 18 bits.
- HPS-FPGA master interface has a 12-bit ID.
Write Data, Write Response, and Read Data Channels
Most signals are allowed. However, the following limitations are present in Platform Designer 14.0:
- Data widths limited to a maximum of 1024 bits.
- Limited to a fixed byte width of 8 bits.
Low Power Channel
Cache Support
Bufferable
When connecting to Avalon® -MM slaves, since they do not have write responses, the following exceptions apply:
- For Avalon® -MM slaves, the write response is generated by the slave agent once the slave accepts the write transaction. The following limitation exists for an Avalon® bridge:
- For an Avalon® bridge, the response is generated before the write reaches the endpoint. You must be aware of this limitation and either avoid multiple paths past the bridge to any endpoint slave, or perform only bufferable transactions to an Avalon® bridge.
Cacheable (Modifiable)
Platform Designer does not change the address, burst length, or burst size of non-modifiable transactions, with the following exceptions:
- Platform Designer considers a wide transaction to a narrow slave as modifiable because the size requires reduction.
- Platform Designer may consider AXI read and write transactions as modifiable when the destination is an Avalon® slave. The AXI transaction may be split into multiple Avalon® transactions if the slave is unable to accept the transaction. This may occur because of burst lengths, narrow sizes, or burst types.
Platform Designer ignores all other bits, for example, read allocate or write allocate because the interconnect does not perform caching. By default, Platform Designer considers Avalon® master transactions as non-bufferable and non-cacheable, with the allocate bits tied low.
Security Support
The interconnect passes the AWPROT and ARPROT signals to the endpoint slave without modification. It does not use or modify the PROT bits.
Refer to Manage System Security in Creating a System with Platform Designer for more information about secure systems and the TrustZone* feature.
Atomic Accesses
Response Signaling
Ordering Model
To prevent reordering, for slaves that accept reordering depths greater than 0, Platform Designer does not transfer the transaction ID from the master, but provides a constant transaction ID of 0. For slaves that do not reorder, Platform Designer allows the transaction ID to be transferred to the slave. To avoid cyclic dependencies, Platform Designer supports a single outstanding slave scheme for both reads and writes. Changing the targeted slave before all responses have returned stalls the master, regardless of transaction ID.
AXI and Avalon Ordering
According to the AMBA* Protocol Specifications, there is no ordering requirement between reads and writes. However, Avalon® has an implicit ordering model that requires transactions from a master to the same slave to be in order.
In response to this potential risk, Avalon® interfaces provide a compile-time option to enforce strict order. When turned on, the Avalon® interface waits for outstanding write responses before issuing reads.
Data Buses
Unaligned Address Commands
Avalon and AXI Transaction Support
Transaction Cannot Cross 4KB Boundaries
Handling Read Side Effects
- For a 32-bit AXI master that issues a read command with an unaligned address starting at address 0x01, and a burstcount of 2 to a 32-bit Avalon® slave, the starting address is: 0x00.
- For a 32-bit AXI master that issues a read command with an unaligned address starting at address 0x01, with 4-bytes to an 8-bit AXI slave, the starting address is: 0x00.
AMBA 3 APB Protocol Specification Support (version 1.0)
Platform Designer allows connections between APB components, and AMBA* 3 AXI, AMBA* 4 AXI, and Avalon® memory-mapped interfaces. The following sections describe unique or exceptional APB support in the Platform Designer software.
Bridges
Intel recommends as an alternative that you instantiate the APB bridge and all the APB slaves in Platform Designer. You should then connect the slave side of the bridge to any high speed interface and connect the master side of the bridge to the APB slaves. Platform Designer creates the interconnect on either side of the APB bridge and creates only one PSEL signal.
Alternatively, you can connect a bridge to the APB bus outside of Platform Designer. Use an Avalon® /AXI bridge to export the Avalon® /AXI master to the top level, and then connect this Avalon® /AXI interface to the slave side of the APB bridge. Alternatively, instantiate the APB bridge in Platform Designer, export the APB master to the top level, and from there connect to the APB bus outside of Platform Designer.
Burst Adaptation
Width Adaptation
Error Response
AMBA 4 AXI Memory-Mapped Interface Support (version 2.0)
Burst Support
For narrow-sized transfers, bursts with Avalon® slaves as destinations are shortened to multiple non-bursting transactions in order to transmit the correct address to the slaves, since Avalon® slaves always perform full-sized data width transactions.
Bursts with AMBA* 3 AXI slaves as destinations are shortened to multiple bursts, with each burst length less than or equal to 16. Bursts with AMBA* 4 AXI slaves as destinations are not shortened.
QoS
Transactions from AMBA* 3 AXI and Avalon® masters have a default value of 4'b0000, which indicates that the transactions are not part of the QoS flow. QoS values are not used for slaves that do not support QoS.
For Platform Designer 14.0, there are no programmable QoS registers or compile-time QoS options that override a master's real or default value.
Regions
Write Response Dependency
AWCACHE and ARCACHE
Width Adaptation and Data Packing in Platform Designer
The following rules apply:
- Data packing is supported when masters and slaves are Avalon® -MM.
- Data packing is not supported when any master or slave is an AMBA* 3 AXI, AMBA* 4 AXI, or APB component.
For example, for a read/write command with a 32-bit master connected to a 64-bit slave, and a transaction of 2 burstcounts, Platform Designer sends 2 separate read/write commands to access the 64-bit data width of the slave. Data packing is only supported if the system does not contain AMBA* 3 AXI, AMBA* 4 AXI, or APB masters or slaves.
Ordering Model
The following describes the required behavior for the device non-bufferable memory type:
- Write response must be obtained from the final destination.
- Read data must be obtained from the final destination.
- Transaction characteristics must not be modified.
- Reads must not be pre-fetched. Writes must not be merged.
- Non-modifiable read and write transactions (AWCACHE[1] = 0 or ARCACHE[1] = 0) from the same ID to the same slave must remain ordered. The interconnect always provides responses in the same order as the commands issued. Slaves that support reordering are provided a constant transaction ID to prevent reordering. AXI slaves that do not reorder are provided with transaction IDs, which allows exclusive accesses to be used for such slaves.
Read and Write Allocate
Locked Transactions
Memory Types
Mismatched Attributes
Signals
AMBA 4 AXI Streaming Interface Support (version 1.0)
Connection Points
The connection is point-to-point without adaptation and must be between an axi4stream_master and an axi4stream_slave. Connected interfaces must have the same port roles and widths.
Non-matching master-to-slave connections, and multiple-master to multiple-slave connections, are not supported.
AMBA 4 AXI Streaming Connection Point Parameters
Name | Type | Description |
---|---|---|
associatedClock | string | Name of associated clock interface. |
associatedReset | string | Name of associated reset interface |
AMBA 4 AXI Streaming Connection Point Signals
Port Role | Width | Master Direction | Slave Direction | Required |
---|---|---|---|---|
tvalid | 1 | Output | Input | Yes |
tready | 1 | Input | Output | No |
tdata | 8:4096 | Output | Input | No |
tstrb | 1:512 | Output | Input | No |
tkeep | 1:512 | Output | Input | No |
tid | 1:8 | Output | Input | No |
tdest | 1:4 | Output | Input | No |
tuser | 1:4096 | Output | Input | No |
tlast | 1 | Output | Input | No |
Adaptation
AMBA 4 AXI-Lite Protocol Specification Support (version 2.0)
Platform Designer 14.0 supports the following AMBA* 4 AXI-Lite features:
- Transactions with a burst length of 1.
- Data accesses use the full width of the data bus (32-bit or 64-bit); narrow-sized transactions are not supported.
- Non-modifiable and non-bufferable accesses.
- No exclusive accesses.