Intel® oneAPI DPC++/C++ Compiler Developer Guide and Reference

ID 767253
Date 9/08/2022
Public



Methods to Optimize Code Size

This section provides guidance on how to achieve smaller object and executable sizes when using the optimizing features of Intel compilers.

There are two compiler options that are designed to prioritize code size over performance:

  • Os: Favors size over speed. This option enables optimizations that do not increase code size; it produces smaller code size than option O2. Option Os disables some optimizations that may increase code size for a small speed benefit.

  • O1: Minimizes code size. Compared to option Os, option O1 disables even more optimizations that are generally known to increase code size. Specifying option O1 implies option Os. As an intermediate step in reducing code size, you can replace option O3 with option O2 before specifying option O1. Option O1 may improve performance for applications with very large code size, many branches, and execution time not dominated by code within loops.
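
For example, a minimal command line for each platform (the icpx and icx driver names and the util.cpp file name are illustrative assumptions; substitute your own compiler driver and sources):

  Linux:   icpx -Os -c util.cpp      (or icpx -O1 -c util.cpp)
  Windows: icx /Os /c util.cpp       (or icx /O1 /c util.cpp)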

For more information about compiler options mentioned in this topic, see their full descriptions in the Compiler Reference.

The rest of this topic briefly discusses other methods that may help you further improve code size even when compared to the default behaviors of options Os and O1.

Things to remember:

  • Some of these methods may already be applied by default when options Os and O1 are specified. All the methods mentioned in this topic can be applied at higher optimization levels.

  • Some of the options referred to in this topic will not necessarily reduce code size, and they may produce varying results (good, bad, or neutral) depending on the characteristics of the target code. Still, they are the recommended options to try to see whether they make your binaries smaller while maintaining acceptable performance.

Disable or Decrease the Amount of Inlining

Inlining replaces a call to a function with the body of the function. This lets the compiler optimize the code for the inlined function in the context of its caller, usually yielding more specialized and better performing code. This also removes the overhead of calling the function at runtime.

However, replacing a call to a function by the code for that function usually increases code size. The code size increase can be substantial. To eliminate this code size increase, at the cost of the potential performance improvement, inlining can be disabled.

  • Advantage: Disabling or reducing this optimization can reduce code size.
  • Disadvantage: Performance is likely to be sacrificed by disabling or reducing inlining, especially for applications with many small functions.

Use options:

Linux

fno-inline

Windows

Ob0
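
For example, a minimal sketch of compiling one source file with inlining disabled (the icpx/icx driver names and the hotpath.cpp file name are assumptions for illustration):

  Linux:   icpx -O2 -fno-inline -c hotpath.cpp
  Windows: icx /O2 /Ob0 /c hotpath.cpp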

Strip Symbols from Your Binaries

You can specify a compiler option to omit debugging and symbol information from the executable without sacrificing its operability.

  • Advantage: This method noticeably reduces the size of the binary.
  • Disadvantage: It may be very difficult to debug a stripped application.

Use options:

Linux

Wl,--strip-all

Windows

None
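
For example, a Linux link line that strips symbols from the final binary might look like this (the icpx driver name and file names are illustrative assumptions):

  icpx -O1 main.cpp -o app -Wl,--strip-all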

Dynamically Link Intel-provided Libraries

By default, some of the Intel support and performance libraries are linked statically into an executable. As a result, the library code is linked into every executable being built, which means the code is duplicated.

It may be more profitable to link them dynamically.

  • Advantage: Performance of the resulting executable is normally not significantly affected. Library code that would otherwise be linked statically into every executable does not contribute to the code size of each executable with this option. This code is shared between all executables that use it and is available independently of those executables.
  • Disadvantage: The libraries on which the resulting executable depends must be redistributed with the executable for it to work properly. When libraries are linked statically, only the library content that is actually used is linked into the executable, whereas dynamic libraries contain all the library content. Therefore, this option may not be beneficial if you only need to build and/or distribute a single executable. The executable itself may be much smaller when linked dynamically than when linked statically; however, the total size of the executable plus shared libraries or DLLs may be much larger than the size of the statically linked executable.

Use Options:

Linux

shared-intel

Windows

MD

NOTE:
Option MD affects all libraries, not only the Intel-provided ones.
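
A minimal sketch of building with dynamically linked Intel-provided libraries, assuming the icpx and icx drivers (file names are illustrative):

  Linux:   icpx -O2 -shared-intel main.cpp -o app
  Windows: icx /O2 /MD main.cpp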

Exclude Unused Code and Data from the Executable

Programs often contain dead code or data that is not used during their execution. Even if no expensive whole-program inter-procedural analysis is made at compile time to identify dead code, there are compiler options you can specify to eliminate unused functions and data at link time.

This method is often referred to as function-level or data-level linking.

  • Advantage: Only the code that is referenced remains in the executable; dead functions and data are stripped out. The options passed to the linker also enable it to reorder sections, which makes further optimization possible.
  • Disadvantage: Object files may become slightly larger because each function or data item is placed in its own section; this overhead is eliminated at the linking stage. This method requires linker support to strip unused sections and may increase link time.

Use Options:

Linux

-fdata-sections -ffunction-sections -Wl,--gc-sections

Windows

/Gw /Gy /link /OPT:REF


These options (from the use options example above) are passed to the linker:

Linux

-Wl,--gc-sections

Windows

/link /OPT:REF
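
Putting the compiler and linker options together, an illustrative build sketch (the driver and file names are assumptions) might look like this:

  Linux:   icpx -O2 -fdata-sections -ffunction-sections main.cpp util.cpp -o app -Wl,--gc-sections
  Windows: icx /O2 /Gw /Gy main.cpp util.cpp /link /OPT:REF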

Disable Recognition and Expansion of Intrinsic Functions

When intrinsic functions are recognized, they can be expanded inline, or a faster library implementation may be assumed and linked in. By default, inline expansion of intrinsic functions is enabled.

In some cases, disabling this behavior may noticeably reduce the size of the produced object file or binary.

  • Advantage: Both the size of the object files and the size of the library code brought into an executable can be reduced.
  • Disadvantage: This method can prevent various performance optimizations, and a slower standard library implementation may be used. The size of the final executable can increase when the library code pulled in statically for an intrinsic that would otherwise be inlined is large.

Use Options:

Linux

fno-builtin

Windows

Oi-

Additional information:

  • This option is already the default if you specify option O1.

  • For C++, you can specify Linux option nolib-inline to disable inline expansion of standard library or intrinsic functions.

  • Depending on code characteristics, this option can sometimes increase binary size.
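
For example, a sketch of compiling with intrinsic recognition disabled (the driver and file names are illustrative assumptions):

  Linux:   icpx -O2 -fno-builtin -c strings.cpp
  Windows: icx /O2 /Oi- /c strings.cpp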

Optimize Exception Handling Data

For SYCL, enabling and disabling exception handling is supported for host compilation.

If a program requires support for exception handling, the compiler creates a special section containing DWARF directives that are used by the Linux runtime to unwind and catch an exception.

This information is found in the .eh_frame section and may be shrunk using the compiler options listed below.

  • Advantage:

    These options may shrink the size of the object or binary file by up to 15%, though the amount of the reduction depends on the target platform. These options control whether unwind information is precise at an instruction boundary or at a call boundary. For example, option fno-asynchronous-unwind-tables can be used for programs that may only throw or catch exceptions.

  • Disadvantage: Both options may change the program's behavior. Do not use option fno-exceptions for programs that require standard C++ handling for objects of classes with destructors. Do not use option fno-asynchronous-unwind-tables for functions compiled with option -fexceptions that contain calls to other functions that might throw exceptions or for C++ functions that declare objects with destructors.

Use Options:

Linux

fno-exceptions or fno-asynchronous-unwind-tables

Windows

None

Read the compiler option descriptions, which explain what the defaults and behavior are for each target platform.
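
As an illustrative sketch only, for a Linux translation unit that is assumed to neither throw nor catch exceptions and to meet the restrictions above (the icpx driver and file name are assumptions):

  icpx -O2 -fno-asynchronous-unwind-tables -c compute.cpp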

Disable Loop Unrolling

Unrolling a loop increases the size of the loop proportionally to the unroll factor.

Disabling (or limiting) this optimization may help reduce code size at the expense of performance.

  • Advantage: Code size is reduced.
  • Disadvantage: Performance of otherwise unrolled loops may noticeably degrade because this limits other possible loop optimizations.

Use Options:

Linux

unroll=0

Windows

Qunroll:0

NOTE:
This Windows option is not available for SYCL.

Additional information:

This option is already the default if you specify option Os or option O1.
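
At higher optimization levels, you can still suppress unrolling explicitly; an illustrative sketch (the driver and file names are assumptions):

  Linux:   icpx -O2 -unroll=0 -c loops.cpp
  Windows: icx /O2 /Qunroll:0 /c loops.cpp    (not available for SYCL, as noted above)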

Disable Automatic Vectorization

The compiler finds possibilities to use SIMD (Intel® Streaming SIMD Extensions (Intel® SSE)/Intel® Advanced Vector Extensions (Intel® AVX)) instructions to improve performance of applications. This optimization is called automatic vectorization.

In most cases, this optimization involves transformation of loops and increases code size, in some cases significantly.

Disabling this optimization may help reduce code size at the expense of performance.

  • Advantage: Code size is reduced, and compile time is also improved significantly.
  • Disadvantage: Performance of otherwise vectorized loops may suffer significantly. If you care about the performance of your application, you should use this option selectively to suppress vectorization on everything except performance-critical parts.

Use Options:

Linux

no-vec

Windows

Qvec-

Additional information:

Depending on code characteristics, this option can sometimes increase binary size.
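
An illustrative sketch of compiling with automatic vectorization disabled (the driver and file names are assumptions):

  Linux:   icpx -O2 -no-vec -c kernels.cpp
  Windows: icx /O2 /Qvec- /c kernels.cpp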

Avoid References to Compiler-specific Libraries

While compiler-specific libraries are intended to improve the performance of your application, they increase the size of your binaries.

Certain compiler options can reduce code size by avoiding references to these libraries.

  • Advantage: The compiler will not assume the presence of compiler-specific libraries. It will generate only calls that appear in the source code.
  • Disadvantage: This method may sacrifice performance if the library code would have been used in hotspots. Also, because the compiler cannot assume the presence of any libraries, some optimizations are suppressed.

Use Options:

Linux

ffreestanding

Windows

Qfreestanding-

NOTE:
This Windows option is not available for SYCL.

Additional information:

  • This option implies option fno-builtin. You can override that default by explicitly specifying option fbuiltin.

  • Depending on code characteristics, this option can sometimes increase binary size.
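
An illustrative sketch, assuming the icpx/icx drivers and an arbitrary source file name:

  Linux:   icpx -O2 -ffreestanding -c support.cpp
  Windows: icx /O2 /Qfreestanding- /c support.cpp    (not available for SYCL, as noted above)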

Use Interprocedural Optimization

Using interprocedural optimization (IPO) may reduce code size. It enables dead code elimination and suppresses generation of code for functions that are always inlined or that are proven never to be called during execution.

  • Advantage: Depending on the code characteristics, this optimization can reduce executable size and improve performance.
  • Disadvantage: Binary size can increase, depending on the code and application.

Use Options:

Linux

ipo

Windows

Qipo

NOTE:
This method is not recommended if you plan to ship object files as part of a final product.
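
An illustrative sketch of building a whole program with IPO enabled (the icpx/icx driver names and file names are assumptions):

  Linux:   icpx -ipo main.cpp util.cpp -o app
  Windows: icx /Qipo main.cpp util.cpp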