Package riscv

import "cmd/internal/obj/riscv"
Overview

Package riscv implements the riscv64 assembler.

Register naming

The integer registers are named X0 through to X31. However, X4 must be accessed through its RISC-V ABI name, TP, and X27, which holds a pointer to the goroutine structure, must be referred to as g. Additionally, when building in shared mode, X3 is unavailable and must be accessed via its RISC-V ABI name, GP.

The floating-point registers are named F0 through to F31.

The vector registers are named V0 through to V31.

Both integer and floating-point registers can be referred to by their RISC-V ABI names, e.g., A0 or FT0, with the exception that X27 cannot be referred to by its RISC-V ABI name, S11. It must be referred to as g.

Some of the integer registers are used by the Go runtime and assembler: X26 is the closure pointer, X27 points to the goroutine structure and X31 is a temporary register used by the Go assembler. Use of X31 should be avoided in hand-written assembly code as its value may be altered by the instruction sequences emitted by the assembler.

Instruction naming

Many RISC-V instructions contain one or more suffixes in their names. In the RISC-V ISA Manual these suffixes are separated from each other, and from the instruction mnemonic, with a dot ('.'). In the Go assembler the separators are omitted and the suffixes are written in upper case.

Example:

FMVWX           <=>     fmv.w.x

Rounding modes

The Go toolchain does not set the FCSR register and requires the desired rounding mode to be explicitly encoded within floating-point instructions. The syntax the Go assembler uses to specify rounding modes differs from the syntax in the RISC-V specifications: in the RISC-V ISA Manual the rounding mode is given as an extra operand at the end of an assembly language instruction, while in the Go assembler the rounding modes are converted to upper case and follow the instruction mnemonic, from which they are separated by a dot ('.').

Example:

FCVTLUS.RNE F0, X5      <=>     fcvt.lu.s x5, f0, rne

RTZ is assumed if the rounding mode is omitted.

RISC-V extensions

By default the Go compiler targets the rva20u64 profile. This profile mandates all the general RISC-V instructions, allowing Go to use integer, multiplication, division, floating-point and atomic instructions without having to perform compile time or runtime checks to verify that their use is appropriate for the target hardware. All widely available riscv64 devices support at least rva20u64. The Go toolchain can be instructed to target later RISC-V profiles, including rva22u64 and rva23u64, via the GORISCV64 environment variable. Instructions provided by newer profiles typically cannot be used in hand-written assembly code without compile time guards (or runtime checks) that ensure the target hardware supports them.

The file asm_riscv64.h defines macros for each RISC-V extension that is enabled by setting the GORISCV64 environment variable to a value other than rva20u64. For example, if GORISCV64=rva22u64 the macros hasZba, hasZbb and hasZbs will be defined. If GORISCV64=rva23u64 hasV will be defined in addition to hasZba, hasZbb and hasZbs. These macros can be used to determine whether it's safe to use an instruction in hand-written assembly.
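As an illustration of this pattern, a hand-written assembly function might guard a Zbb instruction behind one of these macros and fall back to a portable sequence otherwise. The following is a sketch only, not code from the package: the function popcnt and its fallback loop are hypothetical, and CPOP is chosen because, unlike ANDN, it is not synthesized by the assembler for rva20u64.

```asm
#include "textflag.h"
#include "asm_riscv64.h"

// func popcnt(x uint64) uint64
TEXT ·popcnt(SB),NOSPLIT,$0-16
	MOV	x+0(FP), X10
#ifdef hasZbb
	// Zbb is guaranteed by the target profile.
	CPOP	X10, X11
#else
	// rva20u64 fallback: count the set bits one at a time.
	MOV	$0, X11
loop:
	BEQZ	X10, done
	ANDI	$1, X10, X12
	ADD	X12, X11, X11
	SRLI	$1, X10, X10
	JMP	loop
done:
#endif
	MOV	X11, ret+8(FP)
	RET
```

Built with GORISCV64=rva22u64 or later, the hasZbb branch is assembled; otherwise the fallback loop is used.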

It is not always necessary to include asm_riscv64.h and use #ifdefs in your code to safely take advantage of instructions present in the rva22u64 profile. In some cases the assembler can generate rva20u64 compatible code even when an rva22u64 instruction is used in an assembly source file: when GORISCV64=rva20u64 the assembler will synthesize certain rva22u64 instructions, e.g., ANDN, using multiple rva20u64 instructions. Such instructions can be freely used in assembly code without checking whether the target profile supports them. When building a source file containing the ANDN instruction with GORISCV64=rva22u64 the assembler will emit the Zbb ANDN instruction directly; when building the same source file with GORISCV64=rva20u64 it will emit multiple rva20u64 instructions that synthesize ANDN.

The assembler will also use rva22u64 instructions to implement the zero- and sign-extension instructions, e.g., MOVB and MOVHU, when GORISCV64=rva22u64 or greater.

The instructions not implemented in the default profile (rva20u64) that can be safely used in assembly code without compile time checks are:

  • ANDN
  • MAX
  • MAXU
  • MIN
  • MINU
  • MOVB
  • MOVH
  • MOVHU
  • MOVWU
  • ORN
  • ROL
  • ROLW
  • ROR
  • RORI
  • RORIW
  • RORW
  • XNOR

Operand ordering

The ordering used for instruction operands in the Go assembler differs from the ordering defined in the RISC-V ISA Manual.

1. R-Type instructions

R-Type instructions are written in the reverse order to that given in the RISC-V ISA Manual, with the register order being rs2, rs1, rd.

Examples:

ADD X10, X11, X12       <=>     add x12, x11, x10
FADDD F10, F11, F12     <=>     fadd.d f12, f11, f10

2. I-Type arithmetic instructions

I-Type arithmetic instructions (not loads, fences, ebreak, ecall) use the same ordering as the R-Type instructions, typically imm12, rs1, rd.

Examples:

ADDI $1, X11, X12       <=>     addi x12, x11, 1
SLTI $1, X11, X12       <=>     slti x12, x11, 1

3. Loads and Stores

Load and store instructions are written with the source operand (whether it be a register or a memory address) first, followed by the destination operand.

Examples:

MOV 16(X2), X10         <=>     ld x10, 16(x2)
MOV X10, (X2)           <=>     sd x10, 0(x2)

4. Branch instructions

The branch instructions use the same operand ordering as is given in the RISC-V ISA Manual, e.g., rs1, rs2, label.

Example:

BLT X12, X23, loop1     <=>     blt x12, x23, loop1

BLT X12, X23, label will jump to label if X12 < X23. Note this is not the same ordering as is used for the SLT instructions.

5. FMA instructions

The Go assembler uses a different ordering for the RISC-V FMA operands from the ordering given in the RISC-V ISA Manual. The operands are rotated one place to the left, so that the destination operand comes last.

Example:

FMADDS  F1, F2, F3, F4  <=>     fmadd.s f4, f1, f2, f3

6. AMO instructions

The ordering used for the AMO operations is rs2, rs1, rd, i.e., the operands as specified in the RISC-V ISA Manual are rotated one place to the left.

Example:

AMOSWAPW X5, (X6), X7   <=>     amoswap.w x7, x5, (x6)

7. Vector instructions

The VSETVLI instruction uses the same symbolic names as the RISC-V ISA Manual to represent the components of vtype, with the exception that they are written in upper case. The ordering of the operands in the Go assembler differs from the RISC-V ISA Manual in that the operands are rotated one place to the left so that the destination register, the register that holds the new vl, is the last operand.

Example:

VSETVLI X10, E8, M1, TU, MU, X12        <=>     vsetvli x12, x10, e8, m1, tu, mu

Vector load and store instructions follow the pattern set by scalar loads and stores, i.e., the source is always the first operand and the destination the last. However, the ordering of the operands of these instructions is complicated by the optional mask register and, in some cases, the use of an additional stride or index register. In the Go assembler the index and stride registers appear as the second operand in indexed or strided loads and stores, while the mask register, if present, is always the penultimate operand.

Examples:

VLE8V (X10), V3                 <=>     vle8.v  v3, (x10)
VSE8V V3, (X10)                 <=>     vse8.v  v3, (x10)
VLE8V (X10), V0, V3             <=>     vle8.v  v3, (x10), v0.t
VSE8V V3, V0, (X10)             <=>     vse8.v  v3, (x10), v0.t
VLSE8V (X10), X11, V3           <=>     vlse8.v v3, (x10), x11
VSSE8V V3, X11, (X10)           <=>     vsse8.v v3, (x10), x11
VLSE8V (X10), X11, V0, V3       <=>     vlse8.v v3, (x10), x11, v0.t
VSSE8V V3, X11, V0, (X10)       <=>     vsse8.v v3, (x10), x11, v0.t
VLUXEI8V (X10), V2, V3          <=>     vluxei8.v v3, (x10), v2
VSUXEI8V V3, V2, (X10)          <=>     vsuxei8.v v3, (x10), v2
VLUXEI8V (X10), V2, V0, V3      <=>     vluxei8.v v3, (x10), v2, v0.t
VSUXEI8V V3, V2, V0, (X10)      <=>     vsuxei8.v v3, (x10), v2, v0.t
VL1RE8V (X10), V3               <=>     vl1re8.v v3, (x10)
VS1RV V3, (X11)                 <=>     vs1r.v  v3, (x11)

The ordering of operands for two and three argument vector arithmetic instructions is reversed in the Go assembler.

Examples:

VMVVV V2, V3                    <=> vmv.v.v v3, v2
VADDVV V1, V2, V3               <=> vadd.vv v3, v2, v1
VADDVX X10, V2, V3              <=> vadd.vx v3, v2, x10
VMADCVI $15, V2, V3             <=> vmadc.vi v3, v2, 15

The mask register, when specified, is always the penultimate operand in a vector arithmetic instruction, appearing before the destination register.

Examples:

VANDVV V1, V2, V0, V3           <=> vand.vv v3, v2, v1, v0.t

Ternary instructions

The Go assembler allows the second operand to be omitted from most ternary instructions if it matches the third (destination) operand.

Examples:

ADD X10, X12, X12       <=>     ADD X10, X12
ANDI $3, X12, X12       <=>     ANDI $3, X12

The use of this abbreviated syntax is encouraged.

Ordering of atomic instructions

It is not possible to specify the ordering bits in the FENCE, LR, SC or AMO instructions. The FENCE instruction is always emitted as a full fence; the acquire and release bits are always set for the AMO instructions; the acquire bit is always set for the LR instructions; and the release bit is always set for the SC instructions.

Immediate operands

In many cases, where an R-Type instruction has a corresponding I-Type instruction, the R-Type mnemonic can be used in place of the I-Type mnemonic. The assembler assumes that the immediate form of the instruction was intended when the first operand is given as an immediate value rather than a register.

Example:

AND $3, X12, X13        <=>     ANDI $3, X12, X13

Integer constant materialization

The MOV instruction can be used to set a register to the value of any 64 bit constant literal. The way this is achieved by the assembler varies depending on the value of the constant. Where possible the assembler will synthesize the constant using one or more RISC-V arithmetic instructions. If it is unable to easily materialize the constant it will load the 64 bit literal from memory.

A 32 bit constant literal can be specified as an argument to ADDI, ANDI, ORI and XORI. If the specified literal does not fit into 12 bits the assembler will generate extra instructions to synthesize it.

Integer constants provided as operands to all other instructions must fit into the number of bits allowed by the instructions' encodings for immediate values. Otherwise, an error will be generated.
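To illustrate the above, the following sketch shows constants of increasing size being materialized. The comments describe only the general behaviour documented here; the exact instruction sequences the assembler chooses are internal and may differ between toolchain versions.

```asm
MOV	$1, X10				// fits in 12 bits; a single instruction suffices
MOV	$0x123456, X10			// synthesized from multiple arithmetic instructions
MOV	$0x123456789abcdef0, X10	// may be loaded from memory
ADDI	$0x12345, X10, X11		// immediate exceeds 12 bits; extra instructions are generated
```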

Floating point constant materialization

The MOVF and MOVD instructions can be used to set a register to the value of any 32 bit or 64 bit floating point constant literal, respectively. Unless the constant literal is 0.0, MOVF and MOVD will be encoded as FLW and FLD instructions that load the constant from a location within the program's binary.

Compressed instructions

The Go assembler converts 32 bit RISC-V instructions to compressed instructions when generating machine code. This conversion happens automatically without the need for any direct involvement from the programmer, although judicious choice of registers can improve the compression rate for certain instructions (see the RISC-V ISA Manual for more details). This behaviour is enabled by default for all of the supported RISC-V profiles, i.e., it is not affected by the value of the GORISCV64 environment variable.

The use of compressed instructions can be disabled via a debug flag, compressinstructions:

  • Use -gcflags=all=-d=compressinstructions=0 to disable compressed instructions in Go code.
  • Use -asmflags=all=-d=compressinstructions=0 to disable compressed instructions in assembly code.

To completely disable automatic instruction compression in a Go binary both options must be specified.
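For example, assuming a buildable package in the current directory, both flags can be combined in a single command to disable compression everywhere:

```
go build -gcflags=all=-d=compressinstructions=0 -asmflags=all=-d=compressinstructions=0 .
```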

The assembler also permits the use of compressed instructions in hand-coded assembly language, but this should generally be avoided. Note that the compressinstructions flag only prevents the automatic compression of 32 bit instructions; it has no effect on compressed instructions that are hand-coded directly into an assembly file.

Code generated by ./parse.py -go rv64_a rv64_c rv64_d rv64_f rv64_i rv64_m rv64_q rv64_zba rv64_zbb rv64_zbs rv_a rv_c rv_c_d rv_d rv_f rv_i rv_m rv_q rv_s rv_system rv_v rv_zba rv_zbb rv_zbs rv_zicond rv_zicsr; DO NOT EDIT.

Constants

const (
    // Base register numberings.
    REG_X0 = obj.RBaseRISCV + iota
    REG_X1
    REG_X2
    REG_X3
    REG_X4
    REG_X5
    REG_X6
    REG_X7
    REG_X8
    REG_X9
    REG_X10
    REG_X11
    REG_X12
    REG_X13
    REG_X14
    REG_X15
    REG_X16
    REG_X17
    REG_X18
    REG_X19
    REG_X20
    REG_X21
    REG_X22
    REG_X23
    REG_X24
    REG_X25
    REG_X26
    REG_X27
    REG_X28
    REG_X29
    REG_X30
    REG_X31

    // Floating Point register numberings.
    REG_F0
    REG_F1
    REG_F2
    REG_F3
    REG_F4
    REG_F5
    REG_F6
    REG_F7
    REG_F8
    REG_F9
    REG_F10
    REG_F11
    REG_F12
    REG_F13
    REG_F14
    REG_F15
    REG_F16
    REG_F17
    REG_F18
    REG_F19
    REG_F20
    REG_F21
    REG_F22
    REG_F23
    REG_F24
    REG_F25
    REG_F26
    REG_F27
    REG_F28
    REG_F29
    REG_F30
    REG_F31

    // Vector register numberings.
    REG_V0
    REG_V1
    REG_V2
    REG_V3
    REG_V4
    REG_V5
    REG_V6
    REG_V7
    REG_V8
    REG_V9
    REG_V10
    REG_V11
    REG_V12
    REG_V13
    REG_V14
    REG_V15
    REG_V16
    REG_V17
    REG_V18
    REG_V19
    REG_V20
    REG_V21
    REG_V22
    REG_V23
    REG_V24
    REG_V25
    REG_V26
    REG_V27
    REG_V28
    REG_V29
    REG_V30
    REG_V31

    // This marks the end of the register numbering.
    REG_END

    // General registers reassigned to ABI names.
    REG_ZERO = REG_X0
    REG_RA   = REG_X1 // aka REG_LR
    REG_SP   = REG_X2
    REG_GP   = REG_X3 // aka REG_SB
    REG_TP   = REG_X4
    REG_T0   = REG_X5
    REG_T1   = REG_X6
    REG_T2   = REG_X7
    REG_S0   = REG_X8
    REG_S1   = REG_X9
    REG_A0   = REG_X10
    REG_A1   = REG_X11
    REG_A2   = REG_X12
    REG_A3   = REG_X13
    REG_A4   = REG_X14
    REG_A5   = REG_X15
    REG_A6   = REG_X16
    REG_A7   = REG_X17
    REG_S2   = REG_X18
    REG_S3   = REG_X19
    REG_S4   = REG_X20
    REG_S5   = REG_X21
    REG_S6   = REG_X22
    REG_S7   = REG_X23
    REG_S8   = REG_X24
    REG_S9   = REG_X25
    REG_S10  = REG_X26 // aka REG_CTXT
    REG_S11  = REG_X27 // aka REG_G
    REG_T3   = REG_X28
    REG_T4   = REG_X29
    REG_T5   = REG_X30
    REG_T6   = REG_X31 // aka REG_TMP

    // Go runtime register names.
    REG_CTXT = REG_S10 // Context for closures.
    REG_G    = REG_S11 // G pointer.
    REG_LR   = REG_RA  // Link register.
    REG_TMP  = REG_T6  // Reserved for assembler use.

    // ABI names for floating point registers.
    REG_FT0  = REG_F0
    REG_FT1  = REG_F1
    REG_FT2  = REG_F2
    REG_FT3  = REG_F3
    REG_FT4  = REG_F4
    REG_FT5  = REG_F5
    REG_FT6  = REG_F6
    REG_FT7  = REG_F7
    REG_FS0  = REG_F8
    REG_FS1  = REG_F9
    REG_FA0  = REG_F10
    REG_FA1  = REG_F11
    REG_FA2  = REG_F12
    REG_FA3  = REG_F13
    REG_FA4  = REG_F14
    REG_FA5  = REG_F15
    REG_FA6  = REG_F16
    REG_FA7  = REG_F17
    REG_FS2  = REG_F18
    REG_FS3  = REG_F19
    REG_FS4  = REG_F20
    REG_FS5  = REG_F21
    REG_FS6  = REG_F22
    REG_FS7  = REG_F23
    REG_FS8  = REG_F24
    REG_FS9  = REG_F25
    REG_FS10 = REG_F26
    REG_FS11 = REG_F27
    REG_FT8  = REG_F28
    REG_FT9  = REG_F29
    REG_FT10 = REG_F30
    REG_FT11 = REG_F31

    // Names generated by the SSA compiler.
    REGSP = REG_SP
    REGG  = REG_G
)

Prog.Mark flags.

const (
    // USES_REG_TMP indicates that a machine instruction generated from the
    // corresponding *obj.Prog uses the temporary register.
    USES_REG_TMP = 1 << iota

    // NEED_JAL_RELOC is set on JAL instructions to indicate that a
    // R_RISCV_JAL relocation is needed.
    NEED_JAL_RELOC

    // NEED_CALL_RELOC is set on an AUIPC instruction to indicate that it
    // is the first instruction in an AUIPC + JAL pair that needs a
    // R_RISCV_CALL relocation.
    NEED_CALL_RELOC

    // NEED_PCREL_ITYPE_RELOC is set on AUIPC instructions to indicate that
    // it is the first instruction in an AUIPC + I-type pair that needs a
    // R_RISCV_PCREL_ITYPE relocation.
    NEED_PCREL_ITYPE_RELOC

    // NEED_PCREL_STYPE_RELOC is set on AUIPC instructions to indicate that
    // it is the first instruction in an AUIPC + S-type pair that needs a
    // R_RISCV_PCREL_STYPE relocation.
    NEED_PCREL_STYPE_RELOC

    // NEED_GOT_PCREL_ITYPE_RELOC is set on AUIPC instructions to indicate that
    // it is the first instruction in an AUIPC + I-type pair that needs a
    // R_RISCV_GOT_PCREL_ITYPE relocation.
    NEED_GOT_PCREL_ITYPE_RELOC
)

RISC-V mnemonics, as defined in the "opcodes" and "opcodes-pseudo" files at https://github.com/riscv/riscv-opcodes.

As well as some pseudo-mnemonics (e.g. MOV) used only in the assembler.

See also "The RISC-V Instruction Set Manual" at https://riscv.org/technical/specifications/.

If you modify this table, you MUST run 'go generate' to regenerate anames.go!

const (

    // 2.4: Integer Computational Instructions
    AADDI = obj.ABaseRISCV + obj.A_ARCHSPECIFIC + iota
    ASLTI
    ASLTIU
    AANDI
    AORI
    AXORI
    ASLLI
    ASRLI
    ASRAI
    ALUI
    AAUIPC
    AADD
    ASLT
    ASLTU
    AAND
    AOR
    AXOR
    ASLL
    ASRL
    ASUB
    ASRA

    // 2.5: Control Transfer Instructions
    AJAL
    AJALR
    ABEQ
    ABNE
    ABLT
    ABLTU
    ABGE
    ABGEU

    // 2.6: Load and Store Instructions
    ALW
    ALWU
    ALH
    ALHU
    ALB
    ALBU
    ASW
    ASH
    ASB

    // 2.7: Memory Ordering Instructions
    AFENCE

    // 4.2: Integer Computational Instructions (RV64I)
    AADDIW
    ASLLIW
    ASRLIW
    ASRAIW
    AADDW
    ASLLW
    ASRLW
    ASUBW
    ASRAW

    // 4.3: Load and Store Instructions (RV64I)
    ALD
    ASD

    // 7.1: CSR Instructions (Zicsr)
    ACSRRW
    ACSRRS
    ACSRRC
    ACSRRWI
    ACSRRSI
    ACSRRCI

    // 12.3: Integer Conditional Operations (Zicond)
    ACZEROEQZ
    ACZERONEZ

    // 13.1: Multiplication Operations
    AMUL
    AMULH
    AMULHU
    AMULHSU
    AMULW

    // 13.2: Division Operations
    ADIV
    ADIVU
    AREM
    AREMU
    ADIVW
    ADIVUW
    AREMW
    AREMUW

    // 14.2: Load-Reserved/Store-Conditional Instructions (Zalrsc)
    ALRD
    ASCD
    ALRW
    ASCW

    // 14.4: Atomic Memory Operations (Zaamo)
    AAMOSWAPD
    AAMOADDD
    AAMOANDD
    AAMOORD
    AAMOXORD
    AAMOMAXD
    AAMOMAXUD
    AAMOMIND
    AAMOMINUD
    AAMOSWAPW
    AAMOADDW
    AAMOANDW
    AAMOORW
    AAMOXORW
    AAMOMAXW
    AAMOMAXUW
    AAMOMINW
    AAMOMINUW

    // 20.5: Single-Precision Load and Store Instructions
    AFLW
    AFSW

    // 20.6: Single-Precision Floating-Point Computational Instructions
    AFADDS
    AFSUBS
    AFMULS
    AFDIVS
    AFMINS
    AFMAXS
    AFSQRTS
    AFMADDS
    AFMSUBS
    AFNMADDS
    AFNMSUBS

    // 20.7: Single-Precision Floating-Point Conversion and Move Instructions
    AFCVTWS
    AFCVTLS
    AFCVTSW
    AFCVTSL
    AFCVTWUS
    AFCVTLUS
    AFCVTSWU
    AFCVTSLU
    AFSGNJS
    AFSGNJNS
    AFSGNJXS
    AFMVXS
    AFMVSX
    AFMVXW
    AFMVWX

    // 20.8: Single-Precision Floating-Point Compare Instructions
    AFEQS
    AFLTS
    AFLES

    // 20.9: Single-Precision Floating-Point Classify Instruction
    AFCLASSS

    // 21.3: Double-Precision Load and Store Instructions
    AFLD
    AFSD

    // 21.4: Double-Precision Floating-Point Computational Instructions
    AFADDD
    AFSUBD
    AFMULD
    AFDIVD
    AFMIND
    AFMAXD
    AFSQRTD
    AFMADDD
    AFMSUBD
    AFNMADDD
    AFNMSUBD

    // 21.5: Double-Precision Floating-Point Conversion and Move Instructions
    AFCVTWD
    AFCVTLD
    AFCVTDW
    AFCVTDL
    AFCVTWUD
    AFCVTLUD
    AFCVTDWU
    AFCVTDLU
    AFCVTSD
    AFCVTDS
    AFSGNJD
    AFSGNJND
    AFSGNJXD
    AFMVXD
    AFMVDX

    // 21.6: Double-Precision Floating-Point Compare Instructions
    AFEQD
    AFLTD
    AFLED

    // 21.7: Double-Precision Floating-Point Classify Instruction
    AFCLASSD

    // 22.1 Quad-Precision Load and Store Instructions
    AFLQ
    AFSQ

    // 22.2: Quad-Precision Computational Instructions
    AFADDQ
    AFSUBQ
    AFMULQ
    AFDIVQ
    AFMINQ
    AFMAXQ
    AFSQRTQ
    AFMADDQ
    AFMSUBQ
    AFNMADDQ
    AFNMSUBQ

    // 22.3: Quad-Precision Convert and Move Instructions
    AFCVTWQ
    AFCVTLQ
    AFCVTSQ
    AFCVTDQ
    AFCVTQW
    AFCVTQL
    AFCVTQS
    AFCVTQD
    AFCVTWUQ
    AFCVTLUQ
    AFCVTQWU
    AFCVTQLU
    AFSGNJQ
    AFSGNJNQ
    AFSGNJXQ

    // 22.4: Quad-Precision Floating-Point Compare Instructions
    AFEQQ
    AFLEQ
    AFLTQ

    // 22.5: Quad-Precision Floating-Point Classify Instruction
    AFCLASSQ

    // 26.3.1: Compressed Stack-Pointer-Based Loads and Stores
    ACLWSP
    ACLDSP
    ACFLDSP
    ACSWSP
    ACSDSP
    ACFSDSP

    // 26.3.2: Compressed Register-Based Loads and Stores
    ACLW
    ACLD
    ACFLD
    ACSW
    ACSD
    ACFSD

    // 26.4: Compressed Control Transfer Instructions
    ACJ
    ACJR
    ACJALR
    ACBEQZ
    ACBNEZ

    // 26.5.1: Compressed Integer Constant-Generation Instructions
    ACLI
    ACLUI
    ACADDI
    ACADDIW
    ACADDI16SP
    ACADDI4SPN
    ACSLLI
    ACSRLI
    ACSRAI
    ACANDI

    // 26.5.3: Compressed Integer Register-Register Operations
    ACMV
    ACADD
    ACAND
    ACOR
    ACXOR
    ACSUB
    ACADDW
    ACSUBW

    // 26.5.5: Compressed NOP Instruction
    ACNOP

    // 26.5.6: Compressed Breakpoint Instruction
    ACEBREAK

    // 28.4.1: Address Generation Instructions (Zba)
    AADDUW
    ASH1ADD
    ASH1ADDUW
    ASH2ADD
    ASH2ADDUW
    ASH3ADD
    ASH3ADDUW
    ASLLIUW

    // 28.4.2: Basic Bit Manipulation (Zbb)
    AANDN
    AORN
    AXNOR
    ACLZ
    ACLZW
    ACTZ
    ACTZW
    ACPOP
    ACPOPW
    AMAX
    AMAXU
    AMIN
    AMINU
    ASEXTB
    ASEXTH
    AZEXTH

    // 28.4.3: Bitwise Rotation (Zbb)
    AROL
    AROLW
    AROR
    ARORI
    ARORIW
    ARORW
    AORCB
    AREV8

    // 28.4.4: Single-bit Instructions (Zbs)
    ABCLR
    ABCLRI
    ABEXT
    ABEXTI
    ABINV
    ABINVI
    ABSET
    ABSETI

    // 31.6: Configuration-Setting Instructions
    AVSETVLI
    AVSETIVLI
    AVSETVL

    // 31.7.4: Vector Unit-Stride Instructions
    AVLE8V
    AVLE16V
    AVLE32V
    AVLE64V
    AVSE8V
    AVSE16V
    AVSE32V
    AVSE64V
    AVLMV
    AVSMV

    // 31.7.5: Vector Strided Instructions
    AVLSE8V
    AVLSE16V
    AVLSE32V
    AVLSE64V
    AVSSE8V
    AVSSE16V
    AVSSE32V
    AVSSE64V

    // 31.7.6: Vector Indexed Instructions
    AVLUXEI8V
    AVLUXEI16V
    AVLUXEI32V
    AVLUXEI64V
    AVLOXEI8V
    AVLOXEI16V
    AVLOXEI32V
    AVLOXEI64V
    AVSUXEI8V
    AVSUXEI16V
    AVSUXEI32V
    AVSUXEI64V
    AVSOXEI8V
    AVSOXEI16V
    AVSOXEI32V
    AVSOXEI64V

    // 31.7.7: Unit-stride Fault-Only-First Loads
    AVLE8FFV
    AVLE16FFV
    AVLE32FFV
    AVLE64FFV

    // 31.7.8.1. Vector Unit-Stride Segment Loads and Stores
    AVLSEG2E8V
    AVLSEG3E8V
    AVLSEG4E8V
    AVLSEG5E8V
    AVLSEG6E8V
    AVLSEG7E8V
    AVLSEG8E8V
    AVLSEG2E16V
    AVLSEG3E16V
    AVLSEG4E16V
    AVLSEG5E16V
    AVLSEG6E16V
    AVLSEG7E16V
    AVLSEG8E16V
    AVLSEG2E32V
    AVLSEG3E32V
    AVLSEG4E32V
    AVLSEG5E32V
    AVLSEG6E32V
    AVLSEG7E32V
    AVLSEG8E32V
    AVLSEG2E64V
    AVLSEG3E64V
    AVLSEG4E64V
    AVLSEG5E64V
    AVLSEG6E64V
    AVLSEG7E64V
    AVLSEG8E64V

    AVSSEG2E8V
    AVSSEG3E8V
    AVSSEG4E8V
    AVSSEG5E8V
    AVSSEG6E8V
    AVSSEG7E8V
    AVSSEG8E8V
    AVSSEG2E16V
    AVSSEG3E16V
    AVSSEG4E16V
    AVSSEG5E16V
    AVSSEG6E16V
    AVSSEG7E16V
    AVSSEG8E16V
    AVSSEG2E32V
    AVSSEG3E32V
    AVSSEG4E32V
    AVSSEG5E32V
    AVSSEG6E32V
    AVSSEG7E32V
    AVSSEG8E32V
    AVSSEG2E64V
    AVSSEG3E64V
    AVSSEG4E64V
    AVSSEG5E64V
    AVSSEG6E64V
    AVSSEG7E64V
    AVSSEG8E64V

    AVLSEG2E8FFV
    AVLSEG3E8FFV
    AVLSEG4E8FFV
    AVLSEG5E8FFV
    AVLSEG6E8FFV
    AVLSEG7E8FFV
    AVLSEG8E8FFV
    AVLSEG2E16FFV
    AVLSEG3E16FFV
    AVLSEG4E16FFV
    AVLSEG5E16FFV
    AVLSEG6E16FFV
    AVLSEG7E16FFV
    AVLSEG8E16FFV
    AVLSEG2E32FFV
    AVLSEG3E32FFV
    AVLSEG4E32FFV
    AVLSEG5E32FFV
    AVLSEG6E32FFV
    AVLSEG7E32FFV
    AVLSEG8E32FFV
    AVLSEG2E64FFV
    AVLSEG3E64FFV
    AVLSEG4E64FFV
    AVLSEG5E64FFV
    AVLSEG6E64FFV
    AVLSEG7E64FFV
    AVLSEG8E64FFV

    // 31.7.8.2. Vector Strided Segment Loads and Stores
    AVLSSEG2E8V
    AVLSSEG3E8V
    AVLSSEG4E8V
    AVLSSEG5E8V
    AVLSSEG6E8V
    AVLSSEG7E8V
    AVLSSEG8E8V
    AVLSSEG2E16V
    AVLSSEG3E16V
    AVLSSEG4E16V
    AVLSSEG5E16V
    AVLSSEG6E16V
    AVLSSEG7E16V
    AVLSSEG8E16V
    AVLSSEG2E32V
    AVLSSEG3E32V
    AVLSSEG4E32V
    AVLSSEG5E32V
    AVLSSEG6E32V
    AVLSSEG7E32V
    AVLSSEG8E32V
    AVLSSEG2E64V
    AVLSSEG3E64V
    AVLSSEG4E64V
    AVLSSEG5E64V
    AVLSSEG6E64V
    AVLSSEG7E64V
    AVLSSEG8E64V

    AVSSSEG2E8V
    AVSSSEG3E8V
    AVSSSEG4E8V
    AVSSSEG5E8V
    AVSSSEG6E8V
    AVSSSEG7E8V
    AVSSSEG8E8V
    AVSSSEG2E16V
    AVSSSEG3E16V
    AVSSSEG4E16V
    AVSSSEG5E16V
    AVSSSEG6E16V
    AVSSSEG7E16V
    AVSSSEG8E16V
    AVSSSEG2E32V
    AVSSSEG3E32V
    AVSSSEG4E32V
    AVSSSEG5E32V
    AVSSSEG6E32V
    AVSSSEG7E32V
    AVSSSEG8E32V
    AVSSSEG2E64V
    AVSSSEG3E64V
    AVSSSEG4E64V
    AVSSSEG5E64V
    AVSSSEG6E64V
    AVSSSEG7E64V
    AVSSSEG8E64V

    // 31.7.8.3. Vector Indexed Segment Loads and Stores
    AVLOXSEG2EI8V
    AVLOXSEG3EI8V
    AVLOXSEG4EI8V
    AVLOXSEG5EI8V
    AVLOXSEG6EI8V
    AVLOXSEG7EI8V
    AVLOXSEG8EI8V
    AVLOXSEG2EI16V
    AVLOXSEG3EI16V
    AVLOXSEG4EI16V
    AVLOXSEG5EI16V
    AVLOXSEG6EI16V
    AVLOXSEG7EI16V
    AVLOXSEG8EI16V
    AVLOXSEG2EI32V
    AVLOXSEG3EI32V
    AVLOXSEG4EI32V
    AVLOXSEG5EI32V
    AVLOXSEG6EI32V
    AVLOXSEG7EI32V
    AVLOXSEG8EI32V
    AVLOXSEG2EI64V
    AVLOXSEG3EI64V
    AVLOXSEG4EI64V
    AVLOXSEG5EI64V
    AVLOXSEG6EI64V
    AVLOXSEG7EI64V
    AVLOXSEG8EI64V

    AVSOXSEG2EI8V
    AVSOXSEG3EI8V
    AVSOXSEG4EI8V
    AVSOXSEG5EI8V
    AVSOXSEG6EI8V
    AVSOXSEG7EI8V
    AVSOXSEG8EI8V
    AVSOXSEG2EI16V
    AVSOXSEG3EI16V
    AVSOXSEG4EI16V
    AVSOXSEG5EI16V
    AVSOXSEG6EI16V
    AVSOXSEG7EI16V
    AVSOXSEG8EI16V
    AVSOXSEG2EI32V
    AVSOXSEG3EI32V
    AVSOXSEG4EI32V
    AVSOXSEG5EI32V
    AVSOXSEG6EI32V
    AVSOXSEG7EI32V
    AVSOXSEG8EI32V
    AVSOXSEG2EI64V
    AVSOXSEG3EI64V
    AVSOXSEG4EI64V
    AVSOXSEG5EI64V
    AVSOXSEG6EI64V
    AVSOXSEG7EI64V
    AVSOXSEG8EI64V

    AVLUXSEG2EI8V
    AVLUXSEG3EI8V
    AVLUXSEG4EI8V
    AVLUXSEG5EI8V
    AVLUXSEG6EI8V
    AVLUXSEG7EI8V
    AVLUXSEG8EI8V
    AVLUXSEG2EI16V
    AVLUXSEG3EI16V
    AVLUXSEG4EI16V
    AVLUXSEG5EI16V
    AVLUXSEG6EI16V
    AVLUXSEG7EI16V
    AVLUXSEG8EI16V
    AVLUXSEG2EI32V
    AVLUXSEG3EI32V
    AVLUXSEG4EI32V
    AVLUXSEG5EI32V
    AVLUXSEG6EI32V
    AVLUXSEG7EI32V
    AVLUXSEG8EI32V
    AVLUXSEG2EI64V
    AVLUXSEG3EI64V
    AVLUXSEG4EI64V
    AVLUXSEG5EI64V
    AVLUXSEG6EI64V
    AVLUXSEG7EI64V
    AVLUXSEG8EI64V

    AVSUXSEG2EI8V
    AVSUXSEG3EI8V
    AVSUXSEG4EI8V
    AVSUXSEG5EI8V
    AVSUXSEG6EI8V
    AVSUXSEG7EI8V
    AVSUXSEG8EI8V
    AVSUXSEG2EI16V
    AVSUXSEG3EI16V
    AVSUXSEG4EI16V
    AVSUXSEG5EI16V
    AVSUXSEG6EI16V
    AVSUXSEG7EI16V
    AVSUXSEG8EI16V
    AVSUXSEG2EI32V
    AVSUXSEG3EI32V
    AVSUXSEG4EI32V
    AVSUXSEG5EI32V
    AVSUXSEG6EI32V
    AVSUXSEG7EI32V
    AVSUXSEG8EI32V
    AVSUXSEG2EI64V
    AVSUXSEG3EI64V
    AVSUXSEG4EI64V
    AVSUXSEG5EI64V
    AVSUXSEG6EI64V
    AVSUXSEG7EI64V
    AVSUXSEG8EI64V

    // 31.7.9: Vector Load/Store Whole Register Instructions
    AVL1RE8V
    AVL1RE16V
    AVL1RE32V
    AVL1RE64V
    AVL2RE8V
    AVL2RE16V
    AVL2RE32V
    AVL2RE64V
    AVL4RE8V
    AVL4RE16V
    AVL4RE32V
    AVL4RE64V
    AVL8RE8V
    AVL8RE16V
    AVL8RE32V
    AVL8RE64V
    AVS1RV
    AVS2RV
    AVS4RV
    AVS8RV

    // 31.11.1: Vector Single-Width Integer Add and Subtract
    AVADDVV
    AVADDVX
    AVADDVI
    AVSUBVV
    AVSUBVX
    AVRSUBVX
    AVRSUBVI

    // 31.11.2: Vector Widening Integer Add/Subtract
    AVWADDUVV
    AVWADDUVX
    AVWSUBUVV
    AVWSUBUVX
    AVWADDVV
    AVWADDVX
    AVWSUBVV
    AVWSUBVX
    AVWADDUWV
    AVWADDUWX
    AVWSUBUWV
    AVWSUBUWX
    AVWADDWV
    AVWADDWX
    AVWSUBWV
    AVWSUBWX

    // 31.11.3: Vector Integer Extension
    AVZEXTVF2
    AVSEXTVF2
    AVZEXTVF4
    AVSEXTVF4
    AVZEXTVF8
    AVSEXTVF8

    // 31.11.4: Vector Integer Add-with-Carry / Subtract-with-Borrow Instructions
    AVADCVVM
    AVADCVXM
    AVADCVIM
    AVMADCVVM
    AVMADCVXM
    AVMADCVIM
    AVMADCVV
    AVMADCVX
    AVMADCVI
    AVSBCVVM
    AVSBCVXM
    AVMSBCVVM
    AVMSBCVXM
    AVMSBCVV
    AVMSBCVX

    // 31.11.5: Vector Bitwise Logical Instructions
    AVANDVV
    AVANDVX
    AVANDVI
    AVORVV
    AVORVX
    AVORVI
    AVXORVV
    AVXORVX
    AVXORVI

    // 31.11.6: Vector Single-Width Shift Instructions
    AVSLLVV
    AVSLLVX
    AVSLLVI
    AVSRLVV
    AVSRLVX
    AVSRLVI
    AVSRAVV
    AVSRAVX
    AVSRAVI

    // 31.11.7: Vector Narrowing Integer Right Shift Instructions
    AVNSRLWV
    AVNSRLWX
    AVNSRLWI
    AVNSRAWV
    AVNSRAWX
    AVNSRAWI

    // 31.11.8: Vector Integer Compare Instructions
    AVMSEQVV
    AVMSEQVX
    AVMSEQVI
    AVMSNEVV
    AVMSNEVX
    AVMSNEVI
    AVMSLTUVV
    AVMSLTUVX
    AVMSLTVV
    AVMSLTVX
    AVMSLEUVV
    AVMSLEUVX
    AVMSLEUVI
    AVMSLEVV
    AVMSLEVX
    AVMSLEVI
    AVMSGTUVX
    AVMSGTUVI
    AVMSGTVX
    AVMSGTVI

    // 31.11.9: Vector Integer Min/Max Instructions
    AVMINUVV
    AVMINUVX
    AVMINVV
    AVMINVX
    AVMAXUVV
    AVMAXUVX
    AVMAXVV
    AVMAXVX

    // 31.11.10: Vector Single-Width Integer Multiply Instructions
    AVMULVV
    AVMULVX
    AVMULHVV
    AVMULHVX
    AVMULHUVV
    AVMULHUVX
    AVMULHSUVV
    AVMULHSUVX

    // 31.11.11: Vector Integer Divide Instructions
    AVDIVUVV
    AVDIVUVX
    AVDIVVV
    AVDIVVX
    AVREMUVV
    AVREMUVX
    AVREMVV
    AVREMVX

    // 31.11.12: Vector Widening Integer Multiply Instructions
    AVWMULVV
    AVWMULVX
    AVWMULUVV
    AVWMULUVX
    AVWMULSUVV
    AVWMULSUVX

    // 31.11.13: Vector Single-Width Integer Multiply-Add Instructions
    AVMACCVV
    AVMACCVX
    AVNMSACVV
    AVNMSACVX
    AVMADDVV
    AVMADDVX
    AVNMSUBVV
    AVNMSUBVX

    // 31.11.14: Vector Widening Integer Multiply-Add Instructions
    AVWMACCUVV
    AVWMACCUVX
    AVWMACCVV
    AVWMACCVX
    AVWMACCSUVV
    AVWMACCSUVX
    AVWMACCUSVX

    // 31.11.15: Vector Integer Merge Instructions
    AVMERGEVVM
    AVMERGEVXM
    AVMERGEVIM

    // 31.11.16: Vector Integer Move Instructions
    AVMVVV
    AVMVVX
    AVMVVI

    // 31.12.1: Vector Single-Width Saturating Add and Subtract
    AVSADDUVV
    AVSADDUVX
    AVSADDUVI
    AVSADDVV
    AVSADDVX
    AVSADDVI
    AVSSUBUVV
    AVSSUBUVX
    AVSSUBVV
    AVSSUBVX

    // 31.12.2: Vector Single-Width Averaging Add and Subtract
    AVAADDUVV
    AVAADDUVX
    AVAADDVV
    AVAADDVX
    AVASUBUVV
    AVASUBUVX
    AVASUBVV
    AVASUBVX

    // 31.12.3: Vector Single-Width Fractional Multiply with Rounding and Saturation
    AVSMULVV
    AVSMULVX

    // 31.12.4: Vector Single-Width Scaling Shift Instructions
    AVSSRLVV
    AVSSRLVX
    AVSSRLVI
    AVSSRAVV
    AVSSRAVX
    AVSSRAVI

    // 31.12.5: Vector Narrowing Fixed-Point Clip Instructions
    AVNCLIPUWV
    AVNCLIPUWX
    AVNCLIPUWI
    AVNCLIPWV
    AVNCLIPWX
    AVNCLIPWI

    // 31.13.2: Vector Single-Width Floating-Point Add/Subtract Instructions
    AVFADDVV
    AVFADDVF
    AVFSUBVV
    AVFSUBVF
    AVFRSUBVF

    // 31.13.3: Vector Widening Floating-Point Add/Subtract Instructions
    AVFWADDVV
    AVFWADDVF
    AVFWSUBVV
    AVFWSUBVF
    AVFWADDWV
    AVFWADDWF
    AVFWSUBWV
    AVFWSUBWF

    // 31.13.4: Vector Single-Width Floating-Point Multiply/Divide Instructions
    AVFMULVV
    AVFMULVF
    AVFDIVVV
    AVFDIVVF
    AVFRDIVVF

    // 31.13.5: Vector Widening Floating-Point Multiply
    AVFWMULVV
    AVFWMULVF

    // 31.13.6: Vector Single-Width Floating-Point Fused Multiply-Add Instructions
    AVFMACCVV
    AVFMACCVF
    AVFNMACCVV
    AVFNMACCVF
    AVFMSACVV
    AVFMSACVF
    AVFNMSACVV
    AVFNMSACVF
    AVFMADDVV
    AVFMADDVF
    AVFNMADDVV
    AVFNMADDVF
    AVFMSUBVV
    AVFMSUBVF
    AVFNMSUBVV
    AVFNMSUBVF

    // 31.13.7: Vector Widening Floating-Point Fused Multiply-Add Instructions
    AVFWMACCVV
    AVFWMACCVF
    AVFWNMACCVV
    AVFWNMACCVF
    AVFWMSACVV
    AVFWMSACVF
    AVFWNMSACVV
    AVFWNMSACVF

    // 31.13.8: Vector Floating-Point Square-Root Instruction
    AVFSQRTV

    // 31.13.9: Vector Floating-Point Reciprocal Square-Root Estimate Instruction
    AVFRSQRT7V

    // 31.13.10: Vector Floating-Point Reciprocal Estimate Instruction
    AVFREC7V

    // 31.13.11: Vector Floating-Point MIN/MAX Instructions
    AVFMINVV
    AVFMINVF
    AVFMAXVV
    AVFMAXVF

    // 31.13.12: Vector Floating-Point Sign-Injection Instructions
    AVFSGNJVV
    AVFSGNJVF
    AVFSGNJNVV
    AVFSGNJNVF
    AVFSGNJXVV
    AVFSGNJXVF

    // 31.13.13: Vector Floating-Point Compare Instructions
    AVMFEQVV
    AVMFEQVF
    AVMFNEVV
    AVMFNEVF
    AVMFLTVV
    AVMFLTVF
    AVMFLEVV
    AVMFLEVF
    AVMFGTVF
    AVMFGEVF

    // 31.13.14: Vector Floating-Point Classify Instruction
    AVFCLASSV

    // 31.13.15: Vector Floating-Point Merge Instruction
    AVFMERGEVFM

    // 31.13.16: Vector Floating-Point Move Instruction
    AVFMVVF

    // 31.13.17: Single-Width Floating-Point/Integer Type-Convert Instructions
    AVFCVTXUFV
    AVFCVTXFV
    AVFCVTRTZXUFV
    AVFCVTRTZXFV
    AVFCVTFXUV
    AVFCVTFXV

    // 31.13.18: Widening Floating-Point/Integer Type-Convert Instructions
    AVFWCVTXUFV
    AVFWCVTXFV
    AVFWCVTRTZXUFV
    AVFWCVTRTZXFV
    AVFWCVTFXUV
    AVFWCVTFXV
    AVFWCVTFFV

    // 31.13.19: Narrowing Floating-Point/Integer Type-Convert Instructions
    AVFNCVTXUFW
    AVFNCVTXFW
    AVFNCVTRTZXUFW
    AVFNCVTRTZXFW
    AVFNCVTFXUW
    AVFNCVTFXW
    AVFNCVTFFW
    AVFNCVTRODFFW

    // 31.14.1: Vector Single-Width Integer Reduction Instructions
    AVREDSUMVS
    AVREDMAXUVS
    AVREDMAXVS
    AVREDMINUVS
    AVREDMINVS
    AVREDANDVS
    AVREDORVS
    AVREDXORVS

    // 31.14.2: Vector Widening Integer Reduction Instructions
    AVWREDSUMUVS
    AVWREDSUMVS

    // 31.14.3: Vector Single-Width Floating-Point Reduction Instructions
    AVFREDOSUMVS
    AVFREDUSUMVS
    AVFREDMAXVS
    AVFREDMINVS

    // 31.14.4: Vector Widening Floating-Point Reduction Instructions
    AVFWREDOSUMVS
    AVFWREDUSUMVS

    // 31.15: Vector Mask Instructions
    AVMANDMM
    AVMNANDMM
    AVMANDNMM
    AVMXORMM
    AVMORMM
    AVMNORMM
    AVMORNMM
    AVMXNORMM
    AVCPOPM
    AVFIRSTM
    AVMSBFM
    AVMSIFM
    AVMSOFM
    AVIOTAM
    AVIDV

    // 31.16.1: Integer Scalar Move Instructions
    AVMVXS
    AVMVSX

    // 31.16.2: Floating-Point Scalar Move Instructions
    AVFMVFS
    AVFMVSF

    // 31.16.3: Vector Slide Instructions
    AVSLIDEUPVX
    AVSLIDEUPVI
    AVSLIDEDOWNVX
    AVSLIDEDOWNVI
    AVSLIDE1UPVX
    AVFSLIDE1UPVF
    AVSLIDE1DOWNVX
    AVFSLIDE1DOWNVF

    // 31.16.4: Vector Register Gather Instructions
    AVRGATHERVV
    AVRGATHEREI16VV
    AVRGATHERVX
    AVRGATHERVI

    // 31.16.5: Vector Compress Instruction
    AVCOMPRESSVM

    // 31.16.6: Whole Vector Register Move
    AVMV1RV
    AVMV2RV
    AVMV4RV
    AVMV8RV

    // 3.3.1: Environment Call and Breakpoint
    AECALL
    ASCALL
    AEBREAK
    ASBREAK

    // 3.3.2: Trap-Return Instructions
    AMRET
    ASRET
    ADRET

    // 3.3.3: Wait for Interrupt
    AWFI

    // 10.2: Supervisor Memory-Management Fence Instruction
    ASFENCEVMA

    // The escape hatch. Inserts a single 32-bit word.
    AWORD

    // Pseudo-instructions. These are translated by the assembler into other
    // instructions based on their operands.
    ABEQZ
    ABGEZ
    ABGT
    ABGTU
    ABGTZ
    ABLE
    ABLEU
    ABLEZ
    ABLTZ
    ABNEZ
    AFABSD
    AFABSS
    AFNED
    AFNEGD
    AFNEGS
    AFNES
    AMOV
    AMOVB
    AMOVBU
    AMOVD
    AMOVF
    AMOVH
    AMOVHU
    AMOVW
    AMOVWU
    ANEG
    ANEGW
    ANOT
    ARDCYCLE
    ARDINSTRET
    ARDTIME
    ASEQZ
    ASNEZ
    AVFABSV
    AVFNEGV
    AVL1RV
    AVL2RV
    AVL4RV
    AVL8RV
    AVMCLRM
    AVMFGEVV
    AVMFGTVV
    AVMMVM
    AVMNOTM
    AVMSETM
    AVMSGEUVI
    AVMSGEUVV
    AVMSGEVI
    AVMSGEVV
    AVMSGTUVV
    AVMSGTVV
    AVMSLTUVI
    AVMSLTVI
    AVNCVTXXW
    AVNEGV
    AVNOTV
    AVWCVTUXXV
    AVWCVTXXV

    // End marker
    ALAST
)

const (
    RM_RNE uint8 = iota // Round to Nearest, ties to Even
    RM_RTZ              // Round towards Zero
    RM_RDN              // Round Down
    RM_RUP              // Round Up
    RM_RMM              // Round to Nearest, ties to Max Magnitude
)

Instruction encoding masks.

const (
    // BTypeImmMask is a mask including only the immediate portion of
    // B-type instructions.
    BTypeImmMask = 0xfe000f80

    // CBTypeImmMask is a mask including only the immediate portion of
    // CB-type instructions.
    CBTypeImmMask = 0x1c7c

    // CJTypeImmMask is a mask including only the immediate portion of
    // CJ-type instructions.
    CJTypeImmMask = 0x1f7c

    // ITypeImmMask is a mask including only the immediate portion of
    // I-type instructions.
    ITypeImmMask = 0xfff00000

    // JTypeImmMask is a mask including only the immediate portion of
    // J-type instructions.
    JTypeImmMask = 0xfffff000

    // STypeImmMask is a mask including only the immediate portion of
    // S-type instructions.
    STypeImmMask = 0xfe000f80

    // UTypeImmMask is a mask including only the immediate portion of
    // U-type instructions.
    UTypeImmMask = 0xfffff000
)

const NEED_RELOC = NEED_JAL_RELOC | NEED_CALL_RELOC | NEED_PCREL_ITYPE_RELOC |
    NEED_PCREL_STYPE_RELOC | NEED_GOT_PCREL_ITYPE_RELOC

Variables

var Anames = []string{
    obj.A_ARCHSPECIFIC: "ADDI",
    "SLTI",
    "SLTIU",
    "ANDI",
    "ORI",
    "XORI",
    "SLLI",
    "SRLI",
    "SRAI",
    "LUI",
    "AUIPC",
    "ADD",
    "SLT",
    "SLTU",
    "AND",
    "OR",
    "XOR",
    "SLL",
    "SRL",
    "SUB",
    "SRA",
    "JAL",
    "JALR",
    "BEQ",
    "BNE",
    "BLT",
    "BLTU",
    "BGE",
    "BGEU",
    "LW",
    "LWU",
    "LH",
    "LHU",
    "LB",
    "LBU",
    "SW",
    "SH",
    "SB",
    "FENCE",
    "ADDIW",
    "SLLIW",
    "SRLIW",
    "SRAIW",
    "ADDW",
    "SLLW",
    "SRLW",
    "SUBW",
    "SRAW",
    "LD",
    "SD",
    "CSRRW",
    "CSRRS",
    "CSRRC",
    "CSRRWI",
    "CSRRSI",
    "CSRRCI",
    "CZEROEQZ",
    "CZERONEZ",
    "MUL",
    "MULH",
    "MULHU",
    "MULHSU",
    "MULW",
    "DIV",
    "DIVU",
    "REM",
    "REMU",
    "DIVW",
    "DIVUW",
    "REMW",
    "REMUW",
    "LRD",
    "SCD",
    "LRW",
    "SCW",
    "AMOSWAPD",
    "AMOADDD",
    "AMOANDD",
    "AMOORD",
    "AMOXORD",
    "AMOMAXD",
    "AMOMAXUD",
    "AMOMIND",
    "AMOMINUD",
    "AMOSWAPW",
    "AMOADDW",
    "AMOANDW",
    "AMOORW",
    "AMOXORW",
    "AMOMAXW",
    "AMOMAXUW",
    "AMOMINW",
    "AMOMINUW",
    "FLW",
    "FSW",
    "FADDS",
    "FSUBS",
    "FMULS",
    "FDIVS",
    "FMINS",
    "FMAXS",
    "FSQRTS",
    "FMADDS",
    "FMSUBS",
    "FNMADDS",
    "FNMSUBS",
    "FCVTWS",
    "FCVTLS",
    "FCVTSW",
    "FCVTSL",
    "FCVTWUS",
    "FCVTLUS",
    "FCVTSWU",
    "FCVTSLU",
    "FSGNJS",
    "FSGNJNS",
    "FSGNJXS",
    "FMVXS",
    "FMVSX",
    "FMVXW",
    "FMVWX",
    "FEQS",
    "FLTS",
    "FLES",
    "FCLASSS",
    "FLD",
    "FSD",
    "FADDD",
    "FSUBD",
    "FMULD",
    "FDIVD",
    "FMIND",
    "FMAXD",
    "FSQRTD",
    "FMADDD",
    "FMSUBD",
    "FNMADDD",
    "FNMSUBD",
    "FCVTWD",
    "FCVTLD",
    "FCVTDW",
    "FCVTDL",
    "FCVTWUD",
    "FCVTLUD",
    "FCVTDWU",
    "FCVTDLU",
    "FCVTSD",
    "FCVTDS",
    "FSGNJD",
    "FSGNJND",
    "FSGNJXD",
    "FMVXD",
    "FMVDX",
    "FEQD",
    "FLTD",
    "FLED",
    "FCLASSD",
    "FLQ",
    "FSQ",
    "FADDQ",
    "FSUBQ",
    "FMULQ",
    "FDIVQ",
    "FMINQ",
    "FMAXQ",
    "FSQRTQ",
    "FMADDQ",
    "FMSUBQ",
    "FNMADDQ",
    "FNMSUBQ",
    "FCVTWQ",
    "FCVTLQ",
    "FCVTSQ",
    "FCVTDQ",
    "FCVTQW",
    "FCVTQL",
    "FCVTQS",
    "FCVTQD",
    "FCVTWUQ",
    "FCVTLUQ",
    "FCVTQWU",
    "FCVTQLU",
    "FSGNJQ",
    "FSGNJNQ",
    "FSGNJXQ",
    "FEQQ",
    "FLEQ",
    "FLTQ",
    "FCLASSQ",
    "CLWSP",
    "CLDSP",
    "CFLDSP",
    "CSWSP",
    "CSDSP",
    "CFSDSP",
    "CLW",
    "CLD",
    "CFLD",
    "CSW",
    "CSD",
    "CFSD",
    "CJ",
    "CJR",
    "CJALR",
    "CBEQZ",
    "CBNEZ",
    "CLI",
    "CLUI",
    "CADDI",
    "CADDIW",
    "CADDI16SP",
    "CADDI4SPN",
    "CSLLI",
    "CSRLI",
    "CSRAI",
    "CANDI",
    "CMV",
    "CADD",
    "CAND",
    "COR",
    "CXOR",
    "CSUB",
    "CADDW",
    "CSUBW",
    "CNOP",
    "CEBREAK",
    "ADDUW",
    "SH1ADD",
    "SH1ADDUW",
    "SH2ADD",
    "SH2ADDUW",
    "SH3ADD",
    "SH3ADDUW",
    "SLLIUW",
    "ANDN",
    "ORN",
    "XNOR",
    "CLZ",
    "CLZW",
    "CTZ",
    "CTZW",
    "CPOP",
    "CPOPW",
    "MAX",
    "MAXU",
    "MIN",
    "MINU",
    "SEXTB",
    "SEXTH",
    "ZEXTH",
    "ROL",
    "ROLW",
    "ROR",
    "RORI",
    "RORIW",
    "RORW",
    "ORCB",
    "REV8",
    "BCLR",
    "BCLRI",
    "BEXT",
    "BEXTI",
    "BINV",
    "BINVI",
    "BSET",
    "BSETI",
    "VSETVLI",
    "VSETIVLI",
    "VSETVL",
    "VLE8V",
    "VLE16V",
    "VLE32V",
    "VLE64V",
    "VSE8V",
    "VSE16V",
    "VSE32V",
    "VSE64V",
    "VLMV",
    "VSMV",
    "VLSE8V",
    "VLSE16V",
    "VLSE32V",
    "VLSE64V",
    "VSSE8V",
    "VSSE16V",
    "VSSE32V",
    "VSSE64V",
    "VLUXEI8V",
    "VLUXEI16V",
    "VLUXEI32V",
    "VLUXEI64V",
    "VLOXEI8V",
    "VLOXEI16V",
    "VLOXEI32V",
    "VLOXEI64V",
    "VSUXEI8V",
    "VSUXEI16V",
    "VSUXEI32V",
    "VSUXEI64V",
    "VSOXEI8V",
    "VSOXEI16V",
    "VSOXEI32V",
    "VSOXEI64V",
    "VLE8FFV",
    "VLE16FFV",
    "VLE32FFV",
    "VLE64FFV",
    "VLSEG2E8V",
    "VLSEG3E8V",
    "VLSEG4E8V",
    "VLSEG5E8V",
    "VLSEG6E8V",
    "VLSEG7E8V",
    "VLSEG8E8V",
    "VLSEG2E16V",
    "VLSEG3E16V",
    "VLSEG4E16V",
    "VLSEG5E16V",
    "VLSEG6E16V",
    "VLSEG7E16V",
    "VLSEG8E16V",
    "VLSEG2E32V",
    "VLSEG3E32V",
    "VLSEG4E32V",
    "VLSEG5E32V",
    "VLSEG6E32V",
    "VLSEG7E32V",
    "VLSEG8E32V",
    "VLSEG2E64V",
    "VLSEG3E64V",
    "VLSEG4E64V",
    "VLSEG5E64V",
    "VLSEG6E64V",
    "VLSEG7E64V",
    "VLSEG8E64V",
    "VSSEG2E8V",
    "VSSEG3E8V",
    "VSSEG4E8V",
    "VSSEG5E8V",
    "VSSEG6E8V",
    "VSSEG7E8V",
    "VSSEG8E8V",
    "VSSEG2E16V",
    "VSSEG3E16V",
    "VSSEG4E16V",
    "VSSEG5E16V",
    "VSSEG6E16V",
    "VSSEG7E16V",
    "VSSEG8E16V",
    "VSSEG2E32V",
    "VSSEG3E32V",
    "VSSEG4E32V",
    "VSSEG5E32V",
    "VSSEG6E32V",
    "VSSEG7E32V",
    "VSSEG8E32V",
    "VSSEG2E64V",
    "VSSEG3E64V",
    "VSSEG4E64V",
    "VSSEG5E64V",
    "VSSEG6E64V",
    "VSSEG7E64V",
    "VSSEG8E64V",
    "VLSEG2E8FFV",
    "VLSEG3E8FFV",
    "VLSEG4E8FFV",
    "VLSEG5E8FFV",
    "VLSEG6E8FFV",
    "VLSEG7E8FFV",
    "VLSEG8E8FFV",
    "VLSEG2E16FFV",
    "VLSEG3E16FFV",
    "VLSEG4E16FFV",
    "VLSEG5E16FFV",
    "VLSEG6E16FFV",
    "VLSEG7E16FFV",
    "VLSEG8E16FFV",
    "VLSEG2E32FFV",
    "VLSEG3E32FFV",
    "VLSEG4E32FFV",
    "VLSEG5E32FFV",
    "VLSEG6E32FFV",
    "VLSEG7E32FFV",
    "VLSEG8E32FFV",
    "VLSEG2E64FFV",
    "VLSEG3E64FFV",
    "VLSEG4E64FFV",
    "VLSEG5E64FFV",
    "VLSEG6E64FFV",
    "VLSEG7E64FFV",
    "VLSEG8E64FFV",
    "VLSSEG2E8V",
    "VLSSEG3E8V",
    "VLSSEG4E8V",
    "VLSSEG5E8V",
    "VLSSEG6E8V",
    "VLSSEG7E8V",
    "VLSSEG8E8V",
    "VLSSEG2E16V",
    "VLSSEG3E16V",
    "VLSSEG4E16V",
    "VLSSEG5E16V",
    "VLSSEG6E16V",
    "VLSSEG7E16V",
    "VLSSEG8E16V",
    "VLSSEG2E32V",
    "VLSSEG3E32V",
    "VLSSEG4E32V",
    "VLSSEG5E32V",
    "VLSSEG6E32V",
    "VLSSEG7E32V",
    "VLSSEG8E32V",
    "VLSSEG2E64V",
    "VLSSEG3E64V",
    "VLSSEG4E64V",
    "VLSSEG5E64V",
    "VLSSEG6E64V",
    "VLSSEG7E64V",
    "VLSSEG8E64V",
    "VSSSEG2E8V",
    "VSSSEG3E8V",
    "VSSSEG4E8V",
    "VSSSEG5E8V",
    "VSSSEG6E8V",
    "VSSSEG7E8V",
    "VSSSEG8E8V",
    "VSSSEG2E16V",
    "VSSSEG3E16V",
    "VSSSEG4E16V",
    "VSSSEG5E16V",
    "VSSSEG6E16V",
    "VSSSEG7E16V",
    "VSSSEG8E16V",
    "VSSSEG2E32V",
    "VSSSEG3E32V",
    "VSSSEG4E32V",
    "VSSSEG5E32V",
    "VSSSEG6E32V",
    "VSSSEG7E32V",
    "VSSSEG8E32V",
    "VSSSEG2E64V",
    "VSSSEG3E64V",
    "VSSSEG4E64V",
    "VSSSEG5E64V",
    "VSSSEG6E64V",
    "VSSSEG7E64V",
    "VSSSEG8E64V",
    "VLOXSEG2EI8V",
    "VLOXSEG3EI8V",
    "VLOXSEG4EI8V",
    "VLOXSEG5EI8V",
    "VLOXSEG6EI8V",
    "VLOXSEG7EI8V",
    "VLOXSEG8EI8V",
    "VLOXSEG2EI16V",
    "VLOXSEG3EI16V",
    "VLOXSEG4EI16V",
    "VLOXSEG5EI16V",
    "VLOXSEG6EI16V",
    "VLOXSEG7EI16V",
    "VLOXSEG8EI16V",
    "VLOXSEG2EI32V",
    "VLOXSEG3EI32V",
    "VLOXSEG4EI32V",
    "VLOXSEG5EI32V",
    "VLOXSEG6EI32V",
    "VLOXSEG7EI32V",
    "VLOXSEG8EI32V",
    "VLOXSEG2EI64V",
    "VLOXSEG3EI64V",
    "VLOXSEG4EI64V",
    "VLOXSEG5EI64V",
    "VLOXSEG6EI64V",
    "VLOXSEG7EI64V",
    "VLOXSEG8EI64V",
    "VSOXSEG2EI8V",
    "VSOXSEG3EI8V",
    "VSOXSEG4EI8V",
    "VSOXSEG5EI8V",
    "VSOXSEG6EI8V",
    "VSOXSEG7EI8V",
    "VSOXSEG8EI8V",
    "VSOXSEG2EI16V",
    "VSOXSEG3EI16V",
    "VSOXSEG4EI16V",
    "VSOXSEG5EI16V",
    "VSOXSEG6EI16V",
    "VSOXSEG7EI16V",
    "VSOXSEG8EI16V",
    "VSOXSEG2EI32V",
    "VSOXSEG3EI32V",
    "VSOXSEG4EI32V",
    "VSOXSEG5EI32V",
    "VSOXSEG6EI32V",
    "VSOXSEG7EI32V",
    "VSOXSEG8EI32V",
    "VSOXSEG2EI64V",
    "VSOXSEG3EI64V",
    "VSOXSEG4EI64V",
    "VSOXSEG5EI64V",
    "VSOXSEG6EI64V",
    "VSOXSEG7EI64V",
    "VSOXSEG8EI64V",
    "VLUXSEG2EI8V",
    "VLUXSEG3EI8V",
    "VLUXSEG4EI8V",
    "VLUXSEG5EI8V",
    "VLUXSEG6EI8V",
    "VLUXSEG7EI8V",
    "VLUXSEG8EI8V",
    "VLUXSEG2EI16V",
    "VLUXSEG3EI16V",
    "VLUXSEG4EI16V",
    "VLUXSEG5EI16V",
    "VLUXSEG6EI16V",
    "VLUXSEG7EI16V",
    "VLUXSEG8EI16V",
    "VLUXSEG2EI32V",
    "VLUXSEG3EI32V",
    "VLUXSEG4EI32V",
    "VLUXSEG5EI32V",
    "VLUXSEG6EI32V",
    "VLUXSEG7EI32V",
    "VLUXSEG8EI32V",
    "VLUXSEG2EI64V",
    "VLUXSEG3EI64V",
    "VLUXSEG4EI64V",
    "VLUXSEG5EI64V",
    "VLUXSEG6EI64V",
    "VLUXSEG7EI64V",
    "VLUXSEG8EI64V",
    "VSUXSEG2EI8V",
    "VSUXSEG3EI8V",
    "VSUXSEG4EI8V",
    "VSUXSEG5EI8V",
    "VSUXSEG6EI8V",
    "VSUXSEG7EI8V",
    "VSUXSEG8EI8V",
    "VSUXSEG2EI16V",
    "VSUXSEG3EI16V",
    "VSUXSEG4EI16V",
    "VSUXSEG5EI16V",
    "VSUXSEG6EI16V",
    "VSUXSEG7EI16V",
    "VSUXSEG8EI16V",
    "VSUXSEG2EI32V",
    "VSUXSEG3EI32V",
    "VSUXSEG4EI32V",
    "VSUXSEG5EI32V",
    "VSUXSEG6EI32V",
    "VSUXSEG7EI32V",
    "VSUXSEG8EI32V",
    "VSUXSEG2EI64V",
    "VSUXSEG3EI64V",
    "VSUXSEG4EI64V",
    "VSUXSEG5EI64V",
    "VSUXSEG6EI64V",
    "VSUXSEG7EI64V",
    "VSUXSEG8EI64V",
    "VL1RE8V",
    "VL1RE16V",
    "VL1RE32V",
    "VL1RE64V",
    "VL2RE8V",
    "VL2RE16V",
    "VL2RE32V",
    "VL2RE64V",
    "VL4RE8V",
    "VL4RE16V",
    "VL4RE32V",
    "VL4RE64V",
    "VL8RE8V",
    "VL8RE16V",
    "VL8RE32V",
    "VL8RE64V",
    "VS1RV",
    "VS2RV",
    "VS4RV",
    "VS8RV",
    "VADDVV",
    "VADDVX",
    "VADDVI",
    "VSUBVV",
    "VSUBVX",
    "VRSUBVX",
    "VRSUBVI",
    "VWADDUVV",
    "VWADDUVX",
    "VWSUBUVV",
    "VWSUBUVX",
    "VWADDVV",
    "VWADDVX",
    "VWSUBVV",
    "VWSUBVX",
    "VWADDUWV",
    "VWADDUWX",
    "VWSUBUWV",
    "VWSUBUWX",
    "VWADDWV",
    "VWADDWX",
    "VWSUBWV",
    "VWSUBWX",
    "VZEXTVF2",
    "VSEXTVF2",
    "VZEXTVF4",
    "VSEXTVF4",
    "VZEXTVF8",
    "VSEXTVF8",
    "VADCVVM",
    "VADCVXM",
    "VADCVIM",
    "VMADCVVM",
    "VMADCVXM",
    "VMADCVIM",
    "VMADCVV",
    "VMADCVX",
    "VMADCVI",
    "VSBCVVM",
    "VSBCVXM",
    "VMSBCVVM",
    "VMSBCVXM",
    "VMSBCVV",
    "VMSBCVX",
    "VANDVV",
    "VANDVX",
    "VANDVI",
    "VORVV",
    "VORVX",
    "VORVI",
    "VXORVV",
    "VXORVX",
    "VXORVI",
    "VSLLVV",
    "VSLLVX",
    "VSLLVI",
    "VSRLVV",
    "VSRLVX",
    "VSRLVI",
    "VSRAVV",
    "VSRAVX",
    "VSRAVI",
    "VNSRLWV",
    "VNSRLWX",
    "VNSRLWI",
    "VNSRAWV",
    "VNSRAWX",
    "VNSRAWI",
    "VMSEQVV",
    "VMSEQVX",
    "VMSEQVI",
    "VMSNEVV",
    "VMSNEVX",
    "VMSNEVI",
    "VMSLTUVV",
    "VMSLTUVX",
    "VMSLTVV",
    "VMSLTVX",
    "VMSLEUVV",
    "VMSLEUVX",
    "VMSLEUVI",
    "VMSLEVV",
    "VMSLEVX",
    "VMSLEVI",
    "VMSGTUVX",
    "VMSGTUVI",
    "VMSGTVX",
    "VMSGTVI",
    "VMINUVV",
    "VMINUVX",
    "VMINVV",
    "VMINVX",
    "VMAXUVV",
    "VMAXUVX",
    "VMAXVV",
    "VMAXVX",
    "VMULVV",
    "VMULVX",
    "VMULHVV",
    "VMULHVX",
    "VMULHUVV",
    "VMULHUVX",
    "VMULHSUVV",
    "VMULHSUVX",
    "VDIVUVV",
    "VDIVUVX",
    "VDIVVV",
    "VDIVVX",
    "VREMUVV",
    "VREMUVX",
    "VREMVV",
    "VREMVX",
    "VWMULVV",
    "VWMULVX",
    "VWMULUVV",
    "VWMULUVX",
    "VWMULSUVV",
    "VWMULSUVX",
    "VMACCVV",
    "VMACCVX",
    "VNMSACVV",
    "VNMSACVX",
    "VMADDVV",
    "VMADDVX",
    "VNMSUBVV",
    "VNMSUBVX",
    "VWMACCUVV",
    "VWMACCUVX",
    "VWMACCVV",
    "VWMACCVX",
    "VWMACCSUVV",
    "VWMACCSUVX",
    "VWMACCUSVX",
    "VMERGEVVM",
    "VMERGEVXM",
    "VMERGEVIM",
    "VMVVV",
    "VMVVX",
    "VMVVI",
    "VSADDUVV",
    "VSADDUVX",
    "VSADDUVI",
    "VSADDVV",
    "VSADDVX",
    "VSADDVI",
    "VSSUBUVV",
    "VSSUBUVX",
    "VSSUBVV",
    "VSSUBVX",
    "VAADDUVV",
    "VAADDUVX",
    "VAADDVV",
    "VAADDVX",
    "VASUBUVV",
    "VASUBUVX",
    "VASUBVV",
    "VASUBVX",
    "VSMULVV",
    "VSMULVX",
    "VSSRLVV",
    "VSSRLVX",
    "VSSRLVI",
    "VSSRAVV",
    "VSSRAVX",
    "VSSRAVI",
    "VNCLIPUWV",
    "VNCLIPUWX",
    "VNCLIPUWI",
    "VNCLIPWV",
    "VNCLIPWX",
    "VNCLIPWI",
    "VFADDVV",
    "VFADDVF",
    "VFSUBVV",
    "VFSUBVF",
    "VFRSUBVF",
    "VFWADDVV",
    "VFWADDVF",
    "VFWSUBVV",
    "VFWSUBVF",
    "VFWADDWV",
    "VFWADDWF",
    "VFWSUBWV",
    "VFWSUBWF",
    "VFMULVV",
    "VFMULVF",
    "VFDIVVV",
    "VFDIVVF",
    "VFRDIVVF",
    "VFWMULVV",
    "VFWMULVF",
    "VFMACCVV",
    "VFMACCVF",
    "VFNMACCVV",
    "VFNMACCVF",
    "VFMSACVV",
    "VFMSACVF",
    "VFNMSACVV",
    "VFNMSACVF",
    "VFMADDVV",
    "VFMADDVF",
    "VFNMADDVV",
    "VFNMADDVF",
    "VFMSUBVV",
    "VFMSUBVF",
    "VFNMSUBVV",
    "VFNMSUBVF",
    "VFWMACCVV",
    "VFWMACCVF",
    "VFWNMACCVV",
    "VFWNMACCVF",
    "VFWMSACVV",
    "VFWMSACVF",
    "VFWNMSACVV",
    "VFWNMSACVF",
    "VFSQRTV",
    "VFRSQRT7V",
    "VFREC7V",
    "VFMINVV",
    "VFMINVF",
    "VFMAXVV",
    "VFMAXVF",
    "VFSGNJVV",
    "VFSGNJVF",
    "VFSGNJNVV",
    "VFSGNJNVF",
    "VFSGNJXVV",
    "VFSGNJXVF",
    "VMFEQVV",
    "VMFEQVF",
    "VMFNEVV",
    "VMFNEVF",
    "VMFLTVV",
    "VMFLTVF",
    "VMFLEVV",
    "VMFLEVF",
    "VMFGTVF",
    "VMFGEVF",
    "VFCLASSV",
    "VFMERGEVFM",
    "VFMVVF",
    "VFCVTXUFV",
    "VFCVTXFV",
    "VFCVTRTZXUFV",
    "VFCVTRTZXFV",
    "VFCVTFXUV",
    "VFCVTFXV",
    "VFWCVTXUFV",
    "VFWCVTXFV",
    "VFWCVTRTZXUFV",
    "VFWCVTRTZXFV",
    "VFWCVTFXUV",
    "VFWCVTFXV",
    "VFWCVTFFV",
    "VFNCVTXUFW",
    "VFNCVTXFW",
    "VFNCVTRTZXUFW",
    "VFNCVTRTZXFW",
    "VFNCVTFXUW",
    "VFNCVTFXW",
    "VFNCVTFFW",
    "VFNCVTRODFFW",
    "VREDSUMVS",
    "VREDMAXUVS",
    "VREDMAXVS",
    "VREDMINUVS",
    "VREDMINVS",
    "VREDANDVS",
    "VREDORVS",
    "VREDXORVS",
    "VWREDSUMUVS",
    "VWREDSUMVS",
    "VFREDOSUMVS",
    "VFREDUSUMVS",
    "VFREDMAXVS",
    "VFREDMINVS",
    "VFWREDOSUMVS",
    "VFWREDUSUMVS",
    "VMANDMM",
    "VMNANDMM",
    "VMANDNMM",
    "VMXORMM",
    "VMORMM",
    "VMNORMM",
    "VMORNMM",
    "VMXNORMM",
    "VCPOPM",
    "VFIRSTM",
    "VMSBFM",
    "VMSIFM",
    "VMSOFM",
    "VIOTAM",
    "VIDV",
    "VMVXS",
    "VMVSX",
    "VFMVFS",
    "VFMVSF",
    "VSLIDEUPVX",
    "VSLIDEUPVI",
    "VSLIDEDOWNVX",
    "VSLIDEDOWNVI",
    "VSLIDE1UPVX",
    "VFSLIDE1UPVF",
    "VSLIDE1DOWNVX",
    "VFSLIDE1DOWNVF",
    "VRGATHERVV",
    "VRGATHEREI16VV",
    "VRGATHERVX",
    "VRGATHERVI",
    "VCOMPRESSVM",
    "VMV1RV",
    "VMV2RV",
    "VMV4RV",
    "VMV8RV",
    "ECALL",
    "SCALL",
    "EBREAK",
    "SBREAK",
    "MRET",
    "SRET",
    "DRET",
    "WFI",
    "SFENCEVMA",
    "WORD",
    "BEQZ",
    "BGEZ",
    "BGT",
    "BGTU",
    "BGTZ",
    "BLE",
    "BLEU",
    "BLEZ",
    "BLTZ",
    "BNEZ",
    "FABSD",
    "FABSS",
    "FNED",
    "FNEGD",
    "FNEGS",
    "FNES",
    "MOV",
    "MOVB",
    "MOVBU",
    "MOVD",
    "MOVF",
    "MOVH",
    "MOVHU",
    "MOVW",
    "MOVWU",
    "NEG",
    "NEGW",
    "NOT",
    "RDCYCLE",
    "RDINSTRET",
    "RDTIME",
    "SEQZ",
    "SNEZ",
    "VFABSV",
    "VFNEGV",
    "VL1RV",
    "VL2RV",
    "VL4RV",
    "VL8RV",
    "VMCLRM",
    "VMFGEVV",
    "VMFGTVV",
    "VMMVM",
    "VMNOTM",
    "VMSETM",
    "VMSGEUVI",
    "VMSGEUVV",
    "VMSGEVI",
    "VMSGEVV",
    "VMSGTUVV",
    "VMSGTVV",
    "VMSLTUVI",
    "VMSLTVI",
    "VNCVTXXW",
    "VNEGV",
    "VNOTV",
    "VWCVTUXXV",
    "VWCVTXXV",
    "LAST",
}

var CSRs map[uint16]string = csrs
var LinkRISCV64 = obj.LinkArch{
    Arch:           sys.ArchRISCV64,
    Init:           buildop,
    Preprocess:     preprocess,
    Assemble:       assemble,
    Progedit:       progedit,
    UnaryDst:       unaryDst,
    DWARFRegisters: RISCV64DWARFRegisters,
}

https://github.com/riscv-non-isa/riscv-elf-psabi-doc/blob/master/riscv-dwarf.adoc#dwarf-register-numbers

var RISCV64DWARFRegisters = map[int16]int16{
    REG_X0:  0,
    REG_X1:  1,
    REG_X2:  2,
    REG_X3:  3,
    REG_X4:  4,
    REG_X5:  5,
    REG_X6:  6,
    REG_X7:  7,
    REG_X8:  8,
    REG_X9:  9,
    REG_X10: 10,
    REG_X11: 11,
    REG_X12: 12,
    REG_X13: 13,
    REG_X14: 14,
    REG_X15: 15,
    REG_X16: 16,
    REG_X17: 17,
    REG_X18: 18,
    REG_X19: 19,
    REG_X20: 20,
    REG_X21: 21,
    REG_X22: 22,
    REG_X23: 23,
    REG_X24: 24,
    REG_X25: 25,
    REG_X26: 26,
    REG_X27: 27,
    REG_X28: 28,
    REG_X29: 29,
    REG_X30: 30,
    REG_X31: 31,

    REG_F0:  32,
    REG_F1:  33,
    REG_F2:  34,
    REG_F3:  35,
    REG_F4:  36,
    REG_F5:  37,
    REG_F6:  38,
    REG_F7:  39,
    REG_F8:  40,
    REG_F9:  41,
    REG_F10: 42,
    REG_F11: 43,
    REG_F12: 44,
    REG_F13: 45,
    REG_F14: 46,
    REG_F15: 47,
    REG_F16: 48,
    REG_F17: 49,
    REG_F18: 50,
    REG_F19: 51,
    REG_F20: 52,
    REG_F21: 53,
    REG_F22: 54,
    REG_F23: 55,
    REG_F24: 56,
    REG_F25: 57,
    REG_F26: 58,
    REG_F27: 59,
    REG_F28: 60,
    REG_F29: 61,
    REG_F30: 62,
    REG_F31: 63,
}

func EncodeBImmediate

func EncodeBImmediate(imm int64) (int64, error)

func EncodeCBImmediate

func EncodeCBImmediate(imm int64) (int64, error)

func EncodeCJImmediate

func EncodeCJImmediate(imm int64) (int64, error)

func EncodeIImmediate

func EncodeIImmediate(imm int64) (int64, error)

func EncodeJImmediate

func EncodeJImmediate(imm int64) (int64, error)

func EncodeSImmediate

func EncodeSImmediate(imm int64) (int64, error)

func EncodeUImmediate

func EncodeUImmediate(imm int64) (int64, error)

func EncodeVectorType

func EncodeVectorType(vsew, vlmul, vtail, vmask int64) (int64, error)

func InvertBranch

func InvertBranch(as obj.As) obj.As

InvertBranch inverts the condition of a conditional branch.

func ParseSuffix

func ParseSuffix(prog *obj.Prog, cond string) (err error)

func RegName

func RegName(r int) string

func Split32BitImmediate

func Split32BitImmediate(imm int64) (low, high int64, err error)

Split32BitImmediate splits a signed 32-bit immediate into a signed 20-bit upper immediate and a signed 12-bit lower immediate to be added to the upper result. For example, high may be used in LUI and low in a following ADDI to generate a full 32-bit constant.

type SpecialOperand

type SpecialOperand int

const (
    SPOP_BEGIN SpecialOperand = obj.SpecialOperandRISCVBase
    SPOP_RVV_BEGIN

    // Vector mask policy.
    SPOP_MA SpecialOperand = obj.SpecialOperandRISCVBase + iota - 2
    SPOP_MU

    // Vector tail policy.
    SPOP_TA
    SPOP_TU

    // Vector register group multiplier (VLMUL).
    SPOP_M1
    SPOP_M2
    SPOP_M4
    SPOP_M8
    SPOP_MF2
    SPOP_MF4
    SPOP_MF8

    // Vector selected element width (VSEW).
    SPOP_E8
    SPOP_E16
    SPOP_E32
    SPOP_E64
    SPOP_RVV_END

    // CSR names.  4096 special operands are reserved for RISC-V CSR names.
    SPOP_CSR_BEGIN = SPOP_RVV_END
    SPOP_CSR_END   = SPOP_CSR_BEGIN + 4096

    SPOP_END = SPOP_CSR_END + 1
)

func (SpecialOperand) String

func (so SpecialOperand) String() string

String returns the textual representation of a SpecialOperand.