🔬 This is a nightly-only experimental API (stdsimd #27731). Available on ARM only.

Platform-specific intrinsics for the arm platform.
See the module documentation for more details.
Modules
- dsp (Experimental): References:
Structs
- APSR (Experimental): Application Program Status Register.
- SY (Experimental): Full system is the required shareability domain; reads and writes are the required access types.
- float32x2x2_t (Experimental): ARM-specific type containing two float32x2_t vectors.
- float32x2x3_t (Experimental): ARM-specific type containing three float32x2_t vectors.
- float32x2x4_t (Experimental): ARM-specific type containing four float32x2_t vectors.
- float32x4x2_t (Experimental): ARM-specific type containing two float32x4_t vectors.
- float32x4x3_t (Experimental): ARM-specific type containing three float32x4_t vectors.
- float32x4x4_t (Experimental): ARM-specific type containing four float32x4_t vectors.
- int8x4_t (Experimental): ARM-specific 32-bit wide vector of four packed i8.
- int8x8x2_t (Experimental): ARM-specific type containing two int8x8_t vectors.
- int8x8x3_t (Experimental): ARM-specific type containing three int8x8_t vectors.
- int8x8x4_t (Experimental): ARM-specific type containing four int8x8_t vectors.
- int8x16x2_t (Experimental): ARM-specific type containing two int8x16_t vectors.
- int8x16x3_t (Experimental): ARM-specific type containing three int8x16_t vectors.
- int8x16x4_t (Experimental): ARM-specific type containing four int8x16_t vectors.
- int16x2_t (Experimental): ARM-specific 32-bit wide vector of two packed i16.
- int16x4x2_t (Experimental): ARM-specific type containing two int16x4_t vectors.
- int16x4x3_t (Experimental): ARM-specific type containing three int16x4_t vectors.
- int16x4x4_t (Experimental): ARM-specific type containing four int16x4_t vectors.
- int16x8x2_t (Experimental): ARM-specific type containing two int16x8_t vectors.
- int16x8x3_t (Experimental): ARM-specific type containing three int16x8_t vectors.
- int16x8x4_t (Experimental): ARM-specific type containing four int16x8_t vectors.
- int32x2x2_t (Experimental): ARM-specific type containing two int32x2_t vectors.
- int32x2x3_t (Experimental): ARM-specific type containing three int32x2_t vectors.
- int32x2x4_t (Experimental): ARM-specific type containing four int32x2_t vectors.
- int32x4x2_t (Experimental): ARM-specific type containing two int32x4_t vectors.
- int32x4x3_t (Experimental): ARM-specific type containing three int32x4_t vectors.
- int32x4x4_t (Experimental): ARM-specific type containing four int32x4_t vectors.
- int64x1x2_t (Experimental): ARM-specific type containing two int64x1_t vectors.
- int64x1x3_t (Experimental): ARM-specific type containing three int64x1_t vectors.
- int64x1x4_t (Experimental): ARM-specific type containing four int64x1_t vectors.
- int64x2x2_t (Experimental): ARM-specific type containing two int64x2_t vectors.
- int64x2x3_t (Experimental): ARM-specific type containing three int64x2_t vectors.
- int64x2x4_t (Experimental): ARM-specific type containing four int64x2_t vectors.
- poly8x8x2_t (Experimental): ARM-specific type containing two poly8x8_t vectors.
- poly8x8x3_t (Experimental): ARM-specific type containing three poly8x8_t vectors.
- poly8x8x4_t (Experimental): ARM-specific type containing four poly8x8_t vectors.
- poly8x16x2_t (Experimental): ARM-specific type containing two poly8x16_t vectors.
- poly8x16x3_t (Experimental): ARM-specific type containing three poly8x16_t vectors.
- poly8x16x4_t (Experimental): ARM-specific type containing four poly8x16_t vectors.
- poly16x4x2_t (Experimental): ARM-specific type containing two poly16x4_t vectors.
- poly16x4x3_t (Experimental): ARM-specific type containing three poly16x4_t vectors.
- poly16x4x4_t (Experimental): ARM-specific type containing four poly16x4_t vectors.
- poly16x8x2_t (Experimental): ARM-specific type containing two poly16x8_t vectors.
- poly16x8x3_t (Experimental): ARM-specific type containing three poly16x8_t vectors.
- poly16x8x4_t (Experimental): ARM-specific type containing four poly16x8_t vectors.
- poly64x1x2_t (Experimental): ARM-specific type containing two poly64x1_t vectors.
- poly64x1x3_t (Experimental): ARM-specific type containing three poly64x1_t vectors.
- poly64x1x4_t (Experimental): ARM-specific type containing four poly64x1_t vectors.
- poly64x2x2_t (Experimental): ARM-specific type containing two poly64x2_t vectors.
- poly64x2x3_t (Experimental): ARM-specific type containing three poly64x2_t vectors.
- poly64x2x4_t (Experimental): ARM-specific type containing four poly64x2_t vectors.
- uint8x4_t (Experimental): ARM-specific 32-bit wide vector of four packed u8.
- uint8x8x2_t (Experimental): ARM-specific type containing two uint8x8_t vectors.
- uint8x8x3_t (Experimental): ARM-specific type containing three uint8x8_t vectors.
- uint8x8x4_t (Experimental): ARM-specific type containing four uint8x8_t vectors.
- uint8x16x2_t (Experimental): ARM-specific type containing two uint8x16_t vectors.
- uint8x16x3_t (Experimental): ARM-specific type containing three uint8x16_t vectors.
- uint8x16x4_t (Experimental): ARM-specific type containing four uint8x16_t vectors.
- uint16x2_t (Experimental): ARM-specific 32-bit wide vector of two packed u16.
- uint16x4x2_t (Experimental): ARM-specific type containing two uint16x4_t vectors.
- uint16x4x3_t (Experimental): ARM-specific type containing three uint16x4_t vectors.
- uint16x4x4_t (Experimental): ARM-specific type containing four uint16x4_t vectors.
- uint16x8x2_t (Experimental): ARM-specific type containing two uint16x8_t vectors.
- uint16x8x3_t (Experimental): ARM-specific type containing three uint16x8_t vectors.
- uint16x8x4_t (Experimental): ARM-specific type containing four uint16x8_t vectors.
- uint32x2x2_t (Experimental): ARM-specific type containing two uint32x2_t vectors.
- uint32x2x3_t (Experimental): ARM-specific type containing three uint32x2_t vectors.
- uint32x2x4_t (Experimental): ARM-specific type containing four uint32x2_t vectors.
- uint32x4x2_t (Experimental): ARM-specific type containing two uint32x4_t vectors.
- uint32x4x3_t (Experimental): ARM-specific type containing three uint32x4_t vectors.
- uint32x4x4_t (Experimental): ARM-specific type containing four uint32x4_t vectors.
- uint64x1x2_t (Experimental): ARM-specific type containing two uint64x1_t vectors.
- uint64x1x3_t (Experimental): ARM-specific type containing three uint64x1_t vectors.
- uint64x1x4_t (Experimental): ARM-specific type containing four uint64x1_t vectors.
- uint64x2x2_t (Experimental): ARM-specific type containing two uint64x2_t vectors.
- uint64x2x3_t (Experimental): ARM-specific type containing three uint64x2_t vectors.
- uint64x2x4_t (Experimental): ARM-specific type containing four uint64x2_t vectors.
- float32x2_t: ARM-specific 64-bit wide vector of two packed f32.
- float32x4_t: ARM-specific 128-bit wide vector of four packed f32.
- int8x8_t: ARM-specific 64-bit wide vector of eight packed i8.
- int8x16_t: ARM-specific 128-bit wide vector of sixteen packed i8.
- int16x4_t: ARM-specific 64-bit wide vector of four packed i16.
- int16x8_t: ARM-specific 128-bit wide vector of eight packed i16.
- int32x2_t: ARM-specific 64-bit wide vector of two packed i32.
- int32x4_t: ARM-specific 128-bit wide vector of four packed i32.
- int64x1_t: ARM-specific 64-bit wide vector of one packed i64.
- int64x2_t: ARM-specific 128-bit wide vector of two packed i64.
- poly8x8_t: ARM-specific 64-bit wide polynomial vector of eight packed p8.
- poly8x16_t: ARM-specific 128-bit wide vector of sixteen packed p8.
- poly16x4_t: ARM-specific 64-bit wide vector of four packed p16.
- poly16x8_t: ARM-specific 128-bit wide vector of eight packed p16.
- poly64x1_t: ARM-specific 64-bit wide vector of one packed p64.
- poly64x2_t: ARM-specific 128-bit wide vector of two packed p64.
- uint8x8_t: ARM-specific 64-bit wide vector of eight packed u8.
- uint8x16_t: ARM-specific 128-bit wide vector of sixteen packed u8.
- uint16x4_t: ARM-specific 64-bit wide vector of four packed u16.
- uint16x8_t: ARM-specific 128-bit wide vector of eight packed u16.
- uint32x2_t: ARM-specific 64-bit wide vector of two packed u32.
- uint32x4_t: ARM-specific 128-bit wide vector of four packed u32.
- uint64x1_t: ARM-specific 64-bit wide vector of one packed u64.
- uint64x2_t: ARM-specific 128-bit wide vector of two packed u64.
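The xN struct families above exist so that de-interleaving loads such as vld2 have a return type bundling N vectors. A portable sketch of the idea (the struct and helper names here are made up for illustration; the real types are opaque SIMD registers, not arrays):

```rust
// Portable mimic of the int8x8x2_t idea: two 8-lane vectors bundled
// together, as returned by a vld2-style de-interleaving load.
#[derive(Debug, PartialEq)]
struct I8x8x2([i8; 8], [i8; 8]);

// De-interleave 16 bytes into even-index and odd-index lanes,
// roughly what vld2_s8 does in one instruction on ARM.
fn deinterleave(src: &[i8; 16]) -> I8x8x2 {
    let mut even = [0i8; 8];
    let mut odd = [0i8; 8];
    for i in 0..8 {
        even[i] = src[2 * i];
        odd[i] = src[2 * i + 1];
    }
    I8x8x2(even, odd)
}

fn main() {
    let src: [i8; 16] = core::array::from_fn(|i| i as i8);
    let v = deinterleave(&src);
    assert_eq!(v.0, [0, 2, 4, 6, 8, 10, 12, 14]);
    assert_eq!(v.1, [1, 3, 5, 7, 9, 11, 13, 15]);
}
```

The same pattern scales to the x3 and x4 families, which bundle three and four registers for vld3/vld4-style loads.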
Functions
- __breakpoint ⚠ (Experimental): Inserts a breakpoint instruction.
- __clrex ⚠ (Experimental): Removes the exclusive lock created by LDREX.
- __crc32b: CRC32 single round checksum for bytes (8 bits).
- __crc32cb: CRC32-C single round checksum for bytes (8 bits).
- __crc32ch: CRC32-C single round checksum for half words (16 bits).
- __crc32cw: CRC32-C single round checksum for words (32 bits).
- __crc32h: CRC32 single round checksum for half words (16 bits).
- __crc32w: CRC32 single round checksum for words (32 bits).
- __dbg ⚠ (Experimental): Generates a DBG instruction.
- __dmb ⚠ (Experimental): Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.
- __dsb ⚠ (Experimental): Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.
- __isb ⚠ (Experimental): Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.
- __ldrex ⚠ (Experimental): Executes an exclusive LDR instruction for a 32-bit value.
- __ldrexb ⚠ (Experimental): Executes an exclusive LDR instruction for an 8-bit value.
- __ldrexh ⚠ (Experimental): Executes an exclusive LDR instruction for a 16-bit value.
- __nop ⚠ (Experimental): Generates an unspecified no-op instruction.
- __qadd ⚠ (Experimental): Signed saturating addition.
- __qadd8 ⚠ (Experimental): Saturating four 8-bit integer additions.
- __qadd16 ⚠ (Experimental): Saturating two 16-bit integer additions.
- __qasx ⚠ (Experimental): Returns the 16-bit signed saturated equivalent of …
- __qdbl ⚠ (Experimental): Inserts a QADD instruction.
- __qsax ⚠ (Experimental): Returns the 16-bit signed saturated equivalent of …
- __qsub ⚠ (Experimental): Signed saturating subtraction.
- __qsub8 ⚠ (Experimental): Saturating four 8-bit integer subtractions.
- __qsub16 ⚠ (Experimental): Saturating two 16-bit integer subtractions.
- __rsr ⚠ (Experimental): Reads a 32-bit system register.
- __rsrp ⚠ (Experimental): Reads a system register containing an address.
- __sadd8 ⚠ (Experimental): Returns the 8-bit signed saturated equivalent of …
- __sadd16 ⚠ (Experimental): Returns the 16-bit signed saturated equivalent of …
- __sasx ⚠ (Experimental): Returns the 16-bit signed equivalent of …
- __sel ⚠ (Experimental): Selects bytes from each operand according to the APSR GE flags.
- __sev ⚠ (Experimental): Generates a SEV (send a global event) hint instruction.
- __sevl ⚠ (Experimental): Generates a SEVL (send a local event) hint instruction.
- __shadd8 ⚠ (Experimental): Signed halving parallel byte-wise addition.
- __shadd16 ⚠ (Experimental): Signed halving parallel halfword-wise addition.
- __shsub8 ⚠ (Experimental): Signed halving parallel byte-wise subtraction.
- __shsub16 ⚠ (Experimental): Signed halving parallel halfword-wise subtraction.
- __smlabb ⚠ (Experimental): Inserts a SMLABB instruction.
- __smlabt ⚠ (Experimental): Inserts a SMLABT instruction.
- __smlad ⚠ (Experimental): Dual 16-bit signed multiply with addition of products and 32-bit accumulation.
- __smlatb ⚠ (Experimental): Inserts a SMLATB instruction.
- __smlatt ⚠ (Experimental): Inserts a SMLATT instruction.
- __smlawb ⚠ (Experimental): Inserts a SMLAWB instruction.
- __smlawt ⚠ (Experimental): Inserts a SMLAWT instruction.
- __smlsd ⚠ (Experimental): Dual 16-bit signed multiply with subtraction of products, 32-bit accumulation, and overflow detection.
- __smuad ⚠ (Experimental): Signed Dual Multiply Add.
- __smuadx ⚠ (Experimental): Signed Dual Multiply Add Reversed.
- __smulbb ⚠ (Experimental): Inserts a SMULBB instruction.
- __smulbt ⚠ (Experimental): Inserts a SMULBT instruction.
- __smultb ⚠ (Experimental): Inserts a SMULTB instruction.
- __smultt ⚠ (Experimental): Inserts a SMULTT instruction.
- __smulwb ⚠ (Experimental): Inserts a SMULWB instruction.
- __smulwt ⚠ (Experimental): Inserts a SMULWT instruction.
- __smusd ⚠ (Experimental): Signed Dual Multiply Subtract.
- __smusdx ⚠ (Experimental): Signed Dual Multiply Subtract Reversed.
- __ssub8 ⚠ (Experimental): Inserts a SSUB8 instruction.
- __strex ⚠ (Experimental): Executes an exclusive STR instruction for 32-bit values.
- __strexb ⚠ (Experimental): Executes an exclusive STR instruction for 8-bit values.
- __usad8 ⚠ (Experimental): Sum of 8-bit absolute differences.
- __usada8 ⚠ (Experimental): Sum of 8-bit absolute differences and constant.
- __usub8 ⚠ (Experimental): Inserts a USUB8 instruction.
- __wfe ⚠ (Experimental): Generates a WFE (wait for event) hint instruction, or nothing.
- __wfi ⚠ (Experimental): Generates a WFI (wait for interrupt) hint instruction, or nothing.
- __wsr ⚠ (Experimental): Writes a 32-bit system register.
- __wsrp ⚠ (Experimental): Writes a system register containing an address.
- __yield ⚠ (Experimental): Generates a YIELD hint instruction.
- _clz_u8 ⚠ (Experimental): Count Leading Zeros.
- _clz_u16 ⚠ (Experimental): Count Leading Zeros.
- _clz_u32 ⚠ (Experimental): Count Leading Zeros.
- _rbit_u32 ⚠ (Experimental): Reverse the bit order.
- _rev_u16 ⚠ (Experimental): Reverse the order of the bytes.
- _rev_u32 ⚠ (Experimental): Reverse the order of the bytes.
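The DSP intrinsics above operate on lanes packed inside an ordinary 32-bit word. Their semantics can be sketched in portable Rust (these helpers are stand-ins written for illustration; the real intrinsics require nightly Rust and an ARM target):

```rust
// Portable models of what a few of the DSP intrinsics compute.
// These are NOT the intrinsics themselves, just their lane math.

/// Sketch of __qadd8: four lane-wise saturating i8 additions packed in a u32.
fn qadd8(a: u32, b: u32) -> u32 {
    let mut out = 0u32;
    for lane in 0..4 {
        let x = (a >> (8 * lane)) as i8;
        let y = (b >> (8 * lane)) as i8;
        out |= (x.saturating_add(y) as u8 as u32) << (8 * lane);
    }
    out
}

/// Sketch of __smlad: multiply the signed 16-bit halves pairwise,
/// then add both products to the accumulator.
fn smlad(a: u32, b: u32, acc: i32) -> i32 {
    let (a_lo, a_hi) = (a as i16 as i32, (a >> 16) as i16 as i32);
    let (b_lo, b_hi) = (b as i16 as i32, (b >> 16) as i16 as i32);
    acc.wrapping_add(a_lo * b_lo).wrapping_add(a_hi * b_hi)
}

/// Sketch of __usad8: sum of absolute differences of the four bytes.
fn usad8(a: u32, b: u32) -> u32 {
    (0..4)
        .map(|lane| ((a >> (8 * lane)) & 0xFF).abs_diff((b >> (8 * lane)) & 0xFF))
        .sum()
}

fn main() {
    // Lane saturates at i8::MAX instead of wrapping to -128.
    assert_eq!(qadd8(0x0000_007F, 0x0000_0001), 0x0000_007F);
    // (2*3) + (4*5) + 100 = 126
    assert_eq!(smlad((4u32 << 16) | 2, (5u32 << 16) | 3, 100), 126);
    // |1-4| + |2-2| + |3-1| + |0-0| = 5
    assert_eq!(usad8(0x0003_0201, 0x0001_0204), 5);
    // _clz_u32 / _rbit_u32 / _rev_u32 correspond to plain integer methods.
    assert_eq!(1u32.leading_zeros(), 31);
    assert_eq!(1u32.reverse_bits(), 0x8000_0000);
    assert_eq!(0x1234_5678u32.swap_bytes(), 0x7856_3412);
}
```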
- Absolute value (wrapping).
- Absolute value (wrapping).
- Absolute value (wrapping).
- Absolute value (wrapping).
- Absolute value (wrapping).
- Absolute value (wrapping).
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Add returning High Narrow (high half).
- Add returning High Narrow (high half).
- Add returning High Narrow (high half).
- Add returning High Narrow (high half).
- Add returning High Narrow (high half).
- Add returning High Narrow (high half).
- Add returning High Narrow.
- Add returning High Narrow.
- Add returning High Narrow.
- Add returning High Narrow.
- Add returning High Narrow.
- Add returning High Narrow.
- Signed Add Long (vector, high half).
- Signed Add Long (vector, high half).
- Signed Add Long (vector, high half).
- Unsigned Add Long (vector, high half).
- Unsigned Add Long (vector, high half).
- Unsigned Add Long (vector, high half).
- Signed Add Long (vector).
- Signed Add Long (vector).
- Signed Add Long (vector).
- Unsigned Add Long (vector).
- Unsigned Add Long (vector).
- Unsigned Add Long (vector).
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Vector add.
- Signed Add Wide (high half).
- Signed Add Wide (high half).
- Signed Add Wide (high half).
- Unsigned Add Wide (high half).
- Unsigned Add Wide (high half).
- Unsigned Add Wide (high half).
- Signed Add Wide.
- Signed Add Wide.
- Signed Add Wide.
- Unsigned Add Wide.
- Unsigned Add Wide.
- Unsigned Add Wide.
- AES single round decryption.
- AES single round encryption.
- AES inverse mix columns.
- AES mix columns.
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Vector bitwise bit clear
- Bitwise Select.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select. This instruction sets each bit in the destination SIMD&FP register to the corresponding bit from the first source SIMD&FP register when the original destination bit was 1, otherwise from the second source SIMD&FP register.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select.
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Bitwise Select. (128-bit)
- Population count per byte.
- Population count per byte.
- Population count per byte.
- Population count per byte.
- Population count per byte.
- Population count per byte.
- Vector combine
- Vector combine
- Vector combine
- Vector combine
- Vector combine
- Vector combine
- Vector combine
- Vector combine
- Vector combine
- Floating-point Convert to Signed fixed-point, rounding toward Zero (vector)
- Floating-point Convert to Unsigned fixed-point, rounding toward Zero (vector)
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Extract vector from pair of vectors
- Extract vector from pair of vectors
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Move vector element to general-purpose register
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load multiple single-element structures to one, two, three, or four registers.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load one single-element structure and Replicate to all lanes (of one register).
- Load multiple single-element structures to one, two, three, or four registers.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load one single-element structure to one lane of one register.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load multiple single-element structures to one, two, three, or four registers.
- Load SIMD&FP register (immediate offset)
- 8-bit integer matrix multiply-accumulate
- 8-bit integer matrix multiply-accumulate
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Vector long move.
- Vector long move.
- Vector long move.
- Vector long move.
- Vector long move.
- Vector long move.
- Vector narrow integer.
- Vector narrow integer.
- Vector narrow integer.
- Vector narrow integer.
- Vector narrow integer.
- Vector narrow integer.
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Duplicate vector element to vector or scalar
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise not.
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Vector bitwise inclusive OR NOT
- Signed Add and Accumulate Long Pairwise.
- Signed Add and Accumulate Long Pairwise.
- Signed Add and Accumulate Long Pairwise.
- Unsigned Add and Accumulate Long Pairwise.
- Unsigned Add and Accumulate Long Pairwise.
- Unsigned Add and Accumulate Long Pairwise.
- Signed Add and Accumulate Long Pairwise.
- Signed Add and Accumulate Long Pairwise.
- Signed Add and Accumulate Long Pairwise.
- Unsigned Add and Accumulate Long Pairwise.
- Unsigned Add and Accumulate Long Pairwise.
- Unsigned Add and Accumulate Long Pairwise.
- Add pairwise.
- Add pairwise.
- Add pairwise.
- Add pairwise.
- Add pairwise.
- Add pairwise.
- Signed Add Long Pairwise.
- Signed Add Long Pairwise.
- Signed Add Long Pairwise.
- Unsigned Add Long Pairwise.
- Unsigned Add Long Pairwise.
- Unsigned Add Long Pairwise.
- Signed Add Long Pairwise.
- Signed Add Long Pairwise.
- Signed Add Long Pairwise.
- Unsigned Add Long Pairwise.
- Unsigned Add Long Pairwise.
- Unsigned Add Long Pairwise.
- Folding maximum of adjacent pairs
- Folding maximum of adjacent pairs
- Folding maximum of adjacent pairs
- Folding maximum of adjacent pairs
- Folding maximum of adjacent pairs
- Folding maximum of adjacent pairs
- Folding maximum of adjacent pairs
- Folding minimum of adjacent pairs
- Folding minimum of adjacent pairs
- Folding minimum of adjacent pairs
- Folding minimum of adjacent pairs
- Folding minimum of adjacent pairs
- Folding minimum of adjacent pairs
- Folding minimum of adjacent pairs
- Rounding Add returning High Narrow (high half).
- Rounding Add returning High Narrow (high half).
- Rounding Add returning High Narrow (high half).
- Rounding Add returning High Narrow (high half).
- Rounding Add returning High Narrow (high half).
- Rounding Add returning High Narrow (high half).
- Rounding Add returning High Narrow.
- Rounding Add returning High Narrow.
- Rounding Add returning High Narrow.
- Rounding Add returning High Narrow.
- Rounding Add returning High Narrow.
- Rounding Add returning High Narrow.
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- Reversing vector elements (swap endianness)
- SHA1 hash update accelerator, choose.
- SHA1 fixed rotate.
- SHA1 hash update accelerator, majority.
- SHA1 hash update accelerator, parity.
- SHA1 schedule update accelerator, first part.
- SHA1 schedule update accelerator, second part.
- SHA256 hash update accelerator, upper part.
- SHA256 hash update accelerator.
- SHA256 schedule update accelerator, first part.
- SHA256 schedule update accelerator, second part.
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Left and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
- Shift Right and Insert (immediate)
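The shift-and-insert entries above (VSLI/VSRI) combine a shifted operand with preserved bits of the destination. A portable single-lane sketch of the semantics, with illustrative helper names that are not part of this API:

```rust
// Single-lane sketch of VSLI/VSRI semantics on u8 lanes.
// VSLI: shift `b` left by N and insert into `a`, keeping the low N bits of `a`.
fn sli_lane_u8(a: u8, b: u8, n: u32) -> u8 {
    let low_mask = ((1u16 << n) - 1) as u8;
    (b << n) | (a & low_mask)
}

// VSRI: shift `b` right by N and insert into `a`, keeping the high N bits of `a`.
fn sri_lane_u8(a: u8, b: u8, n: u32) -> u8 {
    let high_mask = !(0xFFu8 >> n);
    (b >> n) | (a & high_mask)
}

fn main() {
    assert_eq!(sli_lane_u8(0xFF, 0x01, 4), 0x1F); // 0x10 | 0x0F
    assert_eq!(sri_lane_u8(0xFF, 0x80, 4), 0xF8); // 0x08 | 0xF0
    println!("ok");
}
```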
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store multiple single-element structures from one, two, three, or four registers.
- Store SIMD&FP register (immediate offset)
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
- Extended table look-up
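The two look-up families above differ only in how an out-of-range index is handled: plain table look-up (VTBL) yields zero, while the extended form (VTBX) leaves the destination element unchanged. A portable sketch with illustrative helper names:

```rust
// Sketch of the table look-up semantics on single bytes.
// VTBL: out-of-range index selects zero.
fn tbl(table: &[u8], idx: u8) -> u8 {
    *table.get(idx as usize).unwrap_or(&0)
}

// VTBX: out-of-range index keeps the destination element.
fn tbx(dest: u8, table: &[u8], idx: u8) -> u8 {
    *table.get(idx as usize).unwrap_or(&dest)
}

fn main() {
    let t = [10, 20, 30, 40];
    assert_eq!(tbl(&t, 2), 30);
    assert_eq!(tbl(&t, 9), 0);    // out of range -> zero
    assert_eq!(tbx(7, &t, 9), 7); // out of range -> keep dest
    println!("ok");
}
```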
- Unsigned and signed 8-bit integer matrix multiply-accumulate
- vabal_s8⚠neonSigned Absolute difference and Accumulate Long
- vabal_s16⚠neonSigned Absolute difference and Accumulate Long
- vabal_s32⚠neonSigned Absolute difference and Accumulate Long
- vabal_u8⚠neonUnsigned Absolute difference and Accumulate Long
- vabal_u16⚠neonUnsigned Absolute difference and Accumulate Long
- vabal_u32⚠neonUnsigned Absolute difference and Accumulate Long
- vabd_f32⚠neonFloating-point absolute difference between the arguments
- vabd_s8⚠neonAbsolute difference between the arguments
- vabd_s16⚠neonAbsolute difference between the arguments
- vabd_s32⚠neonAbsolute difference between the arguments
- vabd_u8⚠neonAbsolute difference between the arguments
- vabd_u16⚠neonAbsolute difference between the arguments
- vabd_u32⚠neonAbsolute difference between the arguments
- vabdl_s8⚠neonSigned Absolute difference Long
- vabdl_s16⚠neonSigned Absolute difference Long
- vabdl_s32⚠neonSigned Absolute difference Long
- vabdl_u8⚠neonUnsigned Absolute difference Long
- vabdl_u16⚠neonUnsigned Absolute difference Long
- vabdl_u32⚠neonUnsigned Absolute difference Long
- vabdq_f32⚠neonFloating-point absolute difference between the arguments
- vabdq_s8⚠neonAbsolute difference between the arguments
- vabdq_s16⚠neonAbsolute difference between the arguments
- vabdq_s32⚠neonAbsolute difference between the arguments
- vabdq_u8⚠neonAbsolute difference between the arguments
- vabdq_u16⚠neonAbsolute difference between the arguments
- vabdq_u32⚠neonAbsolute difference between the arguments
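The vabd family computes |a - b| per lane, and the "long" vabdl variants widen the result so the difference cannot wrap. A portable sketch of one unsigned lane (helper names are illustrative):

```rust
// Per-lane absolute difference, as in vabd_u8.
fn abd_u8(a: u8, b: u8) -> u8 {
    a.abs_diff(b)
}

// Widening absolute difference, as in vabdl_u8: the u16 result cannot wrap.
fn abdl_u8(a: u8, b: u8) -> u16 {
    a.abs_diff(b) as u16
}

fn main() {
    assert_eq!(abd_u8(3, 250), 247);
    assert_eq!(abdl_u8(0, 255), 255u16);
    println!("ok");
}
```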
- vabs_f32⚠neonFloating-point absolute value
- vabsq_f32⚠neonFloating-point absolute value
- vadd_p8⚠neonBitwise exclusive OR
- vadd_p16⚠neonBitwise exclusive OR
- vadd_p64⚠neonBitwise exclusive OR
- vaddq_p8⚠neonBitwise exclusive OR
- vaddq_p16⚠neonBitwise exclusive OR
- vaddq_p64⚠neonBitwise exclusive OR
- vaddq_p128⚠neonBitwise exclusive OR
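The "Bitwise exclusive OR" descriptions on the vadd_p* entries are not a mistake: addition of polynomials over GF(2) has no carries, so it reduces to XOR. A one-line portable illustration (helper name is made up):

```rust
// Polynomial addition over GF(2): coefficients add mod 2, i.e. bitwise XOR.
fn padd_p8(a: u8, b: u8) -> u8 {
    a ^ b
}

fn main() {
    // (x^3 + x) + (x + 1) = x^3 + 1  ->  0b1010 ^ 0b0011 = 0b1001
    assert_eq!(padd_p8(0b1010, 0b0011), 0b1001);
    println!("ok");
}
```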
- vand_s8⚠neonVector bitwise and
- vand_s16⚠neonVector bitwise and
- vand_s32⚠neonVector bitwise and
- vand_s64⚠neonVector bitwise and
- vand_u8⚠neonVector bitwise and
- vand_u16⚠neonVector bitwise and
- vand_u32⚠neonVector bitwise and
- vand_u64⚠neonVector bitwise and
- vandq_s8⚠neonVector bitwise and
- vandq_s16⚠neonVector bitwise and
- vandq_s32⚠neonVector bitwise and
- vandq_s64⚠neonVector bitwise and
- vandq_u8⚠neonVector bitwise and
- vandq_u16⚠neonVector bitwise and
- vandq_u32⚠neonVector bitwise and
- vandq_u64⚠neonVector bitwise and
- vcage_f32⚠neonFloating-point absolute compare greater than or equal
- vcageq_f32⚠neonFloating-point absolute compare greater than or equal
- vcagt_f32⚠neonFloating-point absolute compare greater than
- vcagtq_f32⚠neonFloating-point absolute compare greater than
- vcale_f32⚠neonFloating-point absolute compare less than or equal
- vcaleq_f32⚠neonFloating-point absolute compare less than or equal
- vcalt_f32⚠neonFloating-point absolute compare less than
- vcaltq_f32⚠neonFloating-point absolute compare less than
- vceq_f32⚠neonFloating-point compare equal
- vceq_p8⚠neonCompare bitwise Equal (vector)
- vceq_s8⚠neonCompare bitwise Equal (vector)
- vceq_s16⚠neonCompare bitwise Equal (vector)
- vceq_s32⚠neonCompare bitwise Equal (vector)
- vceq_u8⚠neonCompare bitwise Equal (vector)
- vceq_u16⚠neonCompare bitwise Equal (vector)
- vceq_u32⚠neonCompare bitwise Equal (vector)
- vceqq_f32⚠neonFloating-point compare equal
- vceqq_p8⚠neonCompare bitwise Equal (vector)
- vceqq_s8⚠neonCompare bitwise Equal (vector)
- vceqq_s16⚠neonCompare bitwise Equal (vector)
- vceqq_s32⚠neonCompare bitwise Equal (vector)
- vceqq_u8⚠neonCompare bitwise Equal (vector)
- vceqq_u16⚠neonCompare bitwise Equal (vector)
- vceqq_u32⚠neonCompare bitwise Equal (vector)
- vcge_f32⚠neonFloating-point compare greater than or equal
- vcge_s8⚠neonCompare signed greater than or equal
- vcge_s16⚠neonCompare signed greater than or equal
- vcge_s32⚠neonCompare signed greater than or equal
- vcge_u8⚠neonCompare unsigned greater than or equal
- vcge_u16⚠neonCompare unsigned greater than or equal
- vcge_u32⚠neonCompare unsigned greater than or equal
- vcgeq_f32⚠neonFloating-point compare greater than or equal
- vcgeq_s8⚠neonCompare signed greater than or equal
- vcgeq_s16⚠neonCompare signed greater than or equal
- vcgeq_s32⚠neonCompare signed greater than or equal
- vcgeq_u8⚠neonCompare unsigned greater than or equal
- vcgeq_u16⚠neonCompare unsigned greater than or equal
- vcgeq_u32⚠neonCompare unsigned greater than or equal
- vcgt_f32⚠neonFloating-point compare greater than
- vcgt_s8⚠neonCompare signed greater than
- vcgt_s16⚠neonCompare signed greater than
- vcgt_s32⚠neonCompare signed greater than
- vcgt_u8⚠neonCompare unsigned higher
- vcgt_u16⚠neonCompare unsigned higher
- vcgt_u32⚠neonCompare unsigned higher
- vcgtq_f32⚠neonFloating-point compare greater than
- vcgtq_s8⚠neonCompare signed greater than
- vcgtq_s16⚠neonCompare signed greater than
- vcgtq_s32⚠neonCompare signed greater than
- vcgtq_u8⚠neonCompare unsigned higher
- vcgtq_u16⚠neonCompare unsigned higher
- vcgtq_u32⚠neonCompare unsigned higher
- vcle_f32⚠neonFloating-point compare less than or equal
- vcle_s8⚠neonCompare signed less than or equal
- vcle_s16⚠neonCompare signed less than or equal
- vcle_s32⚠neonCompare signed less than or equal
- vcle_u8⚠neonCompare unsigned less than or equal
- vcle_u16⚠neonCompare unsigned less than or equal
- vcle_u32⚠neonCompare unsigned less than or equal
- vcleq_f32⚠neonFloating-point compare less than or equal
- vcleq_s8⚠neonCompare signed less than or equal
- vcleq_s16⚠neonCompare signed less than or equal
- vcleq_s32⚠neonCompare signed less than or equal
- vcleq_u8⚠neonCompare unsigned less than or equal
- vcleq_u16⚠neonCompare unsigned less than or equal
- vcleq_u32⚠neonCompare unsigned less than or equal
- vcls_s8⚠neonCount leading sign bits
- vcls_s16⚠neonCount leading sign bits
- vcls_s32⚠neonCount leading sign bits
- vcls_u8⚠neonCount leading sign bits
- vcls_u16⚠neonCount leading sign bits
- vcls_u32⚠neonCount leading sign bits
- vclsq_s8⚠neonCount leading sign bits
- vclsq_s16⚠neonCount leading sign bits
- vclsq_s32⚠neonCount leading sign bits
- vclsq_u8⚠neonCount leading sign bits
- vclsq_u16⚠neonCount leading sign bits
- vclsq_u32⚠neonCount leading sign bits
- vclt_f32⚠neonFloating-point compare less than
- vclt_s8⚠neonCompare signed less than
- vclt_s16⚠neonCompare signed less than
- vclt_s32⚠neonCompare signed less than
- vclt_u8⚠neonCompare unsigned less than
- vclt_u16⚠neonCompare unsigned less than
- vclt_u32⚠neonCompare unsigned less than
- vcltq_f32⚠neonFloating-point compare less than
- vcltq_s8⚠neonCompare signed less than
- vcltq_s16⚠neonCompare signed less than
- vcltq_s32⚠neonCompare signed less than
- vcltq_u8⚠neonCompare unsigned less than
- vcltq_u16⚠neonCompare unsigned less than
- vcltq_u32⚠neonCompare unsigned less than
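All of the vector comparisons above (vceq/vcge/vcgt/vcle/vclt and the vcag*/vcal* absolute variants) return a mask rather than a boolean: each lane becomes all ones when the comparison holds and all zeros otherwise, so the result composes with bitwise selects. A single-lane sketch with an illustrative helper name:

```rust
// NEON comparisons produce per-lane masks: all ones on true, all zeros on false.
fn cgt_mask_u32(a: u32, b: u32) -> u32 {
    if a > b { u32::MAX } else { 0 }
}

fn main() {
    assert_eq!(cgt_mask_u32(5, 3), 0xFFFF_FFFF);
    assert_eq!(cgt_mask_u32(3, 5), 0);
    // The mask form composes with bitwise select: (mask & a) | (!mask & b).
    let m = cgt_mask_u32(5, 3);
    assert_eq!((m & 5) | (!m & 3), 5);
    println!("ok");
}
```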
- vclz_s8⚠neonCount leading zero bits
- vclz_s16⚠neonCount leading zero bits
- vclz_s32⚠neonCount leading zero bits
- vclz_u8⚠neonCount leading zero bits
- vclz_u16⚠neonCount leading zero bits
- vclz_u32⚠neonCount leading zero bits
- vclzq_s8⚠neonCount leading zero bits
- vclzq_s16⚠neonCount leading zero bits
- vclzq_s32⚠neonCount leading zero bits
- vclzq_u8⚠neonCount leading zero bits
- vclzq_u16⚠neonCount leading zero bits
- vclzq_u32⚠neonCount leading zero bits
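"Count leading sign bits" (vcls) counts how many bits immediately below the sign bit equal the sign bit, while vclz counts leading zeros. A portable per-lane sketch, assuming the usual trick that `x ^ (x >> 1)` marks the first bit that differs from the sign (helper names are illustrative):

```rust
// vcls lane semantics: redundant sign bits below the sign bit.
// The top bit of x ^ (x >> 1) is always 0, so the subtraction cannot underflow.
fn cls_i32(x: i32) -> u32 {
    (x ^ (x >> 1)).leading_zeros() - 1
}

// vclz lane semantics: plain count of leading zero bits.
fn clz_u32(x: u32) -> u32 {
    x.leading_zeros()
}

fn main() {
    assert_eq!(cls_i32(0), 31);
    assert_eq!(cls_i32(-1), 31);
    assert_eq!(cls_i32(1), 30);
    assert_eq!(cls_i32(i32::MIN), 0);
    assert_eq!(clz_u32(1), 31);
    println!("ok");
}
```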
- vcombine_f32⚠neonVector combine
- vcombine_p8⚠neonVector combine
- vcombine_p16⚠neonVector combine
- vcreate_f32⚠neonInsert vector element from another vector element
- vcreate_p8⚠neonInsert vector element from another vector element
- vcreate_p16⚠neonInsert vector element from another vector element
- vcreate_p64⚠neon,aesInsert vector element from another vector element
- vcreate_s8⚠neonInsert vector element from another vector element
- vcreate_s16⚠neonInsert vector element from another vector element
- vcreate_s32⚠neonInsert vector element from another vector element
- vcreate_s64⚠neonInsert vector element from another vector element
- vcreate_u8⚠neonInsert vector element from another vector element
- vcreate_u16⚠neonInsert vector element from another vector element
- vcreate_u32⚠neonInsert vector element from another vector element
- vcreate_u64⚠neonInsert vector element from another vector element
- vcvt_f32_s32⚠neonFixed-point convert to floating-point
- vcvt_f32_u32⚠neonFixed-point convert to floating-point
- vcvt_s32_f32⚠neonFloating-point convert to signed fixed-point, rounding toward zero
- vcvt_u32_f32⚠neonFloating-point convert to unsigned fixed-point, rounding toward zero
- vcvtq_f32_s32⚠neonFixed-point convert to floating-point
- vcvtq_f32_u32⚠neonFixed-point convert to floating-point
- vcvtq_s32_f32⚠neonFloating-point convert to signed fixed-point, rounding toward zero
- vcvtq_u32_f32⚠neonFloating-point convert to unsigned fixed-point, rounding toward zero
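The float-to-integer conversions above round toward zero and saturate at the integer range limits. Rust's `as` cast on floats has had the same truncating, saturating behavior since 1.45, so a portable per-lane sketch is simply a cast (helper name is illustrative):

```rust
// Sketch of vcvt_s32_f32 lane semantics: convert toward zero, saturating.
fn cvt_s32_f32(x: f32) -> i32 {
    x as i32
}

fn main() {
    assert_eq!(cvt_s32_f32(1.9), 1);    // truncates toward zero
    assert_eq!(cvt_s32_f32(-1.9), -1);
    assert_eq!(cvt_s32_f32(1e12), i32::MAX); // saturates at the range limit
    println!("ok");
}
```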
- vdup_lane_f32⚠neonSet all vector lanes to the same value
- vdup_lane_p8⚠neonSet all vector lanes to the same value
- vdup_lane_p16⚠neonSet all vector lanes to the same value
- vdup_lane_s8⚠neonSet all vector lanes to the same value
- vdup_lane_s16⚠neonSet all vector lanes to the same value
- vdup_lane_s32⚠neonSet all vector lanes to the same value
- vdup_lane_s64⚠neonSet all vector lanes to the same value
- vdup_lane_u8⚠neonSet all vector lanes to the same value
- vdup_lane_u16⚠neonSet all vector lanes to the same value
- vdup_lane_u32⚠neonSet all vector lanes to the same value
- vdup_lane_u64⚠neonSet all vector lanes to the same value
- vdup_laneq_f32⚠neonSet all vector lanes to the same value
- vdup_laneq_p8⚠neonSet all vector lanes to the same value
- vdup_laneq_p16⚠neonSet all vector lanes to the same value
- vdup_laneq_s8⚠neonSet all vector lanes to the same value
- vdup_laneq_s16⚠neonSet all vector lanes to the same value
- vdup_laneq_s32⚠neonSet all vector lanes to the same value
- vdup_laneq_s64⚠neonSet all vector lanes to the same value
- vdup_laneq_u8⚠neonSet all vector lanes to the same value
- vdup_laneq_u16⚠neonSet all vector lanes to the same value
- vdup_laneq_u32⚠neonSet all vector lanes to the same value
- vdup_laneq_u64⚠neonSet all vector lanes to the same value
- vdupq_lane_f32⚠neonSet all vector lanes to the same value
- vdupq_lane_p8⚠neonSet all vector lanes to the same value
- vdupq_lane_p16⚠neonSet all vector lanes to the same value
- vdupq_lane_s8⚠neonSet all vector lanes to the same value
- vdupq_lane_s16⚠neonSet all vector lanes to the same value
- vdupq_lane_s32⚠neonSet all vector lanes to the same value
- vdupq_lane_s64⚠neonSet all vector lanes to the same value
- vdupq_lane_u8⚠neonSet all vector lanes to the same value
- vdupq_lane_u16⚠neonSet all vector lanes to the same value
- vdupq_lane_u32⚠neonSet all vector lanes to the same value
- vdupq_lane_u64⚠neonSet all vector lanes to the same value
- vdupq_laneq_f32⚠neonSet all vector lanes to the same value
- vdupq_laneq_p8⚠neonSet all vector lanes to the same value
- vdupq_laneq_p16⚠neonSet all vector lanes to the same value
- vdupq_laneq_s8⚠neonSet all vector lanes to the same value
- vdupq_laneq_s16⚠neonSet all vector lanes to the same value
- vdupq_laneq_s32⚠neonSet all vector lanes to the same value
- vdupq_laneq_s64⚠neonSet all vector lanes to the same value
- vdupq_laneq_u8⚠neonSet all vector lanes to the same value
- vdupq_laneq_u16⚠neonSet all vector lanes to the same value
- vdupq_laneq_u32⚠neonSet all vector lanes to the same value
- vdupq_laneq_u64⚠neonSet all vector lanes to the same value
- veor_s8⚠neonVector bitwise exclusive or (vector)
- veor_s16⚠neonVector bitwise exclusive or (vector)
- veor_s32⚠neonVector bitwise exclusive or (vector)
- veor_s64⚠neonVector bitwise exclusive or (vector)
- veor_u8⚠neonVector bitwise exclusive or (vector)
- veor_u16⚠neonVector bitwise exclusive or (vector)
- veor_u32⚠neonVector bitwise exclusive or (vector)
- veor_u64⚠neonVector bitwise exclusive or (vector)
- veorq_s8⚠neonVector bitwise exclusive or (vector)
- veorq_s16⚠neonVector bitwise exclusive or (vector)
- veorq_s32⚠neonVector bitwise exclusive or (vector)
- veorq_s64⚠neonVector bitwise exclusive or (vector)
- veorq_u8⚠neonVector bitwise exclusive or (vector)
- veorq_u16⚠neonVector bitwise exclusive or (vector)
- veorq_u32⚠neonVector bitwise exclusive or (vector)
- veorq_u64⚠neonVector bitwise exclusive or (vector)
- vext_f32⚠neonExtract vector from pair of vectors
- vext_p8⚠neonExtract vector from pair of vectors
- vext_p16⚠neonExtract vector from pair of vectors
- vext_s8⚠neonExtract vector from pair of vectors
- vext_s16⚠neonExtract vector from pair of vectors
- vext_s32⚠neonExtract vector from pair of vectors
- vext_u8⚠neonExtract vector from pair of vectors
- vext_u16⚠neonExtract vector from pair of vectors
- vext_u32⚠neonExtract vector from pair of vectors
- vextq_f32⚠neonExtract vector from pair of vectors
- vextq_p8⚠neonExtract vector from pair of vectors
- vextq_p16⚠neonExtract vector from pair of vectors
- vextq_s8⚠neonExtract vector from pair of vectors
- vextq_s16⚠neonExtract vector from pair of vectors
- vextq_s32⚠neonExtract vector from pair of vectors
- vextq_s64⚠neonExtract vector from pair of vectors
- vextq_u8⚠neonExtract vector from pair of vectors
- vextq_u16⚠neonExtract vector from pair of vectors
- vextq_u32⚠neonExtract vector from pair of vectors
- vextq_u64⚠neonExtract vector from pair of vectors
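"Extract vector from pair of vectors" (vext) conceptually concatenates its two operands and takes a full-width window starting at an immediate lane offset. A portable sketch for eight u8 lanes, with an illustrative helper name:

```rust
// Sketch of vext_u8: concatenate a and b, then take eight consecutive
// lanes starting at offset n (0..=7).
fn ext_u8(a: [u8; 8], b: [u8; 8], n: usize) -> [u8; 8] {
    let mut cat = [0u8; 16];
    cat[..8].copy_from_slice(&a);
    cat[8..].copy_from_slice(&b);
    let mut out = [0u8; 8];
    out.copy_from_slice(&cat[n..n + 8]);
    out
}

fn main() {
    let a = [0, 1, 2, 3, 4, 5, 6, 7];
    let b = [8, 9, 10, 11, 12, 13, 14, 15];
    assert_eq!(ext_u8(a, b, 3), [3, 4, 5, 6, 7, 8, 9, 10]);
    println!("ok");
}
```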
- vfma_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfma_n_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfmaq_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfmaq_n_f32⚠neonFloating-point fused multiply-add to accumulator (vector)
- vfms_f32⚠neonFloating-point fused multiply-subtract from accumulator
- vfms_n_f32⚠neonFloating-point fused multiply-subtract from accumulator (vector)
- vfmsq_f32⚠neonFloating-point fused multiply-subtract from accumulator
- vfmsq_n_f32⚠neonFloating-point fused multiply-subtract from accumulator (vector)
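The fused variants above compute a + (b * c) per lane with a single rounding step, unlike a separate multiply followed by an add. Rust's `f32::mul_add` exposes the same fused operation portably (helper name is illustrative):

```rust
// Sketch of the vfma lane semantics: a + (b * c) with one rounding.
fn fma_lane(a: f32, b: f32, c: f32) -> f32 {
    b.mul_add(c, a)
}

fn main() {
    assert_eq!(fma_lane(1.0, 2.0, 3.0), 7.0);
    println!("ok");
}
```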
- vhadd_s8⚠neonHalving add
- vhadd_s16⚠neonHalving add
- vhadd_s32⚠neonHalving add
- vhadd_u8⚠neonHalving add
- vhadd_u16⚠neonHalving add
- vhadd_u32⚠neonHalving add
- vhaddq_s8⚠neonHalving add
- vhaddq_s16⚠neonHalving add
- vhaddq_s32⚠neonHalving add
- vhaddq_u8⚠neonHalving add
- vhaddq_u16⚠neonHalving add
- vhaddq_u32⚠neonHalving add
- vhsub_s8⚠neonSigned halving subtract
- vhsub_s16⚠neonSigned halving subtract
- vhsub_s32⚠neonSigned halving subtract
- vhsub_u8⚠neonUnsigned halving subtract
- vhsub_u16⚠neonUnsigned halving subtract
- vhsub_u32⚠neonUnsigned halving subtract
- vhsubq_s8⚠neonSigned halving subtract
- vhsubq_s16⚠neonSigned halving subtract
- vhsubq_s32⚠neonSigned halving subtract
- vhsubq_u8⚠neonUnsigned halving subtract
- vhsubq_u16⚠neonUnsigned halving subtract
- vhsubq_u32⚠neonUnsigned halving subtract
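The halving operations compute (a + b) >> 1 or (a - b) >> 1 in a wider intermediate so the lane type cannot overflow. A portable sketch, using the carry-free identity (a & b) + ((a ^ b) >> 1) for the add (helper names are illustrative):

```rust
// Halving add without overflow, as in vhadd_u8.
fn hadd_u8(a: u8, b: u8) -> u8 {
    (a & b) + ((a ^ b) >> 1)
}

// Halving subtract via a wide intermediate, as in vhsub_s8; the arithmetic
// shift floors the result.
fn hsub_i8(a: i8, b: i8) -> i8 {
    ((a as i16 - b as i16) >> 1) as i8
}

fn main() {
    assert_eq!(hadd_u8(255, 255), 255); // would overflow a naive (a + b) / 2
    assert_eq!(hadd_u8(1, 2), 1);
    assert_eq!(hsub_i8(-128, 127), -128);
    println!("ok");
}
```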
- vld1_f32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_f32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_f32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_p64_x2⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1_p64_x3⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1_p64_x4⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1_s8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_s64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1_u64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_f32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_f32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_f32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p64_x2⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p64_x3⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1q_p64_x4⚠neon,aesLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_s64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u8_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u8_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u8_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u16_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u16_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u16_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u32_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u32_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u32_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u64_x2⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u64_x3⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld1q_u64_x4⚠neonLoad multiple single-element structures to one, two, three, or four registers
- vld2_dup_p8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_p16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_p64⚠neon,aesLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_dup_u64⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2_lane_p8⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_p16⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_u8⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_u16⚠neonLoad multiple 2-element structures to two registers
- vld2_lane_u32⚠neonLoad multiple 2-element structures to two registers
- vld2_p8⚠neonLoad multiple 2-element structures to two registers
- vld2_p16⚠neonLoad multiple 2-element structures to two registers
- vld2_p64⚠neon,aesLoad multiple 2-element structures to two registers
- vld2_u8⚠neonLoad multiple 2-element structures to two registers
- vld2_u16⚠neonLoad multiple 2-element structures to two registers
- vld2_u32⚠neonLoad multiple 2-element structures to two registers
- vld2_u64⚠neonLoad multiple 2-element structures to two registers
- vld2q_dup_p8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_p16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_u8⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_u16⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_dup_u32⚠neonLoad single 2-element structure and replicate to all lanes of two registers
- vld2q_lane_p16⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_u16⚠neonLoad multiple 2-element structures to two registers
- vld2q_lane_u32⚠neonLoad multiple 2-element structures to two registers
- vld2q_p8⚠neonLoad multiple 2-element structures to two registers
- vld2q_p16⚠neonLoad multiple 2-element structures to two registers
- vld2q_u8⚠neonLoad multiple 2-element structures to two registers
- vld2q_u16⚠neonLoad multiple 2-element structures to two registers
- vld2q_u32⚠neonLoad multiple 2-element structures to two registers
- vld3_dup_p8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_p16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_p64⚠neon,aesLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_dup_u64⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3_lane_p8⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_p16⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_u8⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_u16⚠neonLoad multiple 3-element structures to three registers
- vld3_lane_u32⚠neonLoad multiple 3-element structures to three registers
- vld3_p8⚠neonLoad multiple 3-element structures to three registers
- vld3_p16⚠neonLoad multiple 3-element structures to three registers
- vld3_p64⚠neon,aesLoad multiple 3-element structures to three registers
- vld3_u8⚠neonLoad multiple 3-element structures to three registers
- vld3_u16⚠neonLoad multiple 3-element structures to three registers
- vld3_u32⚠neonLoad multiple 3-element structures to three registers
- vld3_u64⚠neonLoad multiple 3-element structures to three registers
- vld3q_dup_p8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_p16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_u8⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_u16⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_dup_u32⚠neonLoad single 3-element structure and replicate to all lanes of three registers
- vld3q_lane_p16⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_u16⚠neonLoad multiple 3-element structures to three registers
- vld3q_lane_u32⚠neonLoad multiple 3-element structures to three registers
- vld3q_p8⚠neonLoad multiple 3-element structures to three registers
- vld3q_p16⚠neonLoad multiple 3-element structures to three registers
- vld3q_u8⚠neonLoad multiple 3-element structures to three registers
- vld3q_u16⚠neonLoad multiple 3-element structures to three registers
- vld3q_u32⚠neonLoad multiple 3-element structures to three registers
- vld4_dup_p8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_p16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_p64⚠neon,aesLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_dup_u64⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4_lane_p8⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_p16⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_u8⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_u16⚠neonLoad multiple 4-element structures to four registers
- vld4_lane_u32⚠neonLoad multiple 4-element structures to four registers
- vld4_p8⚠neonLoad multiple 4-element structures to four registers
- vld4_p16⚠neonLoad multiple 4-element structures to four registers
- vld4_p64⚠neon,aesLoad multiple 4-element structures to four registers
- vld4_u8⚠neonLoad multiple 4-element structures to four registers
- vld4_u16⚠neonLoad multiple 4-element structures to four registers
- vld4_u32⚠neonLoad multiple 4-element structures to four registers
- vld4_u64⚠neonLoad multiple 4-element structures to four registers
- vld4q_dup_p8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_p16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_u8⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_u16⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_dup_u32⚠neonLoad single 4-element structure and replicate to all lanes of four registers
- vld4q_lane_p16⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_u16⚠neonLoad multiple 4-element structures to four registers
- vld4q_lane_u32⚠neonLoad multiple 4-element structures to four registers
- vld4q_p8⚠neonLoad multiple 4-element structures to four registers
- vld4q_p16⚠neonLoad multiple 4-element structures to four registers
- vld4q_u8⚠neonLoad multiple 4-element structures to four registers
- vld4q_u16⚠neonLoad multiple 4-element structures to four registers
- vld4q_u32⚠neonLoad multiple 4-element structures to four registers
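The structure loads above do more than copy memory: vld2/vld3/vld4 deinterleave consecutive N-element structures into N separate vectors. A portable sketch of the 2-element case, modeling vectors as arrays (helper name is illustrative):

```rust
// Sketch of vld2 semantics: load interleaved pairs and deinterleave them
// into two separate vectors.
fn ld2_u8(data: &[u8; 8]) -> ([u8; 4], [u8; 4]) {
    let mut x = [0u8; 4];
    let mut y = [0u8; 4];
    for i in 0..4 {
        x[i] = data[2 * i];     // first member of each pair
        y[i] = data[2 * i + 1]; // second member of each pair
    }
    (x, y)
}

fn main() {
    let (x, y) = ld2_u8(&[1, 2, 3, 4, 5, 6, 7, 8]);
    assert_eq!(x, [1, 3, 5, 7]);
    assert_eq!(y, [2, 4, 6, 8]);
    println!("ok");
}
```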
- vmax_f32⚠neonMaximum (vector)
- vmax_s8⚠neonMaximum (vector)
- vmax_s16⚠neonMaximum (vector)
- vmax_s32⚠neonMaximum (vector)
- vmax_u8⚠neonMaximum (vector)
- vmax_u16⚠neonMaximum (vector)
- vmax_u32⚠neonMaximum (vector)
- vmaxnm_f32⚠neonFloating-point Maximum Number (vector)
- vmaxnmq_f32⚠neonFloating-point Maximum Number (vector)
- vmaxq_f32⚠neonMaximum (vector)
- vmaxq_s8⚠neonMaximum (vector)
- vmaxq_s16⚠neonMaximum (vector)
- vmaxq_s32⚠neonMaximum (vector)
- vmaxq_u8⚠neonMaximum (vector)
- vmaxq_u16⚠neonMaximum (vector)
- vmaxq_u32⚠neonMaximum (vector)
- vmin_f32⚠neonMinimum (vector)
- vmin_s8⚠neonMinimum (vector)
- vmin_s16⚠neonMinimum (vector)
- vmin_s32⚠neonMinimum (vector)
- vmin_u8⚠neonMinimum (vector)
- vmin_u16⚠neonMinimum (vector)
- vmin_u32⚠neonMinimum (vector)
- vminnm_f32⚠neonFloating-point Minimum Number (vector)
- vminnmq_f32⚠neonFloating-point Minimum Number (vector)
- vminq_f32⚠neonMinimum (vector)
- vminq_s8⚠neonMinimum (vector)
- vminq_s16⚠neonMinimum (vector)
- vminq_s32⚠neonMinimum (vector)
- vminq_u8⚠neonMinimum (vector)
- vminq_u16⚠neonMinimum (vector)
- vminq_u32⚠neonMinimum (vector)
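The distinction between `vmax`/`vmin` and the `vmaxnm`/`vminnm` variants is NaN handling: the latter follow IEEE 754 maxNum/minNum, preferring the numeric operand when exactly one lane is NaN. A hedged scalar sketch of both behaviours (illustrative model functions, not the intrinsics):

```rust
// Hypothetical per-lane models, not the NEON intrinsics.
fn vmax_model(a: [f32; 2], b: [f32; 2]) -> [f32; 2] {
    // plain per-lane maximum via comparison
    core::array::from_fn(|i| if a[i] > b[i] { a[i] } else { b[i] })
}

fn vmaxnm_model(a: [f32; 2], b: [f32; 2]) -> [f32; 2] {
    // IEEE 754 maxNum: when exactly one lane is NaN, the numeric operand wins
    core::array::from_fn(|i| if b[i].is_nan() || a[i] > b[i] { a[i] } else { b[i] })
}
```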
- vmla_f32⚠neonFloating-point multiply-add to accumulator
- vmla_lane_f32⚠neonVector multiply accumulate with scalar
- vmla_lane_s16⚠neonVector multiply accumulate with scalar
- vmla_lane_s32⚠neonVector multiply accumulate with scalar
- vmla_lane_u16⚠neonVector multiply accumulate with scalar
- vmla_lane_u32⚠neonVector multiply accumulate with scalar
- vmla_laneq_f32⚠neonVector multiply accumulate with scalar
- vmla_laneq_s16⚠neonVector multiply accumulate with scalar
- vmla_laneq_s32⚠neonVector multiply accumulate with scalar
- vmla_laneq_u16⚠neonVector multiply accumulate with scalar
- vmla_laneq_u32⚠neonVector multiply accumulate with scalar
- vmla_n_f32⚠neonVector multiply accumulate with scalar
- vmla_n_s16⚠neonVector multiply accumulate with scalar
- vmla_n_s32⚠neonVector multiply accumulate with scalar
- vmla_n_u16⚠neonVector multiply accumulate with scalar
- vmla_n_u32⚠neonVector multiply accumulate with scalar
- vmla_s8⚠neonMultiply-add to accumulator
- vmla_s16⚠neonMultiply-add to accumulator
- vmla_s32⚠neonMultiply-add to accumulator
- vmla_u8⚠neonMultiply-add to accumulator
- vmla_u16⚠neonMultiply-add to accumulator
- vmla_u32⚠neonMultiply-add to accumulator
- vmlal_lane_s16⚠neonVector widening multiply accumulate with scalar
- vmlal_lane_s32⚠neonVector widening multiply accumulate with scalar
- vmlal_lane_u16⚠neonVector widening multiply accumulate with scalar
- vmlal_lane_u32⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_s16⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_s32⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_u16⚠neonVector widening multiply accumulate with scalar
- vmlal_laneq_u32⚠neonVector widening multiply accumulate with scalar
- vmlal_n_s16⚠neonVector widening multiply accumulate with scalar
- vmlal_n_s32⚠neonVector widening multiply accumulate with scalar
- vmlal_n_u16⚠neonVector widening multiply accumulate with scalar
- vmlal_n_u32⚠neonVector widening multiply accumulate with scalar
- vmlal_s8⚠neonSigned multiply-add long
- vmlal_s16⚠neonSigned multiply-add long
- vmlal_s32⚠neonSigned multiply-add long
- vmlal_u8⚠neonUnsigned multiply-add long
- vmlal_u16⚠neonUnsigned multiply-add long
- vmlal_u32⚠neonUnsigned multiply-add long
- vmlaq_f32⚠neonFloating-point multiply-add to accumulator
- vmlaq_lane_f32⚠neonVector multiply accumulate with scalar
- vmlaq_lane_s16⚠neonVector multiply accumulate with scalar
- vmlaq_lane_s32⚠neonVector multiply accumulate with scalar
- vmlaq_lane_u16⚠neonVector multiply accumulate with scalar
- vmlaq_lane_u32⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_f32⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_s16⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_s32⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_u16⚠neonVector multiply accumulate with scalar
- vmlaq_laneq_u32⚠neonVector multiply accumulate with scalar
- vmlaq_n_f32⚠neonVector multiply accumulate with scalar
- vmlaq_n_s16⚠neonVector multiply accumulate with scalar
- vmlaq_n_s32⚠neonVector multiply accumulate with scalar
- vmlaq_n_u16⚠neonVector multiply accumulate with scalar
- vmlaq_n_u32⚠neonVector multiply accumulate with scalar
- vmlaq_s8⚠neonMultiply-add to accumulator
- vmlaq_s16⚠neonMultiply-add to accumulator
- vmlaq_s32⚠neonMultiply-add to accumulator
- vmlaq_u8⚠neonMultiply-add to accumulator
- vmlaq_u16⚠neonMultiply-add to accumulator
- vmlaq_u32⚠neonMultiply-add to accumulator
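All of the `vmla` forms compute `acc + a * b` per lane; the `_n_` variants broadcast a scalar multiplier and the `_lane_`/`_laneq_` variants pick the multiplier from one lane of a vector. A portable scalar sketch of the integer semantics (hypothetical model names, not the intrinsics):

```rust
// Hypothetical scalar models of the vmla integer semantics.
fn vmla_model(acc: [i16; 4], a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    // acc + a * b per lane (wrapping, matching the modular integer forms)
    core::array::from_fn(|i| acc[i].wrapping_add(a[i].wrapping_mul(b[i])))
}

fn vmla_n_model(acc: [i16; 4], a: [i16; 4], n: i16) -> [i16; 4] {
    // the _n_ variants broadcast one scalar across all multiplier lanes
    vmla_model(acc, a, [n; 4])
}
```

The `vmls` family below is the mirror image, subtracting the product from the accumulator instead of adding it.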
- vmls_f32⚠neonFloating-point multiply-subtract from accumulator
- vmls_lane_f32⚠neonVector multiply subtract with scalar
- vmls_lane_s16⚠neonVector multiply subtract with scalar
- vmls_lane_s32⚠neonVector multiply subtract with scalar
- vmls_lane_u16⚠neonVector multiply subtract with scalar
- vmls_lane_u32⚠neonVector multiply subtract with scalar
- vmls_laneq_f32⚠neonVector multiply subtract with scalar
- vmls_laneq_s16⚠neonVector multiply subtract with scalar
- vmls_laneq_s32⚠neonVector multiply subtract with scalar
- vmls_laneq_u16⚠neonVector multiply subtract with scalar
- vmls_laneq_u32⚠neonVector multiply subtract with scalar
- vmls_n_f32⚠neonVector multiply subtract with scalar
- vmls_n_s16⚠neonVector multiply subtract with scalar
- vmls_n_s32⚠neonVector multiply subtract with scalar
- vmls_n_u16⚠neonVector multiply subtract with scalar
- vmls_n_u32⚠neonVector multiply subtract with scalar
- vmls_s8⚠neonMultiply-subtract from accumulator
- vmls_s16⚠neonMultiply-subtract from accumulator
- vmls_s32⚠neonMultiply-subtract from accumulator
- vmls_u8⚠neonMultiply-subtract from accumulator
- vmls_u16⚠neonMultiply-subtract from accumulator
- vmls_u32⚠neonMultiply-subtract from accumulator
- vmlsl_lane_s16⚠neonVector widening multiply subtract with scalar
- vmlsl_lane_s32⚠neonVector widening multiply subtract with scalar
- vmlsl_lane_u16⚠neonVector widening multiply subtract with scalar
- vmlsl_lane_u32⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_s16⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_s32⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_u16⚠neonVector widening multiply subtract with scalar
- vmlsl_laneq_u32⚠neonVector widening multiply subtract with scalar
- vmlsl_n_s16⚠neonVector widening multiply subtract with scalar
- vmlsl_n_s32⚠neonVector widening multiply subtract with scalar
- vmlsl_n_u16⚠neonVector widening multiply subtract with scalar
- vmlsl_n_u32⚠neonVector widening multiply subtract with scalar
- vmlsl_s8⚠neonSigned multiply-subtract long
- vmlsl_s16⚠neonSigned multiply-subtract long
- vmlsl_s32⚠neonSigned multiply-subtract long
- vmlsl_u8⚠neonUnsigned multiply-subtract long
- vmlsl_u16⚠neonUnsigned multiply-subtract long
- vmlsl_u32⚠neonUnsigned multiply-subtract long
- vmlsq_f32⚠neonFloating-point multiply-subtract from accumulator
- vmlsq_lane_f32⚠neonVector multiply subtract with scalar
- vmlsq_lane_s16⚠neonVector multiply subtract with scalar
- vmlsq_lane_s32⚠neonVector multiply subtract with scalar
- vmlsq_lane_u16⚠neonVector multiply subtract with scalar
- vmlsq_lane_u32⚠neonVector multiply subtract with scalar
- vmlsq_laneq_f32⚠neonVector multiply subtract with scalar
- vmlsq_laneq_s16⚠neonVector multiply subtract with scalar
- vmlsq_laneq_s32⚠neonVector multiply subtract with scalar
- vmlsq_laneq_u16⚠neonVector multiply subtract with scalar
- vmlsq_laneq_u32⚠neonVector multiply subtract with scalar
- vmlsq_n_f32⚠neonVector multiply subtract with scalar
- vmlsq_n_s16⚠neonVector multiply subtract with scalar
- vmlsq_n_s32⚠neonVector multiply subtract with scalar
- vmlsq_n_u16⚠neonVector multiply subtract with scalar
- vmlsq_n_u32⚠neonVector multiply subtract with scalar
- vmlsq_s8⚠neonMultiply-subtract from accumulator
- vmlsq_s16⚠neonMultiply-subtract from accumulator
- vmlsq_s32⚠neonMultiply-subtract from accumulator
- vmlsq_u8⚠neonMultiply-subtract from accumulator
- vmlsq_u16⚠neonMultiply-subtract from accumulator
- vmlsq_u32⚠neonMultiply-subtract from accumulator
- vmul_f32⚠neonMultiply
- vmul_lane_f32⚠neonFloating-point multiply
- vmul_lane_s16⚠neonMultiply
- vmul_lane_s32⚠neonMultiply
- vmul_lane_u16⚠neonMultiply
- vmul_lane_u32⚠neonMultiply
- vmul_laneq_f32⚠neonFloating-point multiply
- vmul_laneq_s16⚠neonMultiply
- vmul_laneq_s32⚠neonMultiply
- vmul_laneq_u16⚠neonMultiply
- vmul_laneq_u32⚠neonMultiply
- vmul_n_f32⚠neonVector multiply by scalar
- vmul_n_s16⚠neonVector multiply by scalar
- vmul_n_s32⚠neonVector multiply by scalar
- vmul_n_u16⚠neonVector multiply by scalar
- vmul_n_u32⚠neonVector multiply by scalar
- vmul_p8⚠neonPolynomial multiply
- vmul_s8⚠neonMultiply
- vmul_s16⚠neonMultiply
- vmul_s32⚠neonMultiply
- vmul_u8⚠neonMultiply
- vmul_u16⚠neonMultiply
- vmul_u32⚠neonMultiply
- vmull_lane_s16⚠neonVector long multiply by scalar
- vmull_lane_s32⚠neonVector long multiply by scalar
- vmull_lane_u16⚠neonVector long multiply by scalar
- vmull_lane_u32⚠neonVector long multiply by scalar
- vmull_laneq_s16⚠neonVector long multiply by scalar
- vmull_laneq_s32⚠neonVector long multiply by scalar
- vmull_laneq_u16⚠neonVector long multiply by scalar
- vmull_laneq_u32⚠neonVector long multiply by scalar
- vmull_n_s16⚠neonVector long multiply with scalar
- vmull_n_s32⚠neonVector long multiply with scalar
- vmull_n_u16⚠neonVector long multiply with scalar
- vmull_n_u32⚠neonVector long multiply with scalar
- vmull_p8⚠neonPolynomial multiply long
- vmull_s8⚠neonSigned multiply long
- vmull_s16⚠neonSigned multiply long
- vmull_s32⚠neonSigned multiply long
- vmull_u8⚠neonUnsigned multiply long
- vmull_u16⚠neonUnsigned multiply long
- vmull_u32⚠neonUnsigned multiply long
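The long (`vmull`) forms widen each lane before multiplying, so the full double-width product is retained rather than truncated. Sketched in portable scalar Rust (an illustrative model, not the intrinsic):

```rust
// Hypothetical scalar model of a widening multiply: i16 lanes in, i32 lanes out.
fn vmull_s16_model(a: [i16; 4], b: [i16; 4]) -> [i32; 4] {
    // widen each lane before multiplying, so the full product is kept
    core::array::from_fn(|i| a[i] as i32 * b[i] as i32)
}
```

For example, 300 × 300 = 90000 overflows an `i16` lane but is represented exactly in the widened `i32` result.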
- vmulq_f32⚠neonMultiply
- vmulq_lane_f32⚠neonFloating-point multiply
- vmulq_lane_s16⚠neonMultiply
- vmulq_lane_s32⚠neonMultiply
- vmulq_lane_u16⚠neonMultiply
- vmulq_lane_u32⚠neonMultiply
- vmulq_laneq_f32⚠neonFloating-point multiply
- vmulq_laneq_s16⚠neonMultiply
- vmulq_laneq_s32⚠neonMultiply
- vmulq_laneq_u16⚠neonMultiply
- vmulq_laneq_u32⚠neonMultiply
- vmulq_n_f32⚠neonVector multiply by scalar
- vmulq_n_s16⚠neonVector multiply by scalar
- vmulq_n_s32⚠neonVector multiply by scalar
- vmulq_n_u16⚠neonVector multiply by scalar
- vmulq_n_u32⚠neonVector multiply by scalar
- vmulq_p8⚠neonPolynomial multiply
- vmulq_s8⚠neonMultiply
- vmulq_s16⚠neonMultiply
- vmulq_s32⚠neonMultiply
- vmulq_u8⚠neonMultiply
- vmulq_u16⚠neonMultiply
- vmulq_u32⚠neonMultiply
- vneg_f32⚠neonNegate
- vneg_s8⚠neonNegate
- vneg_s16⚠neonNegate
- vneg_s32⚠neonNegate
- vnegq_f32⚠neonNegate
- vnegq_s8⚠neonNegate
- vnegq_s16⚠neonNegate
- vnegq_s32⚠neonNegate
- vorr_s8⚠neonVector bitwise or (immediate, inclusive)
- vorr_s16⚠neonVector bitwise or (immediate, inclusive)
- vorr_s32⚠neonVector bitwise or (immediate, inclusive)
- vorr_s64⚠neonVector bitwise or (immediate, inclusive)
- vorr_u8⚠neonVector bitwise or (immediate, inclusive)
- vorr_u16⚠neonVector bitwise or (immediate, inclusive)
- vorr_u32⚠neonVector bitwise or (immediate, inclusive)
- vorr_u64⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s8⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s16⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s32⚠neonVector bitwise or (immediate, inclusive)
- vorrq_s64⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u8⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u16⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u32⚠neonVector bitwise or (immediate, inclusive)
- vorrq_u64⚠neonVector bitwise or (immediate, inclusive)
- vpadd_f32⚠neonFloating-point add pairwise
- vqabs_s8⚠neonSigned saturating absolute value
- vqabs_s16⚠neonSigned saturating absolute value
- vqabs_s32⚠neonSigned saturating absolute value
- vqabsq_s8⚠neonSigned saturating absolute value
- vqabsq_s16⚠neonSigned saturating absolute value
- vqabsq_s32⚠neonSigned saturating absolute value
- vqadd_s8⚠neonSaturating add
- vqadd_s16⚠neonSaturating add
- vqadd_s32⚠neonSaturating add
- vqadd_s64⚠neonSaturating add
- vqadd_u8⚠neonSaturating add
- vqadd_u16⚠neonSaturating add
- vqadd_u32⚠neonSaturating add
- vqadd_u64⚠neonSaturating add
- vqaddq_s8⚠neonSaturating add
- vqaddq_s16⚠neonSaturating add
- vqaddq_s32⚠neonSaturating add
- vqaddq_s64⚠neonSaturating add
- vqaddq_u8⚠neonSaturating add
- vqaddq_u16⚠neonSaturating add
- vqaddq_u32⚠neonSaturating add
- vqaddq_u64⚠neonSaturating add
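Saturating (`vq*`) arithmetic clamps at the lane type's bounds instead of wrapping. The per-lane semantics maps directly onto Rust's own saturating integer methods, shown here as a hypothetical scalar model rather than the intrinsic:

```rust
// Hypothetical scalar model of vqadd: clamp at the type bounds instead of wrapping.
fn vqadd_s8_model(a: [i8; 8], b: [i8; 8]) -> [i8; 8] {
    core::array::from_fn(|i| a[i].saturating_add(b[i]))
}
```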
- vqdmlal_lane_s16⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_lane_s32⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_n_s16⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_n_s32⚠neonVector widening saturating doubling multiply accumulate with scalar
- vqdmlal_s16⚠neonSigned saturating doubling multiply-add long
- vqdmlal_s32⚠neonSigned saturating doubling multiply-add long
- vqdmlsl_lane_s16⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_lane_s32⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_n_s16⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_n_s32⚠neonVector widening saturating doubling multiply subtract with scalar
- vqdmlsl_s16⚠neonSigned saturating doubling multiply-subtract long
- vqdmlsl_s32⚠neonSigned saturating doubling multiply-subtract long
- vqdmulh_laneq_s16⚠neonVector saturating doubling multiply high by scalar
- vqdmulh_laneq_s32⚠neonVector saturating doubling multiply high by scalar
- vqdmulh_n_s16⚠neonVector saturating doubling multiply high with scalar
- vqdmulh_n_s32⚠neonVector saturating doubling multiply high with scalar
- vqdmulh_s16⚠neonSigned saturating doubling multiply returning high half
- vqdmulh_s32⚠neonSigned saturating doubling multiply returning high half
- vqdmulhq_laneq_s16⚠neonVector saturating doubling multiply high by scalar
- vqdmulhq_laneq_s32⚠neonVector saturating doubling multiply high by scalar
- vqdmulhq_n_s16⚠neonVector saturating doubling multiply high with scalar
- vqdmulhq_n_s32⚠neonVector saturating doubling multiply high with scalar
- vqdmulhq_s16⚠neonSigned saturating doubling multiply returning high half
- vqdmulhq_s32⚠neonSigned saturating doubling multiply returning high half
- vqdmull_lane_s16⚠neonVector saturating doubling long multiply by scalar
- vqdmull_lane_s32⚠neonVector saturating doubling long multiply by scalar
- vqdmull_n_s16⚠neonVector saturating doubling long multiply with scalar
- vqdmull_n_s32⚠neonVector saturating doubling long multiply with scalar
- vqdmull_s16⚠neonSigned saturating doubling multiply long
- vqdmull_s32⚠neonSigned saturating doubling multiply long
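The saturating doubling multiplies implement Q15/Q31 fixed-point products: double the widened product, keep the high half, and saturate the single case that overflows (`MIN * MIN`). A portable sketch of the `s16` high-half semantics (an illustrative model under that fixed-point reading, not the intrinsic):

```rust
// Hypothetical scalar model of vqdmulh on one s16 lane.
fn vqdmulh_s16_model(a: i16, b: i16) -> i16 {
    // double the product in a wide type, keep the high half,
    // saturate the one overflow case (MIN * MIN)
    let wide = 2i64 * a as i64 * b as i64;
    (wide >> 16).clamp(i16::MIN as i64, i16::MAX as i64) as i16
}
```

In Q15 terms, 0.5 × 0.5 (16384 × 16384) yields 0.25 (8192). The `vqrdmulh` variants further down additionally round the result before taking the high half.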
- vqmovn_s16⚠neonSigned saturating extract narrow
- vqmovn_s32⚠neonSigned saturating extract narrow
- vqmovn_s64⚠neonSigned saturating extract narrow
- vqmovn_u16⚠neonUnsigned saturating extract narrow
- vqmovn_u32⚠neonUnsigned saturating extract narrow
- vqmovn_u64⚠neonUnsigned saturating extract narrow
- vqmovun_s16⚠neonSigned saturating extract unsigned narrow
- vqmovun_s32⚠neonSigned saturating extract unsigned narrow
- vqmovun_s64⚠neonSigned saturating extract unsigned narrow
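The saturating extract-narrow operations halve the lane width, clamping each value to the narrower type's range; the `vqmovun` forms take signed input but produce unsigned output, so negative lanes clamp to zero. Sketched as hypothetical scalar models:

```rust
// Hypothetical scalar models of saturating narrow, not the NEON intrinsics.
fn vqmovn_s32_model(a: [i32; 4]) -> [i16; 4] {
    core::array::from_fn(|i| a[i].clamp(i16::MIN as i32, i16::MAX as i32) as i16)
}

fn vqmovun_s32_model(a: [i32; 4]) -> [u16; 4] {
    // signed input, unsigned output: negative lanes clamp to zero
    core::array::from_fn(|i| a[i].clamp(0, u16::MAX as i32) as u16)
}
```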
- vqneg_s8⚠neonSigned saturating negate
- vqneg_s16⚠neonSigned saturating negate
- vqneg_s32⚠neonSigned saturating negate
- vqnegq_s8⚠neonSigned saturating negate
- vqnegq_s16⚠neonSigned saturating negate
- vqnegq_s32⚠neonSigned saturating negate
- vqrdmulh_lane_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_lane_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_laneq_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_laneq_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulh_n_s16⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulh_n_s32⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulh_s16⚠neonSigned saturating rounding doubling multiply returning high half
- vqrdmulh_s32⚠neonSigned saturating rounding doubling multiply returning high half
- vqrdmulhq_lane_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_lane_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_laneq_s16⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_laneq_s32⚠neonVector rounding saturating doubling multiply high by scalar
- vqrdmulhq_n_s16⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulhq_n_s32⚠neonVector saturating rounding doubling multiply high with scalar
- vqrdmulhq_s16⚠neonSigned saturating rounding doubling multiply returning high half
- vqrdmulhq_s32⚠neonSigned saturating rounding doubling multiply returning high half
- vqrshl_s8⚠neonSigned saturating rounding shift left
- vqrshl_s16⚠neonSigned saturating rounding shift left
- vqrshl_s32⚠neonSigned saturating rounding shift left
- vqrshl_s64⚠neonSigned saturating rounding shift left
- vqrshl_u8⚠neonUnsigned saturating rounding shift left
- vqrshl_u16⚠neonUnsigned saturating rounding shift left
- vqrshl_u32⚠neonUnsigned saturating rounding shift left
- vqrshl_u64⚠neonUnsigned saturating rounding shift left
- vqrshlq_s8⚠neonSigned saturating rounding shift left
- vqrshlq_s16⚠neonSigned saturating rounding shift left
- vqrshlq_s32⚠neonSigned saturating rounding shift left
- vqrshlq_s64⚠neonSigned saturating rounding shift left
- vqrshlq_u8⚠neonUnsigned saturating rounding shift left
- vqrshlq_u16⚠neonUnsigned saturating rounding shift left
- vqrshlq_u32⚠neonUnsigned saturating rounding shift left
- vqrshlq_u64⚠neonUnsigned saturating rounding shift left
- vqshl_n_s8⚠neonSigned saturating shift left
- vqshl_n_s16⚠neonSigned saturating shift left
- vqshl_n_s32⚠neonSigned saturating shift left
- vqshl_n_s64⚠neonSigned saturating shift left
- vqshl_n_u8⚠neonUnsigned saturating shift left
- vqshl_n_u16⚠neonUnsigned saturating shift left
- vqshl_n_u32⚠neonUnsigned saturating shift left
- vqshl_n_u64⚠neonUnsigned saturating shift left
- vqshl_s8⚠neonSigned saturating shift left
- vqshl_s16⚠neonSigned saturating shift left
- vqshl_s32⚠neonSigned saturating shift left
- vqshl_s64⚠neonSigned saturating shift left
- vqshl_u8⚠neonUnsigned saturating shift left
- vqshl_u16⚠neonUnsigned saturating shift left
- vqshl_u32⚠neonUnsigned saturating shift left
- vqshl_u64⚠neonUnsigned saturating shift left
- vqshlq_n_s8⚠neonSigned saturating shift left
- vqshlq_n_s16⚠neonSigned saturating shift left
- vqshlq_n_s32⚠neonSigned saturating shift left
- vqshlq_n_s64⚠neonSigned saturating shift left
- vqshlq_n_u8⚠neonUnsigned saturating shift left
- vqshlq_n_u16⚠neonUnsigned saturating shift left
- vqshlq_n_u32⚠neonUnsigned saturating shift left
- vqshlq_n_u64⚠neonUnsigned saturating shift left
- vqshlq_s8⚠neonSigned saturating shift left
- vqshlq_s16⚠neonSigned saturating shift left
- vqshlq_s32⚠neonSigned saturating shift left
- vqshlq_s64⚠neonSigned saturating shift left
- vqshlq_u8⚠neonUnsigned saturating shift left
- vqshlq_u16⚠neonUnsigned saturating shift left
- vqshlq_u32⚠neonUnsigned saturating shift left
- vqshlq_u64⚠neonUnsigned saturating shift left
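A saturating shift left with an immediate count (`vqshl_n_*`) behaves like an ordinary shift performed in a wider type, then clamped back to the lane range. A hedged scalar sketch of that semantics for one `s16` lane (a hypothetical model assuming a shift count below 16, not the intrinsic):

```rust
// Hypothetical scalar model of vqshl_n on one s16 lane (assumes n < 16).
fn vqshl_n_s16_model(a: i16, n: u32) -> i16 {
    // shift in a wider type, then clamp back to the lane range
    ((a as i32) << n).clamp(i16::MIN as i32, i16::MAX as i32) as i16
}
```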
- vqsub_s8⚠neonSaturating subtract
- vqsub_s16⚠neonSaturating subtract
- vqsub_s32⚠neonSaturating subtract
- vqsub_s64⚠neonSaturating subtract
- vqsub_u8⚠neonSaturating subtract
- vqsub_u16⚠neonSaturating subtract
- vqsub_u32⚠neonSaturating subtract
- vqsub_u64⚠neonSaturating subtract
- vqsubq_s8⚠neonSaturating subtract
- vqsubq_s16⚠neonSaturating subtract
- vqsubq_s32⚠neonSaturating subtract
- vqsubq_s64⚠neonSaturating subtract
- vqsubq_u8⚠neonSaturating subtract
- vqsubq_u16⚠neonSaturating subtract
- vqsubq_u32⚠neonSaturating subtract
- vqsubq_u64⚠neonSaturating subtract
- vrecpe_f32⚠neonReciprocal estimate.
- vrecpe_u32⚠neonUnsigned reciprocal estimate
- vrecpeq_f32⚠neonReciprocal estimate.
- vrecpeq_u32⚠neonUnsigned reciprocal estimate
- vrecps_f32⚠neonFloating-point reciprocal step
- vrecpsq_f32⚠neonFloating-point reciprocal step
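`vrecpe` gives a rough reciprocal estimate and `vrecps` computes the Newton-Raphson correction factor `2 - a*b`, so multiplying the estimate by the step result refines it; each application roughly doubles the number of accurate bits. A portable scalar sketch of the refinement loop (hypothetical helper names; `0.3` stands in for the hardware estimate of `1/3`):

```rust
// Hypothetical scalar model of the vrecpe/vrecps refinement idiom.
fn vrecps_step(a: f32, b: f32) -> f32 {
    // vrecps computes 2 - a*b, the Newton-Raphson correction factor for 1/a
    2.0 - a * b
}

fn refine_recip(d: f32, estimate: f32) -> f32 {
    let mut x = estimate;
    for _ in 0..3 {
        x *= vrecps_step(d, x); // x_{n+1} = x_n * (2 - d * x_n)
    }
    x
}
```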
- vreinterpret_f32_p8⚠neonVector reinterpret cast operation
- vreinterpret_f32_p16⚠neonVector reinterpret cast operation
- vreinterpret_f32_s8⚠neonVector reinterpret cast operation
- vreinterpret_f32_s16⚠neonVector reinterpret cast operation
- vreinterpret_f32_s32⚠neonVector reinterpret cast operation
- vreinterpret_f32_s64⚠neonVector reinterpret cast operation
- vreinterpret_f32_u8⚠neonVector reinterpret cast operation
- vreinterpret_f32_u16⚠neonVector reinterpret cast operation
- vreinterpret_f32_u32⚠neonVector reinterpret cast operation
- vreinterpret_f32_u64⚠neonVector reinterpret cast operation
- vreinterpret_p8_f32⚠neonVector reinterpret cast operation
- vreinterpret_p8_p16⚠neonVector reinterpret cast operation
- vreinterpret_p8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_p8_s8⚠neonVector reinterpret cast operation
- vreinterpret_p8_s16⚠neonVector reinterpret cast operation
- vreinterpret_p8_s32⚠neonVector reinterpret cast operation
- vreinterpret_p8_s64⚠neonVector reinterpret cast operation
- vreinterpret_p8_u8⚠neonVector reinterpret cast operation
- vreinterpret_p8_u16⚠neonVector reinterpret cast operation
- vreinterpret_p8_u32⚠neonVector reinterpret cast operation
- vreinterpret_p8_u64⚠neonVector reinterpret cast operation
- vreinterpret_p16_f32⚠neonVector reinterpret cast operation
- vreinterpret_p16_p8⚠neonVector reinterpret cast operation
- vreinterpret_p16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_p16_s8⚠neonVector reinterpret cast operation
- vreinterpret_p16_s16⚠neonVector reinterpret cast operation
- vreinterpret_p16_s32⚠neonVector reinterpret cast operation
- vreinterpret_p16_s64⚠neonVector reinterpret cast operation
- vreinterpret_p16_u8⚠neonVector reinterpret cast operation
- vreinterpret_p16_u16⚠neonVector reinterpret cast operation
- vreinterpret_p16_u32⚠neonVector reinterpret cast operation
- vreinterpret_p16_u64⚠neonVector reinterpret cast operation
- vreinterpret_p64_p8⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_p16⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_s8⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_s16⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_s32⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_u8⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_u16⚠neon,aesVector reinterpret cast operation
- vreinterpret_p64_u32⚠neon,aesVector reinterpret cast operation
- vreinterpret_s8_f32⚠neonVector reinterpret cast operation
- vreinterpret_s8_p8⚠neonVector reinterpret cast operation
- vreinterpret_s8_p16⚠neonVector reinterpret cast operation
- vreinterpret_s8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_s8_s16⚠neonVector reinterpret cast operation
- vreinterpret_s8_s32⚠neonVector reinterpret cast operation
- vreinterpret_s8_s64⚠neonVector reinterpret cast operation
- vreinterpret_s8_u8⚠neonVector reinterpret cast operation
- vreinterpret_s8_u16⚠neonVector reinterpret cast operation
- vreinterpret_s8_u32⚠neonVector reinterpret cast operation
- vreinterpret_s8_u64⚠neonVector reinterpret cast operation
- vreinterpret_s16_f32⚠neonVector reinterpret cast operation
- vreinterpret_s16_p8⚠neonVector reinterpret cast operation
- vreinterpret_s16_p16⚠neonVector reinterpret cast operation
- vreinterpret_s16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_s16_s8⚠neonVector reinterpret cast operation
- vreinterpret_s16_s32⚠neonVector reinterpret cast operation
- vreinterpret_s16_s64⚠neonVector reinterpret cast operation
- vreinterpret_s16_u8⚠neonVector reinterpret cast operation
- vreinterpret_s16_u16⚠neonVector reinterpret cast operation
- vreinterpret_s16_u32⚠neonVector reinterpret cast operation
- vreinterpret_s16_u64⚠neonVector reinterpret cast operation
- vreinterpret_s32_f32⚠neonVector reinterpret cast operation
- vreinterpret_s32_p8⚠neonVector reinterpret cast operation
- vreinterpret_s32_p16⚠neonVector reinterpret cast operation
- vreinterpret_s32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_s32_s8⚠neonVector reinterpret cast operation
- vreinterpret_s32_s16⚠neonVector reinterpret cast operation
- vreinterpret_s32_s64⚠neonVector reinterpret cast operation
- vreinterpret_s32_u8⚠neonVector reinterpret cast operation
- vreinterpret_s32_u16⚠neonVector reinterpret cast operation
- vreinterpret_s32_u32⚠neonVector reinterpret cast operation
- vreinterpret_s32_u64⚠neonVector reinterpret cast operation
- vreinterpret_s64_f32⚠neonVector reinterpret cast operation
- vreinterpret_s64_p8⚠neonVector reinterpret cast operation
- vreinterpret_s64_p16⚠neonVector reinterpret cast operation
- vreinterpret_s64_s8⚠neonVector reinterpret cast operation
- vreinterpret_s64_s16⚠neonVector reinterpret cast operation
- vreinterpret_s64_s32⚠neonVector reinterpret cast operation
- vreinterpret_s64_u8⚠neonVector reinterpret cast operation
- vreinterpret_s64_u16⚠neonVector reinterpret cast operation
- vreinterpret_s64_u32⚠neonVector reinterpret cast operation
- vreinterpret_s64_u64⚠neonVector reinterpret cast operation
- vreinterpret_u8_f32⚠neonVector reinterpret cast operation
- vreinterpret_u8_p8⚠neonVector reinterpret cast operation
- vreinterpret_u8_p16⚠neonVector reinterpret cast operation
- vreinterpret_u8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_u8_s8⚠neonVector reinterpret cast operation
- vreinterpret_u8_s16⚠neonVector reinterpret cast operation
- vreinterpret_u8_s32⚠neonVector reinterpret cast operation
- vreinterpret_u8_s64⚠neonVector reinterpret cast operation
- vreinterpret_u8_u16⚠neonVector reinterpret cast operation
- vreinterpret_u8_u32⚠neonVector reinterpret cast operation
- vreinterpret_u8_u64⚠neonVector reinterpret cast operation
- vreinterpret_u16_f32⚠neonVector reinterpret cast operation
- vreinterpret_u16_p8⚠neonVector reinterpret cast operation
- vreinterpret_u16_p16⚠neonVector reinterpret cast operation
- vreinterpret_u16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_u16_s8⚠neonVector reinterpret cast operation
- vreinterpret_u16_s16⚠neonVector reinterpret cast operation
- vreinterpret_u16_s32⚠neonVector reinterpret cast operation
- vreinterpret_u16_s64⚠neonVector reinterpret cast operation
- vreinterpret_u16_u8⚠neonVector reinterpret cast operation
- vreinterpret_u16_u32⚠neonVector reinterpret cast operation
- vreinterpret_u16_u64⚠neonVector reinterpret cast operation
- vreinterpret_u32_f32⚠neonVector reinterpret cast operation
- vreinterpret_u32_p8⚠neonVector reinterpret cast operation
- vreinterpret_u32_p16⚠neonVector reinterpret cast operation
- vreinterpret_u32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpret_u32_s8⚠neonVector reinterpret cast operation
- vreinterpret_u32_s16⚠neonVector reinterpret cast operation
- vreinterpret_u32_s32⚠neonVector reinterpret cast operation
- vreinterpret_u32_s64⚠neonVector reinterpret cast operation
- vreinterpret_u32_u8⚠neonVector reinterpret cast operation
- vreinterpret_u32_u16⚠neonVector reinterpret cast operation
- vreinterpret_u32_u64⚠neonVector reinterpret cast operation
- vreinterpret_u64_f32⚠neonVector reinterpret cast operation
- vreinterpret_u64_p8⚠neonVector reinterpret cast operation
- vreinterpret_u64_p16⚠neonVector reinterpret cast operation
- vreinterpret_u64_s8⚠neonVector reinterpret cast operation
- vreinterpret_u64_s16⚠neonVector reinterpret cast operation
- vreinterpret_u64_s32⚠neonVector reinterpret cast operation
- vreinterpret_u64_s64⚠neonVector reinterpret cast operation
- vreinterpret_u64_u8⚠neonVector reinterpret cast operation
- vreinterpret_u64_u16⚠neonVector reinterpret cast operation
- vreinterpret_u64_u32⚠neonVector reinterpret cast operation
- vreinterpretq_f32_p8⚠neonVector reinterpret cast operation
- Vector reinterpret cast operation
- Vector reinterpret cast operation
- vreinterpretq_f32_s8⚠neonVector reinterpret cast operation
- vreinterpretq_f32_s16⚠neonVector reinterpret cast operation
- vreinterpretq_f32_s32⚠neonVector reinterpret cast operation
- vreinterpretq_f32_s64⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u8⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u16⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u32⚠neonVector reinterpret cast operation
- vreinterpretq_f32_u64⚠neonVector reinterpret cast operation
- vreinterpretq_p8_f32⚠neonVector reinterpret cast operation
- vreinterpretq_p8_p16⚠neonVector reinterpret cast operation
- vreinterpretq_p8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p8_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p8_s8⚠neonVector reinterpret cast operation
- vreinterpretq_p8_s16⚠neonVector reinterpret cast operation
- vreinterpretq_p8_s32⚠neonVector reinterpret cast operation
- vreinterpretq_p8_s64⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u8⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u16⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u32⚠neonVector reinterpret cast operation
- vreinterpretq_p8_u64⚠neonVector reinterpret cast operation
- vreinterpretq_p16_f32⚠neonVector reinterpret cast operation
- vreinterpretq_p16_p8⚠neonVector reinterpret cast operation
- vreinterpretq_p16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p16_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p16_s8⚠neonVector reinterpret cast operation
- vreinterpretq_p16_s16⚠neonVector reinterpret cast operation
- vreinterpretq_p16_s32⚠neonVector reinterpret cast operation
- vreinterpretq_p16_s64⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u8⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u16⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u32⚠neonVector reinterpret cast operation
- vreinterpretq_p16_u64⚠neonVector reinterpret cast operation
- vreinterpretq_p64_p8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_p16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_s8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_s16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_s32⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_u8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_u16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p64_u32⚠neon,aesVector reinterpret cast operation
- Vector reinterpret cast operation
- vreinterpretq_p128_p8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_p16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s32⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_s64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u8⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u16⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u32⚠neon,aesVector reinterpret cast operation
- vreinterpretq_p128_u64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s8_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s8_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s8_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s8_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s8_s16⚠neonVector reinterpret cast operation
- vreinterpretq_s8_s32⚠neonVector reinterpret cast operation
- vreinterpretq_s8_s64⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s8_u64⚠neonVector reinterpret cast operation
- vreinterpretq_s16_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s16_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s16_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s16_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s16_s8⚠neonVector reinterpret cast operation
- vreinterpretq_s16_s32⚠neonVector reinterpret cast operation
- vreinterpretq_s16_s64⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s16_u64⚠neonVector reinterpret cast operation
- vreinterpretq_s32_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s32_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s32_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s32_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s32_s8⚠neonVector reinterpret cast operation
- vreinterpretq_s32_s16⚠neonVector reinterpret cast operation
- vreinterpretq_s32_s64⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s32_u64⚠neonVector reinterpret cast operation
- vreinterpretq_s64_f32⚠neonVector reinterpret cast operation
- vreinterpretq_s64_p8⚠neonVector reinterpret cast operation
- vreinterpretq_s64_p16⚠neonVector reinterpret cast operation
- vreinterpretq_s64_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_s64_s8⚠neonVector reinterpret cast operation
- vreinterpretq_s64_s16⚠neonVector reinterpret cast operation
- vreinterpretq_s64_s32⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u8⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u16⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u32⚠neonVector reinterpret cast operation
- vreinterpretq_s64_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u8_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u8_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u8_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u8_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u8_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u8_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u8_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u8_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u8_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u8_u16⚠neonVector reinterpret cast operation
- vreinterpretq_u8_u32⚠neonVector reinterpret cast operation
- vreinterpretq_u8_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u16_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u16_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u16_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u16_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u16_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u16_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u16_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u16_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u16_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u16_u8⚠neonVector reinterpret cast operation
- vreinterpretq_u16_u32⚠neonVector reinterpret cast operation
- vreinterpretq_u16_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u32_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u32_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u32_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u32_p64⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u32_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u32_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u32_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u32_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u32_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u32_u8⚠neonVector reinterpret cast operation
- vreinterpretq_u32_u16⚠neonVector reinterpret cast operation
- vreinterpretq_u32_u64⚠neonVector reinterpret cast operation
- vreinterpretq_u64_f32⚠neonVector reinterpret cast operation
- vreinterpretq_u64_p8⚠neonVector reinterpret cast operation
- vreinterpretq_u64_p16⚠neonVector reinterpret cast operation
- vreinterpretq_u64_p128⚠neon,aesVector reinterpret cast operation
- vreinterpretq_u64_s8⚠neonVector reinterpret cast operation
- vreinterpretq_u64_s16⚠neonVector reinterpret cast operation
- vreinterpretq_u64_s32⚠neonVector reinterpret cast operation
- vreinterpretq_u64_s64⚠neonVector reinterpret cast operation
- vreinterpretq_u64_u8⚠neonVector reinterpret cast operation
- vreinterpretq_u64_u16⚠neonVector reinterpret cast operation
- vreinterpretq_u64_u32⚠neonVector reinterpret cast operation
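Every `vreinterpret`/`vreinterpretq` call above is a zero-cost bit cast: the register contents are untouched and only the element type changes. A portable scalar sketch of the idea (the helper name is ours, not part of the API, and the real intrinsics need nightly Rust with the `neon` target feature):

```rust
// Scalar sketch of a vector reinterpret cast: no arithmetic and no value
// conversion, just a bit-for-bit view change. Here one u32 lane is viewed
// as four u8 lanes, in the platform's native byte order.
fn reinterpret_u32_as_u8x4(x: u32) -> [u8; 4] {
    x.to_ne_bytes()
}

fn reinterpret_u8x4_as_u32(bytes: [u8; 4]) -> u32 {
    u32::from_ne_bytes(bytes)
}
```

Because nothing is converted, a reinterpret followed by its inverse always round-trips the original bits.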
- vrhadd_s8⚠neonRounding halving add
- vrhadd_s16⚠neonRounding halving add
- vrhadd_s32⚠neonRounding halving add
- vrhadd_u8⚠neonRounding halving add
- vrhadd_u16⚠neonRounding halving add
- vrhadd_u32⚠neonRounding halving add
- vrhaddq_s8⚠neonRounding halving add
- vrhaddq_s16⚠neonRounding halving add
- vrhaddq_s32⚠neonRounding halving add
- vrhaddq_u8⚠neonRounding halving add
- vrhaddq_u16⚠neonRounding halving add
- vrhaddq_u32⚠neonRounding halving add
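As a rough guide to the semantics, `vrhadd` computes `(a + b + 1) >> 1` per lane without losing the carry bit. A plain-Rust scalar sketch (our own helper, not the intrinsic itself):

```rust
// Scalar sketch of the per-lane operation behind vrhadd_s8 and friends:
// (a + b + 1) >> 1, evaluated in a wider type so the sum cannot overflow.
fn rounding_halving_add_i8(a: i8, b: i8) -> i8 {
    ((a as i16 + b as i16 + 1) >> 1) as i8
}
```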
- vrndn_f32⚠neonFloating-point round to integral, to nearest with ties to even
- vrndnq_f32⚠neonFloating-point round to integral, to nearest with ties to even
- vrshl_s8⚠neonSigned rounding shift left
- vrshl_s16⚠neonSigned rounding shift left
- vrshl_s32⚠neonSigned rounding shift left
- vrshl_s64⚠neonSigned rounding shift left
- vrshl_u8⚠neonUnsigned rounding shift left
- vrshl_u16⚠neonUnsigned rounding shift left
- vrshl_u32⚠neonUnsigned rounding shift left
- vrshl_u64⚠neonUnsigned rounding shift left
- vrshlq_s8⚠neonSigned rounding shift left
- vrshlq_s16⚠neonSigned rounding shift left
- vrshlq_s32⚠neonSigned rounding shift left
- vrshlq_s64⚠neonSigned rounding shift left
- vrshlq_u8⚠neonUnsigned rounding shift left
- vrshlq_u16⚠neonUnsigned rounding shift left
- vrshlq_u32⚠neonUnsigned rounding shift left
- vrshlq_u64⚠neonUnsigned rounding shift left
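The `vrshl` family takes a *signed* per-lane shift amount: a positive amount shifts left, a negative amount performs a rounding right shift. A scalar sketch of one lane (helper name is ours):

```rust
// Scalar sketch of vrshl_s32's per-lane behavior: negative shift amounts
// mean a rounding right shift, where half the divisor is added first.
fn rounding_shift_left_i32(a: i32, shift: i32) -> i32 {
    if shift >= 0 {
        a.wrapping_shl(shift as u32)
    } else {
        let n = (-shift) as u32;
        ((a as i64 + (1i64 << (n - 1))) >> n) as i32
    }
}
```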
- vrshr_n_s8⚠neonSigned rounding shift right
- vrshr_n_s16⚠neonSigned rounding shift right
- vrshr_n_s32⚠neonSigned rounding shift right
- vrshr_n_s64⚠neonSigned rounding shift right
- vrshr_n_u8⚠neonUnsigned rounding shift right
- vrshr_n_u16⚠neonUnsigned rounding shift right
- vrshr_n_u32⚠neonUnsigned rounding shift right
- vrshr_n_u64⚠neonUnsigned rounding shift right
- vrshrn_n_u16⚠neonRounding shift right narrow
- vrshrn_n_u32⚠neonRounding shift right narrow
- vrshrn_n_u64⚠neonRounding shift right narrow
- vrshrq_n_s8⚠neonSigned rounding shift right
- vrshrq_n_s16⚠neonSigned rounding shift right
- vrshrq_n_s32⚠neonSigned rounding shift right
- vrshrq_n_s64⚠neonSigned rounding shift right
- vrshrq_n_u8⚠neonUnsigned rounding shift right
- vrshrq_n_u16⚠neonUnsigned rounding shift right
- vrshrq_n_u32⚠neonUnsigned rounding shift right
- vrshrq_n_u64⚠neonUnsigned rounding shift right
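The `vrshr_n` variants shift right by a compile-time constant `N` with round-to-nearest, which amounts to adding `1 << (N - 1)` before the shift. A scalar sketch (our own helper):

```rust
// Scalar sketch of vrshr_n_s32: arithmetic right shift by the constant N,
// rounding to nearest by adding half the divisor before shifting.
fn rounding_shift_right_i32<const N: u32>(x: i32) -> i32 {
    ((x as i64 + (1i64 << (N - 1))) >> N) as i32
}
```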
- vrsqrte_f32⚠neonReciprocal square-root estimate
- vrsqrte_u32⚠neonUnsigned reciprocal square root estimate
- vrsqrteq_f32⚠neonReciprocal square-root estimate
- vrsqrteq_u32⚠neonUnsigned reciprocal square root estimate
- vrsqrts_f32⚠neonFloating-point reciprocal square root step
- vrsqrtsq_f32⚠neonFloating-point reciprocal square root step
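`vrsqrts` computes the Newton-Raphson correction factor `(3 - a*b) / 2` per lane, used to refine `vrsqrte`'s rough estimate: if `e` approximates `1/sqrt(x)`, then `e * rsqrts_step(x * e, e)` is a better approximation. Scalar sketch (helper name is ours):

```rust
// Scalar sketch of vrsqrts_f32's per-lane step: the Newton-Raphson
// correction factor (3 - a*b) / 2 for reciprocal square-root refinement.
fn rsqrts_step(a: f32, b: f32) -> f32 {
    (3.0 - a * b) / 2.0
}
```

When the estimate is already exact (`a * b == 1`), the step returns 1.0 and leaves it unchanged.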
- vrsra_n_s8⚠neonSigned rounding shift right and accumulate
- vrsra_n_s16⚠neonSigned rounding shift right and accumulate
- vrsra_n_s32⚠neonSigned rounding shift right and accumulate
- vrsra_n_s64⚠neonSigned rounding shift right and accumulate
- vrsra_n_u8⚠neonUnsigned rounding shift right and accumulate
- vrsra_n_u16⚠neonUnsigned rounding shift right and accumulate
- vrsra_n_u32⚠neonUnsigned rounding shift right and accumulate
- vrsra_n_u64⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_s8⚠neonSigned rounding shift right and accumulate
- vrsraq_n_s16⚠neonSigned rounding shift right and accumulate
- vrsraq_n_s32⚠neonSigned rounding shift right and accumulate
- vrsraq_n_s64⚠neonSigned rounding shift right and accumulate
- vrsraq_n_u8⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_u16⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_u32⚠neonUnsigned rounding shift right and accumulate
- vrsraq_n_u64⚠neonUnsigned rounding shift right and accumulate
- vrsubhn_s16⚠neonRounding subtract returning high narrow
- vrsubhn_s32⚠neonRounding subtract returning high narrow
- vrsubhn_s64⚠neonRounding subtract returning high narrow
- vrsubhn_u16⚠neonRounding subtract returning high narrow
- vrsubhn_u32⚠neonRounding subtract returning high narrow
- vrsubhn_u64⚠neonRounding subtract returning high narrow
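`vrsubhn` subtracts, rounds, and keeps only the high half of each lane, halving the element width. For the 16-bit variant the rounding constant is `1 << 7`. Scalar sketch (our own helper):

```rust
// Scalar sketch of vrsubhn_s16's per-lane step: subtract in a wider type,
// add the rounding constant 1 << 7, keep the high byte (i16 -> i8).
fn rsubhn_i16(a: i16, b: i16) -> i8 {
    ((a as i32 - b as i32 + (1 << 7)) >> 8) as i8
}
```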
- vset_lane_f32⚠neonInsert vector element from another vector element
- vset_lane_p8⚠neonInsert vector element from another vector element
- vset_lane_p16⚠neonInsert vector element from another vector element
- vset_lane_p64⚠neon,aesInsert vector element from another vector element
- vset_lane_s8⚠neonInsert vector element from another vector element
- vset_lane_s16⚠neonInsert vector element from another vector element
- vset_lane_s32⚠neonInsert vector element from another vector element
- vset_lane_s64⚠neonInsert vector element from another vector element
- vset_lane_u8⚠neonInsert vector element from another vector element
- vset_lane_u16⚠neonInsert vector element from another vector element
- vset_lane_u32⚠neonInsert vector element from another vector element
- vset_lane_u64⚠neonInsert vector element from another vector element
- vsetq_lane_f32⚠neonInsert vector element from another vector element
- vsetq_lane_p8⚠neonInsert vector element from another vector element
- vsetq_lane_p16⚠neonInsert vector element from another vector element
- vsetq_lane_p64⚠neon,aesInsert vector element from another vector element
- vsetq_lane_s8⚠neonInsert vector element from another vector element
- vsetq_lane_s16⚠neonInsert vector element from another vector element
- vsetq_lane_s32⚠neonInsert vector element from another vector element
- vsetq_lane_s64⚠neonInsert vector element from another vector element
- vsetq_lane_u8⚠neonInsert vector element from another vector element
- vsetq_lane_u16⚠neonInsert vector element from another vector element
- vsetq_lane_u32⚠neonInsert vector element from another vector element
- vsetq_lane_u64⚠neonInsert vector element from another vector element
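The `vset_lane` family returns a copy of the input vector with one lane replaced by a scalar; the lane index must be a compile-time constant. A scalar sketch using an array in place of `uint32x2_t` (helper name is ours):

```rust
// Scalar sketch of vset_lane_u32: copy the vector, overwrite lane LANE
// with the given scalar value, and return the result.
fn set_lane_u32<const LANE: usize>(value: u32, v: [u32; 2]) -> [u32; 2] {
    let mut out = v;
    out[LANE] = value;
    out
}
```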
- vshl_n_s8⚠neonShift left
- vshl_n_s16⚠neonShift left
- vshl_n_s32⚠neonShift left
- vshl_n_s64⚠neonShift left
- vshl_n_u8⚠neonShift left
- vshl_n_u16⚠neonShift left
- vshl_n_u32⚠neonShift left
- vshl_n_u64⚠neonShift left
- vshl_s8⚠neonSigned Shift left
- vshl_s16⚠neonSigned Shift left
- vshl_s32⚠neonSigned Shift left
- vshl_s64⚠neonSigned Shift left
- vshl_u8⚠neonUnsigned Shift left
- vshl_u16⚠neonUnsigned Shift left
- vshl_u32⚠neonUnsigned Shift left
- vshl_u64⚠neonUnsigned Shift left
- vshll_n_s8⚠neonSigned shift left long
- vshll_n_s16⚠neonSigned shift left long
- vshll_n_s32⚠neonSigned shift left long
- vshll_n_u8⚠neonUnsigned shift left long
- vshll_n_u16⚠neonUnsigned shift left long
- vshll_n_u32⚠neonUnsigned shift left long
- vshlq_n_s8⚠neonShift left
- vshlq_n_s16⚠neonShift left
- vshlq_n_s32⚠neonShift left
- vshlq_n_s64⚠neonShift left
- vshlq_n_u8⚠neonShift left
- vshlq_n_u16⚠neonShift left
- vshlq_n_u32⚠neonShift left
- vshlq_n_u64⚠neonShift left
- vshlq_s8⚠neonSigned Shift left
- vshlq_s16⚠neonSigned Shift left
- vshlq_s32⚠neonSigned Shift left
- vshlq_s64⚠neonSigned Shift left
- vshlq_u8⚠neonUnsigned Shift left
- vshlq_u16⚠neonUnsigned Shift left
- vshlq_u32⚠neonUnsigned Shift left
- vshlq_u64⚠neonUnsigned Shift left
- vshr_n_s8⚠neonShift right
- vshr_n_s16⚠neonShift right
- vshr_n_s32⚠neonShift right
- vshr_n_s64⚠neonShift right
- vshr_n_u8⚠neonShift right
- vshr_n_u16⚠neonShift right
- vshr_n_u32⚠neonShift right
- vshr_n_u64⚠neonShift right
- vshrn_n_s16⚠neonShift right narrow
- vshrn_n_s32⚠neonShift right narrow
- vshrn_n_s64⚠neonShift right narrow
- vshrn_n_u16⚠neonShift right narrow
- vshrn_n_u32⚠neonShift right narrow
- vshrn_n_u64⚠neonShift right narrow
- vshrq_n_s8⚠neonShift right
- vshrq_n_s16⚠neonShift right
- vshrq_n_s32⚠neonShift right
- vshrq_n_s64⚠neonShift right
- vshrq_n_u8⚠neonShift right
- vshrq_n_u16⚠neonShift right
- vshrq_n_u32⚠neonShift right
- vshrq_n_u64⚠neonShift right
- vsra_n_s8⚠neonSigned shift right and accumulate
- vsra_n_s16⚠neonSigned shift right and accumulate
- vsra_n_s32⚠neonSigned shift right and accumulate
- vsra_n_s64⚠neonSigned shift right and accumulate
- vsra_n_u8⚠neonUnsigned shift right and accumulate
- vsra_n_u16⚠neonUnsigned shift right and accumulate
- vsra_n_u32⚠neonUnsigned shift right and accumulate
- vsra_n_u64⚠neonUnsigned shift right and accumulate
- vsraq_n_s8⚠neonSigned shift right and accumulate
- vsraq_n_s16⚠neonSigned shift right and accumulate
- vsraq_n_s32⚠neonSigned shift right and accumulate
- vsraq_n_s64⚠neonSigned shift right and accumulate
- vsraq_n_u8⚠neonUnsigned shift right and accumulate
- vsraq_n_u16⚠neonUnsigned shift right and accumulate
- vsraq_n_u32⚠neonUnsigned shift right and accumulate
- vsraq_n_u64⚠neonUnsigned shift right and accumulate
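`vsra_n` fuses a shift and an add: each lane of the second operand is shifted right by the constant `N` and added to the corresponding accumulator lane. Scalar sketch (our own helper):

```rust
// Scalar sketch of vsra_n_s32 (shift right and accumulate):
// acc + (x >> N), with wrapping add matching the vector behavior.
fn sra_n_i32<const N: u32>(acc: i32, x: i32) -> i32 {
    acc.wrapping_add(x >> N)
}
```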
- vst1_lane_f32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_p8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_p16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_p64⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_s64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_lane_u64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1_p8_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_p8_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_p8_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_p16_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_p16_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_p16_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_p64_x2⚠neon,aesStore multiple single-element structures to one, two, three, or four registers
- vst1_p64_x3⚠neon,aesStore multiple single-element structures to one, two, three, or four registers
- vst1_p64_x4⚠neon,aesStore multiple single-element structures to one, two, three, or four registers
- vst1_u8_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u8_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u8_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u16_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u16_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u16_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u32_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u32_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u32_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u64_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u64_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1_u64_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_lane_f32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_p8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_p16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_p64⚠neon,aesStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_s64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u8⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u16⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u32⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_lane_u64⚠neonStore multiple single-element structures from one, two, three, or four registers
- vst1q_p8_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_p8_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_p8_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_p16_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_p16_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_p16_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_p64_x2⚠neon,aesStore multiple single-element structures to one, two, three, or four registers
- vst1q_p64_x3⚠neon,aesStore multiple single-element structures to one, two, three, or four registers
- vst1q_p64_x4⚠neon,aesStore multiple single-element structures to one, two, three, or four registers
- vst1q_u8_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u8_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u8_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u16_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u16_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u16_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u32_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u32_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u32_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u64_x2⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u64_x3⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst1q_u64_x4⚠neonStore multiple single-element structures to one, two, three, or four registers
- vst2_lane_p8⚠neonStore multiple 2-element structures from two registers
- vst2_lane_p16⚠neonStore multiple 2-element structures from two registers
- vst2_lane_u8⚠neonStore multiple 2-element structures from two registers
- vst2_lane_u16⚠neonStore multiple 2-element structures from two registers
- vst2_lane_u32⚠neonStore multiple 2-element structures from two registers
- vst2_p8⚠neonStore multiple 2-element structures from two registers
- vst2_p16⚠neonStore multiple 2-element structures from two registers
- vst2_p64⚠neon,aesStore multiple 2-element structures from two registers
- vst2_u8⚠neonStore multiple 2-element structures from two registers
- vst2_u16⚠neonStore multiple 2-element structures from two registers
- vst2_u32⚠neonStore multiple 2-element structures from two registers
- vst2_u64⚠neonStore multiple 2-element structures from two registers
- vst2q_lane_p16⚠neonStore multiple 2-element structures from two registers
- vst2q_lane_u16⚠neonStore multiple 2-element structures from two registers
- vst2q_lane_u32⚠neonStore multiple 2-element structures from two registers
- vst2q_p8⚠neonStore multiple 2-element structures from two registers
- vst2q_p16⚠neonStore multiple 2-element structures from two registers
- vst2q_u8⚠neonStore multiple 2-element structures from two registers
- vst2q_u16⚠neonStore multiple 2-element structures from two registers
- vst2q_u32⚠neonStore multiple 2-element structures from two registers
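The structured stores above interleave their sources in memory: `vst2` writes `a0, b0, a1, b1, …` rather than one whole vector after the other (and `vst3`/`vst4` generalize this to three and four registers). A scalar sketch of the 2-register case (helper name is ours):

```rust
// Scalar sketch of vst2_u8: lanes of the two source vectors are stored
// interleaved, element by element, into the destination buffer.
fn st2_u8(a: [u8; 8], b: [u8; 8]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for i in 0..8 {
        out[2 * i] = a[i];
        out[2 * i + 1] = b[i];
    }
    out
}
```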
- vst3_lane_p8⚠neonStore multiple 3-element structures from three registers
- vst3_lane_p16⚠neonStore multiple 3-element structures from three registers
- vst3_lane_u8⚠neonStore multiple 3-element structures from three registers
- vst3_lane_u16⚠neonStore multiple 3-element structures from three registers
- vst3_lane_u32⚠neonStore multiple 3-element structures from three registers
- vst3_p8⚠neonStore multiple 3-element structures from three registers
- vst3_p16⚠neonStore multiple 3-element structures from three registers
- vst3_p64⚠neon,aesStore multiple 3-element structures from three registers
- vst3_u8⚠neonStore multiple 3-element structures from three registers
- vst3_u16⚠neonStore multiple 3-element structures from three registers
- vst3_u32⚠neonStore multiple 3-element structures from three registers
- vst3_u64⚠neonStore multiple 3-element structures from three registers
- vst3q_lane_p16⚠neonStore multiple 3-element structures from three registers
- vst3q_lane_u16⚠neonStore multiple 3-element structures from three registers
- vst3q_lane_u32⚠neonStore multiple 3-element structures from three registers
- vst3q_p8⚠neonStore multiple 3-element structures from three registers
- vst3q_p16⚠neonStore multiple 3-element structures from three registers
- vst3q_u8⚠neonStore multiple 3-element structures from three registers
- vst3q_u16⚠neonStore multiple 3-element structures from three registers
- vst3q_u32⚠neonStore multiple 3-element structures from three registers
- vst4_lane_p8⚠neonStore multiple 4-element structures from four registers
- vst4_lane_p16⚠neonStore multiple 4-element structures from four registers
- vst4_lane_u8⚠neonStore multiple 4-element structures from four registers
- vst4_lane_u16⚠neonStore multiple 4-element structures from four registers
- vst4_lane_u32⚠neonStore multiple 4-element structures from four registers
- vst4_p8⚠neonStore multiple 4-element structures from four registers
- vst4_p16⚠neonStore multiple 4-element structures from four registers
- vst4_p64⚠neon,aesStore multiple 4-element structures from four registers
- vst4_u8⚠neonStore multiple 4-element structures from four registers
- vst4_u16⚠neonStore multiple 4-element structures from four registers
- vst4_u32⚠neonStore multiple 4-element structures from four registers
- vst4_u64⚠neonStore multiple 4-element structures from four registers
- vst4q_lane_p16⚠neonStore multiple 4-element structures from four registers
- vst4q_lane_u16⚠neonStore multiple 4-element structures from four registers
- vst4q_lane_u32⚠neonStore multiple 4-element structures from four registers
- vst4q_p8⚠neonStore multiple 4-element structures from four registers
- vst4q_p16⚠neonStore multiple 4-element structures from four registers
- vst4q_u8⚠neonStore multiple 4-element structures from four registers
- vst4q_u16⚠neonStore multiple 4-element structures from four registers
- vst4q_u32⚠neonStore multiple 4-element structures from four registers
- vsub_f32⚠neonSubtract
- vsub_s8⚠neonSubtract
- vsub_s16⚠neonSubtract
- vsub_s32⚠neonSubtract
- vsub_s64⚠neonSubtract
- vsub_u8⚠neonSubtract
- vsub_u16⚠neonSubtract
- vsub_u32⚠neonSubtract
- vsub_u64⚠neonSubtract
- vsubhn_high_s16⚠neonSubtract returning high narrow
- vsubhn_high_s32⚠neonSubtract returning high narrow
- vsubhn_high_s64⚠neonSubtract returning high narrow
- vsubhn_high_u16⚠neonSubtract returning high narrow
- vsubhn_high_u32⚠neonSubtract returning high narrow
- vsubhn_high_u64⚠neonSubtract returning high narrow
- vsubhn_s16⚠neonSubtract returning high narrow
- vsubhn_s32⚠neonSubtract returning high narrow
- vsubhn_s64⚠neonSubtract returning high narrow
- vsubhn_u16⚠neonSubtract returning high narrow
- vsubhn_u32⚠neonSubtract returning high narrow
- vsubhn_u64⚠neonSubtract returning high narrow
- vsubl_s8⚠neonSigned Subtract Long
- vsubl_s16⚠neonSigned Subtract Long
- vsubl_s32⚠neonSigned Subtract Long
- vsubl_u8⚠neonUnsigned Subtract Long
- vsubl_u16⚠neonUnsigned Subtract Long
- vsubl_u32⚠neonUnsigned Subtract Long
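The "long" subtracts widen before subtracting, so the per-lane difference can never wrap. Scalar sketch of the signed 8-bit variant (our own helper):

```rust
// Scalar sketch of vsubl_s8 (signed subtract long): widen both operands
// to i16 first, then subtract; the result always fits.
fn subl_i8(a: i8, b: i8) -> i16 {
    a as i16 - b as i16
}
```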
- vsubq_f32⚠neonSubtract
- vsubq_s8⚠neonSubtract
- vsubq_s16⚠neonSubtract
- vsubq_s32⚠neonSubtract
- vsubq_s64⚠neonSubtract
- vsubq_u8⚠neonSubtract
- vsubq_u16⚠neonSubtract
- vsubq_u32⚠neonSubtract
- vsubq_u64⚠neonSubtract
- vsubw_s8⚠neonSigned Subtract Wide
- vsubw_s16⚠neonSigned Subtract Wide
- vsubw_s32⚠neonSigned Subtract Wide
- vsubw_u8⚠neonUnsigned Subtract Wide
- vsubw_u16⚠neonUnsigned Subtract Wide
- vsubw_u32⚠neonUnsigned Subtract Wide
- vtrn_f32⚠neonTranspose elements
- vtrn_p8⚠neonTranspose elements
- vtrn_p16⚠neonTranspose elements
- vtrn_s8⚠neonTranspose elements
- vtrn_s16⚠neonTranspose elements
- vtrn_s32⚠neonTranspose elements
- vtrn_u8⚠neonTranspose elements
- vtrn_u16⚠neonTranspose elements
- vtrn_u32⚠neonTranspose elements
- vtrnq_f32⚠neonTranspose elements
- vtrnq_p8⚠neonTranspose elements
- vtrnq_p16⚠neonTranspose elements
- vtrnq_s8⚠neonTranspose elements
- vtrnq_s16⚠neonTranspose elements
- vtrnq_s32⚠neonTranspose elements
- vtrnq_u8⚠neonTranspose elements
- vtrnq_u16⚠neonTranspose elements
- vtrnq_u32⚠neonTranspose elements
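`vtrn` treats the two vectors as a 2xN matrix of lane pairs and transposes each pair, yielding `(a0, b0, a2, b2, …)` and `(a1, b1, a3, b3, …)`. Scalar sketch (helper name is ours):

```rust
// Scalar sketch of vtrn_u8: transpose adjacent lane pairs across the
// two input vectors.
fn trn_u8(a: [u8; 8], b: [u8; 8]) -> ([u8; 8], [u8; 8]) {
    let (mut r0, mut r1) = ([0u8; 8], [0u8; 8]);
    for i in 0..4 {
        r0[2 * i] = a[2 * i];
        r0[2 * i + 1] = b[2 * i];
        r1[2 * i] = a[2 * i + 1];
        r1[2 * i + 1] = b[2 * i + 1];
    }
    (r0, r1)
}
```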
- vtst_p8⚠neonSigned compare bitwise Test bits nonzero
- vtst_p16⚠neonSigned compare bitwise Test bits nonzero
- vtst_s8⚠neonSigned compare bitwise Test bits nonzero
- vtst_s16⚠neonSigned compare bitwise Test bits nonzero
- vtst_s32⚠neonSigned compare bitwise Test bits nonzero
- vtst_u8⚠neonUnsigned compare bitwise Test bits nonzero
- vtst_u16⚠neonUnsigned compare bitwise Test bits nonzero
- vtst_u32⚠neonUnsigned compare bitwise Test bits nonzero
- vtstq_p8⚠neonSigned compare bitwise Test bits nonzero
- vtstq_p16⚠neonSigned compare bitwise Test bits nonzero
- vtstq_s8⚠neonSigned compare bitwise Test bits nonzero
- vtstq_s16⚠neonSigned compare bitwise Test bits nonzero
- vtstq_s32⚠neonSigned compare bitwise Test bits nonzero
- vtstq_u8⚠neonUnsigned compare bitwise Test bits nonzero
- vtstq_u16⚠neonUnsigned compare bitwise Test bits nonzero
- vtstq_u32⚠neonUnsigned compare bitwise Test bits nonzero
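`vtst` produces a per-lane mask: all ones where the two operands share at least one set bit, all zeros otherwise. Scalar sketch of one lane (our own helper):

```rust
// Scalar sketch of vtst_u8: (a & b) != 0 per lane, materialized as an
// all-ones or all-zeros mask byte.
fn tst_u8(a: u8, b: u8) -> u8 {
    if a & b != 0 { 0xff } else { 0x00 }
}
```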
- vuzp_f32⚠neonUnzip vectors
- vuzp_p8⚠neonUnzip vectors
- vuzp_p16⚠neonUnzip vectors
- vuzp_s8⚠neonUnzip vectors
- vuzp_s16⚠neonUnzip vectors
- vuzp_s32⚠neonUnzip vectors
- vuzp_u8⚠neonUnzip vectors
- vuzp_u16⚠neonUnzip vectors
- vuzp_u32⚠neonUnzip vectors
- vuzpq_f32⚠neonUnzip vectors
- vuzpq_p8⚠neonUnzip vectors
- vuzpq_p16⚠neonUnzip vectors
- vuzpq_s8⚠neonUnzip vectors
- vuzpq_s16⚠neonUnzip vectors
- vuzpq_s32⚠neonUnzip vectors
- vuzpq_u8⚠neonUnzip vectors
- vuzpq_u16⚠neonUnzip vectors
- vuzpq_u32⚠neonUnzip vectors
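`vuzp` de-interleaves: conceptually it concatenates the two inputs, then splits the result into its even-indexed and odd-indexed elements. Scalar sketch (helper name is ours):

```rust
// Scalar sketch of vuzp_u8: split the concatenation of a and b into
// even-indexed and odd-indexed elements.
fn uzp_u8(a: [u8; 8], b: [u8; 8]) -> ([u8; 8], [u8; 8]) {
    let cat = |j: usize| if j < 8 { a[j] } else { b[j - 8] };
    let (mut even, mut odd) = ([0u8; 8], [0u8; 8]);
    for i in 0..8 {
        even[i] = cat(2 * i);
        odd[i] = cat(2 * i + 1);
    }
    (even, odd)
}
```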
- vzip_f32⚠neonZip vectors
- vzip_p8⚠neonZip vectors
- vzip_p16⚠neonZip vectors
- vzip_s8⚠neonZip vectors
- vzip_s16⚠neonZip vectors
- vzip_s32⚠neonZip vectors
- vzip_u8⚠neonZip vectors
- vzip_u16⚠neonZip vectors
- vzip_u32⚠neonZip vectors
- vzipq_f32⚠neonZip vectors
- vzipq_p8⚠neonZip vectors
- vzipq_p16⚠neonZip vectors
- vzipq_s8⚠neonZip vectors
- vzipq_s16⚠neonZip vectors
- vzipq_s32⚠neonZip vectors
- vzipq_u8⚠neonZip vectors
- vzipq_u16⚠neonZip vectors
- vzipq_u32⚠neonZip vectors
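`vzip` is the inverse of `vuzp`: it interleaves the two inputs lane by lane, with the first result holding the zipped low halves and the second the zipped high halves. Scalar sketch (our own helper):

```rust
// Scalar sketch of vzip_u8: interleave lanes of a and b; result.0 zips
// the low halves, result.1 zips the high halves.
fn zip_u8(a: [u8; 8], b: [u8; 8]) -> ([u8; 8], [u8; 8]) {
    let (mut lo, mut hi) = ([0u8; 8], [0u8; 8]);
    for i in 0..4 {
        lo[2 * i] = a[i];
        lo[2 * i + 1] = b[i];
        hi[2 * i] = a[i + 4];
        hi[2 * i + 1] = b[i + 4];
    }
    (lo, hi)
}
```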