Opcode/Instruction | Op / En | 64/32 bit Mode Support | CPUID Feature Flag | Description |
---|---|---|---|---|
F2 0F 5C /r SUBSD xmm1, xmm2/m64 | A | V/V | SSE2 | Subtract the low double-precision floating-point value in xmm2/m64 from xmm1 and store the result in xmm1. |
VEX.LIG.F2.0F.WIG 5C /r VSUBSD xmm1, xmm2, xmm3/m64 | B | V/V | AVX | Subtract the low double-precision floating-point value in xmm3/m64 from xmm2 and store the result in xmm1. |
EVEX.LIG.F2.0F.W1 5C /r VSUBSD xmm1 {k1}{z}, xmm2, xmm3/m64{er} | C | V/V | AVX512F | Subtract the low double-precision floating-point value in xmm3/m64 from xmm2 and store the result in xmm1 under writemask k1. |
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
---|---|---|---|---|---|
A | NA | ModRM:reg (r, w) | ModRM:r/m (r) | NA | NA |
B | NA | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | NA |
C | Tuple1 Scalar | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | NA |
Subtracts the low double-precision floating-point value in the second source operand from the first source operand and stores the double-precision floating-point result in the low quadword of the destination operand.
The second source operand can be an XMM register or a 64-bit memory location. The first source and destination operands are XMM registers.
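For a concrete view of this behavior, the sketch below uses the SSE2 intrinsic `_mm_sub_sd` (its prototype appears with the intrinsic equivalents further down); the operand values are chosen purely for illustration.

```c
/* Minimal sketch of scalar double-precision subtract via _mm_sub_sd (SSE2). */
#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d a = _mm_set_pd(10.0, 3.5);   /* a = {low = 3.5,  high = 10.0} */
    __m128d b = _mm_set_pd(99.0, 1.25);  /* b = {low = 1.25, high = 99.0} */

    /* Only the low element is subtracted; the high element of the
     * result comes from the first operand (a). */
    __m128d r = _mm_sub_sd(a, b);

    double lo = _mm_cvtsd_f64(r);                      /* 3.5 - 1.25 = 2.25 */
    double hi = _mm_cvtsd_f64(_mm_unpackhi_pd(r, r));  /* 10.0, carried from a */
    printf("low = %f, high = %f\n", lo, hi);
    return 0;
}
```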
128-bit Legacy SSE version: The destination and first source operand are the same. Bits (MAXVL-1:64) of the corresponding destination register remain unchanged.
VEX.128 and EVEX encoded versions: Bits (127:64) of the XMM register destination are copied from corresponding bits in the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.
EVEX encoded version: The low quadword element of the destination operand is updated according to the writemask.
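The merging and zeroing forms of the writemask can be exercised from C through the AVX-512F intrinsics `_mm_mask_sub_sd` and `_mm_maskz_sub_sd` listed further down; the following minimal sketch assumes a compiler and CPU with AVX-512F enabled (e.g. `-mavx512f`) and uses arbitrary example values.

```c
/* Sketch of merging vs. zeroing masking of the low element (AVX-512F). */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m128d s = _mm_set_pd(0.0, -1.0);   /* pass-through source, low = -1.0 */
    __m128d a = _mm_set_pd(7.0,  5.0);
    __m128d b = _mm_set_pd(9.0,  2.0);

    /* Mask bit 0 clear: the subtraction result is not written.
     * Merging keeps s[0]; zeroing forces the low element to 0.0.
     * Bits 127:64 of the result are copied from a in both cases. */
    __m128d merged = _mm_mask_sub_sd(s, 0x0, a, b);
    __m128d zeroed = _mm_maskz_sub_sd(0x0, a, b);

    /* Mask bit 0 set: the low element is a[0] - b[0] = 3.0. */
    __m128d active = _mm_mask_sub_sd(s, 0x1, a, b);

    printf("merged low = %f\n", _mm_cvtsd_f64(merged));  /* -1.0 */
    printf("zeroed low = %f\n", _mm_cvtsd_f64(zeroed));  /*  0.0 */
    printf("active low = %f\n", _mm_cvtsd_f64(active));  /*  3.0 */
    return 0;
}
```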
Software should ensure VSUBSD is encoded with VEX.L=0. Encoding VSUBSD with VEX.L=1 may produce unpredictable behavior across different processor generations.
VSUBSD (EVEX Encoded Version)
    IF (SRC2 *is register*) AND (EVEX.b = 1)
        THEN SET_RM(EVEX.RC);
        ELSE SET_RM(MXCSR.RM);
    FI;
    IF k1[0] or *no writemask*
        THEN DEST[63:0] ← SRC1[63:0] - SRC2[63:0]
        ELSE
            IF *merging-masking*    ; merging-masking
                THEN *DEST[63:0] remains unchanged*
                ELSE                ; zeroing-masking
                    THEN DEST[63:0] ← 0
            FI;
    FI;
    DEST[127:64] ← SRC1[127:64]
    DEST[MAXVL-1:128] ← 0
VSUBSD (VEX.128 Encoded Version)
    DEST[63:0] ← SRC1[63:0] - SRC2[63:0]
    DEST[127:64] ← SRC1[127:64]
    DEST[MAXVL-1:128] ← 0
SUBSD (128-bit Legacy SSE Version)
    DEST[63:0] ← DEST[63:0] - SRC[63:0]
    DEST[MAXVL-1:64] (Unmodified)
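For readers who prefer C to the pseudocode notation above, here is a rough reference-model sketch of the legacy and unmasked VEX/EVEX forms; the `xmm_t` type and the function names are purely illustrative and not part of any API.

```c
/* Illustrative model of an XMM register as two double-precision lanes:
 * lane[0] = bits 63:0, lane[1] = bits 127:64. Hypothetical names. */
typedef struct { double lane[2]; } xmm_t;

/* SUBSD xmm1, xmm2/m64 (legacy SSE): only bits 63:0 of the destination
 * are rewritten; bits 127:64 (and everything above, on wider register
 * files) are left untouched. */
void subsd_legacy(xmm_t *dest, const xmm_t *src)
{
    dest->lane[0] -= src->lane[0];
    /* dest->lane[1] deliberately unmodified */
}

/* VSUBSD xmm1, xmm2, xmm3/m64 (VEX/EVEX, no masking): the low lane is
 * src1 - src2 and the upper lane is copied from src1; bits MAXVL-1:128
 * of the real destination register are zeroed, which this 128-bit model
 * cannot show. */
xmm_t vsubsd(const xmm_t *src1, const xmm_t *src2)
{
    xmm_t dest;
    dest.lane[0] = src1->lane[0] - src2->lane[0];
    dest.lane[1] = src1->lane[1];
    return dest;
}
```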
VSUBSD __m128d _mm_mask_sub_sd (__m128d s, __mmask8 k, __m128d a, __m128d b);
VSUBSD __m128d _mm_maskz_sub_sd (__mmask8 k, __m128d a, __m128d b);
VSUBSD __m128d _mm_sub_round_sd (__m128d a, __m128d b, int);
VSUBSD __m128d _mm_mask_sub_round_sd (__m128d s, __mmask8 k, __m128d a, __m128d b, int);
VSUBSD __m128d _mm_maskz_sub_round_sd (__mmask8 k, __m128d a, __m128d b, int);
SUBSD __m128d _mm_sub_sd (__m128d a, __m128d b);
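The `{er}` (embedded rounding) form corresponds to `_mm_sub_round_sd` above; the short sketch below assumes AVX-512F support and shows one rounding-override choice picked only as an example.

```c
/* Embedded-rounding ({er}) form via _mm_sub_round_sd; requires AVX-512F
 * (e.g. compile with -mavx512f). The rounding selector must be a
 * compile-time constant and overrides MXCSR.RC for this one operation. */
#include <immintrin.h>

__m128d sub_low_round_toward_neg_inf(__m128d a, __m128d b)
{
    return _mm_sub_round_sd(a, b, _MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC);
}
```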
SIMD Floating-Point Exceptions: Overflow, Underflow, Invalid, Precision, Denormal.
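When these exceptions are masked (the MXCSR default), they are reported only as MXCSR status flags; the following is a hedged sketch of polling those flags with the standard `_mm_getcsr`/`_mm_setcsr` accessors, using an operand pair chosen only to trigger an inexact (precision) result.

```c
/* Sketch: observing SUBSD's SIMD floating-point exception flags in MXCSR.
 * Status-flag bit positions: IE=0, DE=1, ZE=2, OE=3, UE=4, PE=5. */
#include <emmintrin.h>
#include <stdio.h>

int main(void)
{
    _mm_setcsr(_mm_getcsr() & ~0x3Fu);   /* clear the six status flags */

    volatile double x = 1.0, y = 0.3;    /* volatile: keep the subtract at run time */
    __m128d a = _mm_set_sd(x);
    __m128d b = _mm_set_sd(y);
    __m128d r = _mm_sub_sd(a, b);        /* 1.0 - 0.3 is not exactly representable */
    printf("result low = %.17g\n", _mm_cvtsd_f64(r));

    unsigned mxcsr = _mm_getcsr();
    if (mxcsr & (1u << 5)) puts("PE (precision/inexact) flag set");
    if (mxcsr & (1u << 4)) puts("UE (underflow) flag set");
    return 0;
}
```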
Other Exceptions: For VEX-encoded instructions, see Exceptions Type 3; for EVEX-encoded instructions, see Exceptions Type E3.