/ ˈdʌbl̩ priˈsɪʒn̩ /
A floating-point number stored in 64 bits of data.
The degree of accuracy that requires two computer words (two 32-bit words, 64 bits in all) to represent a number. Values carry roughly 15 to 17 significant decimal digits: 15 digits are guaranteed to survive a round trip through decimal, and up to 17 digits may be needed to print a value so that it can be read back exactly.
In computing, a type of floating-point notation with higher precision, that is, more significant decimal places. The term “double” is not strictly accurate; it derives from such numbers using twice as many bits as standard (single-precision) floating-point notation, although the resulting precision is more than doubled.
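A minimal sketch of the figures above, assuming a platform whose `float` follows IEEE 754 double precision (true on virtually all modern systems): Python's `struct` and `sys.float_info` can confirm the 64-bit width, the 53-bit significand, and the 15-digit guaranteed decimal precision.

```python
import struct
import sys

# Packing a float in "double" format yields 8 bytes = 64 bits.
width_bits = len(struct.pack(">d", 1.0)) * 8
print(width_bits)            # 64

# The significand holds 53 bits (52 stored + 1 implicit),
# which guarantees 15 exact decimal digits.
print(sys.float_info.mant_dig)  # 53
print(sys.float_info.dig)       # 15

# Some values need all 17 significant digits to round-trip:
# formatting 0.1 with 17 digits exposes the stored approximation.
print(format(0.1, ".17g"))
```

The mismatch between the 15-digit guarantee and the 17-digit round-trip requirement is why sources quote slightly different digit counts for double precision.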