About: Kahan summation algorithm

An Entity of Type: owl:Thing, within Data Space: el.dbpedia.org, associated with source document(s)

In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable.
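
The compensated loop described above is short enough to sketch directly. Below is a minimal illustration in Python; the function name kahan_sum, the sample data, and the quoted figures are illustrative assumptions, not part of this DBpedia resource.

    def kahan_sum(values):
        """Compensated (Kahan) summation of an iterable of floats."""
        total = 0.0
        c = 0.0  # compensation: minus the low-order bits lost in the previous addition
        for x in values:
            y = x - c              # re-inject the bits that were lost last time around
            t = total + y          # total is large, y is small: low-order digits of y are lost
            c = (t - total) - y    # (t - total) is the part of y that survived; minus y leaves -(lost part)
            total = t
        return total

    # Illustrative comparison (assumed inputs): a plain running sum accumulates
    # roundoff, while the compensated sum stays within a rounding error or two
    # of the correctly rounded result.
    data = [0.1] * 10_000_000
    naive = 0.0
    for x in data:
        naive += x
    print(naive)            # typically off from 1e6 by roughly 1e-4 in double precision
    print(kahan_sum(data))  # approximately 1000000.0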

Attributes / Values
rdfs:label
  • Kahan summation algorithm (en)
rdfs:comment
  • In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable. (en)
sameAs
dbp:wikiPageUsesTemplate
Subject
prov:wasDerivedFrom
Wikipage page ID
page length (characters) of wiki page
Wikipage revision ID
Link from a Wikipage to another Wikipage
has abstract
  • In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors), in effect extending the precision of the sum by the precision of the compensation variable. In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, using a compensation variable with sufficiently high precision, the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision of the result. The algorithm is attributed to William Kahan; Ivo Babuška seems to have come up with a similar algorithm independently (hence Kahan–Babuška summation). Similar, earlier techniques include, for example, Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although first documented around the same time), and delta-sigma modulation. (en)
foaf:isPrimaryTopicOf
is Wikipage redirect of
is Link from a Wikipage to another Wikipage of
is known for of
is foaf:primaryTopic of
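
The abstract above contrasts error growth proportional to n for naive summation with a bound that is effectively independent of n for compensated summation. The standard bounds behind that claim (stated here as an assumption drawn from the usual floating-point error analysis, with unit roundoff u, about 1.1 × 10⁻¹⁶ for IEEE double precision) can be written as

    |\hat{S}_n^{\text{naive}} - S_n| \le (n-1)\,u \sum_{i=1}^{n} |x_i| + O(u^2),
    \qquad
    |\hat{S}_n^{\text{Kahan}} - S_n| \le \bigl(2u + O(n u^2)\bigr) \sum_{i=1}^{n} |x_i|,

where S_n is the exact sum and \hat{S}_n the computed one. For random inputs the individual roundoff errors behave like a random walk, which is why the expected naive error grows only as √n rather than n.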