About: Simultaneous perturbation stochastic approximation

Simultaneous perturbation stochastic approximation (SPSA) is an algorithmic method for optimizing systems with multiple unknown parameters. It is a type of stochastic approximation algorithm. As an optimization method, it is well suited to large-scale population models, adaptive modeling, simulation optimization, and atmospheric modeling. Many examples are presented at the SPSA website. A comprehensive recent book on the subject is Bhatnagar et al. (2013); an early paper is Spall (1987), and the foundational paper providing the key theory and justification is Spall (1992).

SPSA is a descent method capable of finding global minima, sharing this property with other methods such as simulated annealing. Its main feature is the gradient approximation, which requires only two measurements of the objective function, regardless of the dimension of the optimization problem.

Recall that we want to find the optimal control $u^*$ with loss function $J(u)$:

$$u^* = \arg\min_{u \in U} J(u).$$

Both Finite Differences Stochastic Approximation (FDSA) and SPSA use the same iterative process:

$$u_{n+1} = u_n - a_n \hat{g}_n(u_n),$$

where $u_n$ denotes the $n$-th iterate, $\hat{g}_n(u_n)$ is the estimate of the gradient of the objective function $g(u) = \partial J / \partial u$ evaluated at $u_n$, and $\{a_n\}$ is a positive number sequence converging to 0. If $u_n$ is a $p$-dimensional vector, the $i$-th component of the symmetric finite-difference gradient estimator is:

$$\text{FD:} \quad (\hat{g}_n(u_n))_i = \frac{J(u_n + c_n e_i) - J(u_n - c_n e_i)}{2 c_n}, \qquad 1 \le i \le p,$$

where $e_i$ is the unit vector with a 1 in the $i$-th place, and $c_n$ is a small positive number that decreases with $n$. With this method, $2p$ evaluations of $J$ are needed for each $\hat{g}_n$. Clearly, when $p$ is large, this estimator loses efficiency.

Let now $\Delta_n$ be a random perturbation vector. The $i$-th component of the stochastic perturbation gradient estimator is:

$$\text{SP:} \quad (\hat{g}_n(u_n))_i = \frac{J(u_n + c_n \Delta_n) - J(u_n - c_n \Delta_n)}{2 c_n (\Delta_n)_i}.$$

Note that FD perturbs only one direction at a time, while the SP estimator disturbs all directions at once (the numerator is identical in all $p$ components). The number of loss-function measurements needed by SPSA for each $\hat{g}_n$ is always 2, independent of the dimension $p$. Thus, SPSA uses $p$ times fewer function evaluations than FDSA, which makes it far more efficient.

Simple experiments with $p = 2$ showed that SPSA converges in the same number of iterations as FDSA. FDSA follows approximately the steepest-descent direction, behaving like the gradient method. SPSA, with its random search direction, does not follow the gradient path exactly; on average, though, it tracks it closely, because the gradient approximation is an almost unbiased estimator of the gradient, as the lemma below shows.
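The abstract breaks off before the lemma it cites. The standard statement, due to Spall (1992), is roughly the following; this is a reconstruction from the literature, not text recovered from the source:

```latex
% Standard SPSA bias lemma, stated informally: if the components of
% \Delta_n are i.i.d., symmetrically distributed about zero, bounded,
% with finite inverse moments E[|(\Delta_n)_i|^{-1}], and J is
% sufficiently smooth, then the SP gradient estimator is nearly unbiased:
\[
  \mathbb{E}\left[ \hat{g}_n(u_n) \mid u_n \right] = g(u_n) + O(c_n^2)
  \qquad \text{as } c_n \to 0 .
\]
```

As a concrete illustration, here is a minimal Python sketch of both gradient estimators and the shared iteration. None of this code comes from the source; the function names are made up for this example, and the gain-sequence constants (the exponents 0.602 and 0.101 are common recommendations in the SPSA literature) are illustrative assumptions:

```python
import numpy as np

def spsa_gradient(J, u, c):
    # SP estimator: two evaluations of J, regardless of the dimension p.
    # Rademacher (+/-1) perturbations satisfy the lemma's conditions.
    delta = np.random.choice([-1.0, 1.0], size=u.shape)
    return (J(u + c * delta) - J(u - c * delta)) / (2.0 * c * delta)

def fdsa_gradient(J, u, c):
    # FD estimator: perturbs one coordinate at a time, 2p evaluations of J.
    g = np.empty_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = 1.0
        g[i] = (J(u + c * e) - J(u - c * e)) / (2.0 * c)
    return g

def spsa_minimize(J, u0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, A=100.0):
    # Iterate u_{n+1} = u_n - a_n * g_hat_n(u_n) with gain sequences
    # a_n -> 0 and c_n -> 0, matching the update rule above.
    u = np.asarray(u0, dtype=float)
    for n in range(1, n_iter + 1):
        a_n = a / (n + A) ** alpha
        c_n = c / n ** gamma
        u = u - a_n * spsa_gradient(J, u, c_n)
    return u

# Usage: minimize a noisy quadratic in p = 10 dimensions; the minimizer is 0.
rng = np.random.default_rng(0)
J = lambda u: float(np.sum(u ** 2)) + 0.01 * rng.standard_normal()
print(spsa_minimize(J, u0=np.ones(10)))
```

Swapping `spsa_gradient` for `fdsa_gradient` in the loop gives FDSA at a cost of $2p$ rather than 2 function evaluations per iteration, which is exactly the efficiency gap described above.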