This HTML5 document contains 109 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.

Namespace Prefixes

Prefix      IRI
dct         http://purl.org/dc/terms/
yago-res    http://yago-knowledge.org/resource/
dbo         http://dbpedia.org/ontology/
n13         http://dbpedia.org/resource/File:
n19         http://dbpedia.org/resource/Wiktionary:
foaf        http://xmlns.com/foaf/0.1/
dbt         http://dbpedia.org/resource/Template:
rdfs        http://www.w3.org/2000/01/rdf-schema#
freebase    http://rdf.freebase.com/ns/
n12         http://commons.wikimedia.org/wiki/Special:FilePath/
rdf         http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl         http://www.w3.org/2002/07/owl#
n18         https://archive.org/details/
n5          http://en.wikipedia.org/wiki/
dbc         http://dbpedia.org/resource/Category:
dbp         http://dbpedia.org/property/
prov        http://www.w3.org/ns/prov#
xsdh        http://www.w3.org/2001/XMLSchema#
gold        http://purl.org/linguistics/gold/
dbr         http://dbpedia.org/resource/

Statements

Subject Item
dbr:Analysis_of_algorithms
rdf:type
dbo:Software
rdfs:label
Analysis of algorithms
rdfs:comment
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
owl:sameAs
freebase:m.0xvr yago-res:Analysis_of_algorithms
dbp:wikiPageUsesTemplate
dbt:Main dbt:Quote dbt:Color dbt:More_footnotes dbt:Short_description dbt:Commons_category-inline dbt:Math dbt:Cite_book dbt:Mvar dbt:Reflist dbt:Computer_science
dct:subject
dbc:Analysis_of_algorithms dbc:Computational_complexity_theory
dbo:wikiPageExternalLink
n18:algorithmsinc00sedg
dbo:thumbnail
n12:Binary_search_vs_Linear_search_example_svg.svg?width=300
foaf:depiction
n12:Binary_search_vs_Linear_search_example_svg.svg n12:Comparison_computational_complexity.svg
gold:hypernym
dbr:Determination
prov:wasDerivedFrom
n5:Analysis_of_algorithms?oldid=1073227085&ns=0
dbo:wikiPageID
2230
dbo:wikiPageLength
26077
dbo:wikiPageRevisionID
1073227085
dbo:wikiPageWikiLink
dbr:Quadratic_growth dbr:Cross-platform_software dbr:Implementation dbc:Computational_complexity_theory dbr:Scalability dbr:DTIME dbr:Iteration dbc:Analysis_of_algorithms dbr:Model_of_computation dbr:DSPACE dbr:Asymptotic_analysis dbr:Asymptotic_computational_complexity dbr:Computer_program dbr:Upper_and_lower_bounds dbr:Algorithm dbr:Logarithm dbr:Log–log_plot dbr:System_resource dbr:Hybrid_algorithm dbr:Big_O_notation dbr:Cryptography dbr:Profiling_(computer_programming) dbr:Operating_system dbr:Introduction_to_Algorithms dbr:Space_complexity dbr:Iterated_logarithm dbr:Turing_machine dbr:Deterministic_system n13:Binary_search_vs_Linear_search_example_svg.svg dbr:Donald_Knuth dbr:Linearity dbr:Factorization dbr:Time_complexity dbr:Benchmark_(computing) dbr:Computer dbr:Exponential_growth dbr:Program_optimization dbr:Smoothed_analysis dbr:Computation dbr:Analysis_of_parallel_algorithms dbr:List_(abstract_data_type) dbr:NP-completeness dbr:Empirical_evidence dbr:Nanosecond dbr:Numerical_analysis dbr:Control_flow dbr:Algorithmic_efficiency dbr:Insertion_sort dbr:Arbitrary-precision_arithmetic dbr:Collation dbr:Elegance dbr:Instruction_set_architecture dbr:Timsort dbr:Quicksort n13:Comparison_computational_complexity.svg dbr:Master_theorem_(analysis_of_algorithms) dbr:Computer_file dbr:Cambridge_University_Press dbr:Termination_analysis dbr:Memory_segmentation dbr:Linear_search dbr:Kilobyte dbr:Computational_complexity_theory dbr:Programming_language dbr:Information-based_complexity dbr:Computational_problem dbr:Computational_complexity dbr:Rule_of_thumb dbr:Best,_worst_and_average_case dbr:Abstract_machine dbr:Information dbr:The_Art_of_Computer_Programming dbr:Arithmetic_progression dbr:Amortized_analysis dbr:Merge_sort dbr:Function_(mathematics) dbr:Binary_search_algorithm dbr:Computer_science n19:Constant dbr:Pseudocode
dbo:abstract
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms. In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant. Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g., a Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log₂(n) + 1 time units are needed to return an answer.
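Illustrative aside (not part of the RDF data): the binary-search bound mentioned in the abstract can be checked with a minimal Python sketch that counts element lookups. The function name binary_search_steps and the test values are hypothetical; the only claim taken from the abstract is that at most floor(log₂(n)) + 1 unit-time lookups are needed.

    import math

    def binary_search_steps(sorted_list, target):
        # Standard binary search that also counts element lookups,
        # so the count can be compared with the floor(log2(n)) + 1 bound.
        lo, hi = 0, len(sorted_list) - 1
        steps = 0
        while lo <= hi:
            mid = (lo + hi) // 2
            steps += 1  # one unit-time lookup of sorted_list[mid]
            if sorted_list[mid] == target:
                return mid, steps
            elif sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, steps  # target absent

    if __name__ == "__main__":
        n = 1_000_000
        data = list(range(n))
        _, steps = binary_search_steps(data, -1)  # absent target: a worst case
        bound = math.floor(math.log2(n)) + 1
        print(f"{steps} lookups, bound {bound}")  # lookups <= bound

On a million-element list the reported lookup count stays within the floor(log₂(10⁶)) + 1 = 20 bound, matching the worst-case analysis described above.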
foaf:isPrimaryTopicOf
n5:Analysis_of_algorithms