Goldbach's conjecture

Goldbach's conjecture is one of the oldest unsolved problems in number theory and in all of mathematics. It states:

Every even integer greater than 2 can be written as the sum of two primes.

Expressing a given even number as a sum of two primes is called a Goldbach partition of the number. For example,

  4 = 2 + 2
  6 = 3 + 3
  8 = 3 + 5
10 = 3 + 7 = 5 + 5
12 = 5 + 7
14 = 3 + 11 = 7 + 7

In other words, the Goldbach conjecture states that every even number greater than or equal to four is a Goldbach number, a number that can be expressed as the sum of two primes. See also Levy's conjecture.
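Goldbach partitions can be found by direct search. The sketch below (an illustration in Python, not part of the original post; the function names are ours) uses trial division for primality and lists every partition of an even number:

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_partitions(n):
    """Return all Goldbach partitions (p, q) with p <= q of an even n > 2."""
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]
```

For example, `goldbach_partitions(10)` yields `[(3, 7), (5, 5)]`, matching the table above.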

Reference:
http://en.wikipedia.org/wiki/Goldbach%27s_conjecture

Posted by Unknown user

Decision problem

Computer/Terms 2008. 3. 27. 15:34

In computability theory and computational complexity theory, a decision problem is a question in some formal system with a yes-or-no answer, depending on the values of some input parameters. For example, the problem "given two numbers x and y, does x evenly divide y?" is a decision problem. The answer can be either 'yes' or 'no', and depends upon the values of x and y.

Decision problems are closely related to function problems, which can have answers that are more complex than a simple 'yes' or 'no'. A corresponding function problem is "given two numbers x and y, what is x divided by y?". They are also related to optimization problems, which are concerned with finding the best answer to a particular problem.

A method for solving a decision problem given in the form of an algorithm is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y, given x and y. One such algorithm is long division, taught to many school children. If the remainder is zero the answer produced is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm, such as this example, is called decidable.
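The decision procedure just described fits in a few lines of Python (a minimal sketch; the function name is ours). It computes the remainder and answers 'yes' exactly when the remainder is zero:

```python
def divides(x, y):
    """Decision procedure for "does x evenly divide y?".

    Divide y by x and answer 'yes' exactly when the remainder is zero.
    """
    if x == 0:
        return False  # zero divides nothing
    return y % x == 0
```

So `divides(3, 12)` answers 'yes' and `divides(5, 12)` answers 'no'.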

The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of recursion theory, meanwhile, categorizes undecidable decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution.

Research in computability theory has typically focused on decision problems. There is no loss of generality in doing so, since every function problem can be recast as an equivalent decision problem.

Reference:
http://en.wikipedia.org/wiki/Decision_problem

Posted by Unknown user

Long division

Computer/Terms 2008. 3. 27. 15:07

In arithmetic, long division is the standard procedure suitable for dividing simple or complex multidigit numbers. It breaks down a division problem into a series of easier steps. As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. Long division is understood by most adults, including teachers and parents, who have been taught elementary arithmetic. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps.
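The series of simple steps can be sketched in Python (an illustration added here, not part of the original article): bring down one digit of the dividend at a time, carrying the running remainder.

```python
def long_division(dividend, divisor):
    """School-style long division of a nonnegative dividend by a positive
    divisor, processing one digit at a time; returns (quotient, remainder)."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):               # bring down the next digit
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder
```

For example, `long_division(1234, 7)` returns `(176, 2)`, i.e. 1234 = 7 × 176 + 2.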

Reference:
http://en.wikipedia.org/wiki/Long_division
Posted by Unknown user

Church–Turing thesis

In computability theory the Church–Turing thesis (also known as Church's thesis, Church's conjecture and Turing's thesis) is a combined hypothesis about the nature of effectively calculable (computable) functions: that they are exactly the functions computable by recursion (Church's thesis), by a mechanical device equivalent to a Turing machine (Turing's thesis), or by Church's λ-calculus:

Church's Thesis: "Every effectively calculable function (effectively decidable predicate) is general recursive" (Kleene 1952:300)
Turing's Thesis: "Turing's thesis that every function which would naturally be regarded as computable is computable under his definition, i.e. by one of his machines, is equivalent to Church's thesis by Theorem XXX." (Kleene 1952:376)

The three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent by Alonzo Church, Stephen Kleene and J.B. Rosser (1934-6) and by Alan Turing (1936-7). Although Stephen Kleene proved the two theses equivalent, the fundamental premise behind the theses — the notion of "effectively computable" or "effectively calculable" — is "a vague intuitive one" based as it is on (i) “heuristic [observational, experiential] evidence”, (ii) the “equivalence of 'diverse formulations'” (e.g. the three computational processes) and (iii) on an observational analysis of a human with a pencil and paper following a set of rules (Turing's analysis) and of a "worker" in boxes (Emil Post's analysis 1936). Thus, as they stand, neither thesis can be proven. The various notions for "effective calculability/computability" — the fly in the ointment — have been proposed as a "formal definition" by Church, Turing and Rosser, as a "working hypothesis" leading by inductive reasoning to a "natural law" by Post (an idea "sharply" criticized by Church), or as an axiom or set of axioms (suggested by Kurt Gödel to Church in 1934-5 but discounted in Post 1936).

Today the thesis has near-universal acceptance.

Informally the Church–Turing thesis states that if an algorithm (a procedure that terminates) exists, then there is an equivalent Turing machine or applicable λ-function for that algorithm.
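The equivalence of the formulations can be glimpsed in a toy sketch (ours, and of course no substitute for the actual equivalence proofs): the same addition function computed once by recursion and once in λ-calculus style via Church numerals.

```python
def add_recursive(m, n):
    """Addition defined by recursion on n, in the general recursive style."""
    if n == 0:
        return m
    return add_recursive(m, n - 1) + 1

def church(n):
    """The Church numeral of n: the higher-order function f -> f composed n times."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def add_church(m, n):
    """Addition in lambda-calculus style: apply f m + n times, then decode
    the resulting numeral with successor and zero."""
    summed = lambda f: lambda x: church(m)(f)(church(n)(f)(x))
    return summed(lambda k: k + 1)(0)
```

Both definitions compute the same function, e.g. `add_recursive(2, 3)` and `add_church(2, 3)` both give 5.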

Reference:
http://en.wikipedia.org/wiki/Church-turing_thesis

Posted by Unknown user

Recursion

Computer/Terms 2008. 3. 27. 11:06

Recursion, in mathematics and computer science, is a method of defining functions in which the function being defined is applied within its own definition. The term is also used more generally to describe a process of repeating objects in a self-similar way. For instance, when the surfaces of two mirrors are almost parallel with each other the nested images that occur are a form of recursion.
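A standard illustration (added here, not from the original post) is the factorial function, whose definition applies the function being defined:

```python
def factorial(n):
    """n! defined recursively: the function appears in its own definition."""
    if n == 0:                        # base case: stops the self-application
        return 1
    return n * factorial(n - 1)       # recursive case: apply to a smaller input
```

Each call reduces the problem to a smaller instance of itself until the base case is reached: `factorial(5)` returns 120.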

Reference:
http://en.wikipedia.org/wiki/Recursion
Posted by Unknown user

Computational complexity theory

Computational complexity theory, as a branch of the theory of computation in computer science, investigates the problems related to the amounts of resources required for the execution of algorithms (e.g., execution time), and the inherent difficulty in providing efficient algorithms for specific computational problems.

A typical question of the theory is, "As the size of the input to an algorithm increases, how do the running time and memory requirements of the algorithm change and what are the implications and ramifications of that change?" In other words, the theory, among other things, investigates the scalability of computational problems and algorithms. In particular, the theory places practical limits on what computers can accomplish.

An important aspect of the theory is to categorize computational problems and algorithms into complexity classes. The most important open question of complexity theory is whether the complexity class P is the same as the complexity class NP, or is a strict subset as is generally believed. Shortly after the question was first posed, it was realised that many important industry problems in the field of operations research are of an NP subclass called NP-complete. NP-complete problems have the property that solutions to these problems are quick to check, yet the current methods to find solutions are not "efficiently scalable" (as described more formally below). More importantly, if the NP class is larger than P, then no efficiently scalable solutions exist for these problems.
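The asymmetry between checking and finding a solution can be seen with subset sum, a classic NP-complete problem. The Python sketch below (ours, for illustration; it assumes distinct numbers) verifies a candidate in linear time but falls back on trying every subset to find one:

```python
from itertools import combinations

def verify(numbers, target, candidate):
    """Checking a proposed solution is quick: sum it and compare."""
    return set(candidate) <= set(numbers) and sum(candidate) == target

def solve(numbers, target):
    """Finding a solution with known methods is slow: this brute force
    may examine all 2**len(numbers) subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

`verify` scales efficiently with the input; `solve` does not, which is exactly the gap the P versus NP question asks about.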

The openness of the P-NP problem prompts and justifies various research areas in computational complexity theory, such as the identification of efficiently solvable special cases of common computational problems, the study of the computational complexity of finding approximate or heuristic solutions, and research into hierarchies of complexity classes.

Reference:
http://en.wikipedia.org/wiki/Computational_complexity_theory

Posted by Unknown user

Reflection (computer science)

In computer science, reflection is the process by which a computer program can observe and modify its own structure and behavior. The programming paradigm driven by reflection is called reflective programming.

At the lowest level, machine code can be treated reflectively because the distinction between instruction and data becomes just a matter of how the information is treated by the computer. Normally, 'instructions' are 'executed' and 'data' is 'processed'; however, a program can also treat instructions as data and thereby make reflective modifications. Reflection is most common in high-level virtual machine programming languages like Smalltalk, and less common in lower-level programming languages like C.
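In a high-level language such as Python, reflection looks like this (a minimal sketch; the class and names are invented for illustration):

```python
class Greeter:
    def hello(self, name):
        return "Hello, " + name

g = Greeter()

# Observe the object's structure at runtime.
methods = [m for m in dir(g) if not m.startswith("_")]

# Look a method up by name and call it, without naming it in the source.
method = getattr(g, "hello")
result = method("world")

# Modify behavior at runtime: attach a new method to the class.
Greeter.goodbye = lambda self, name: "Goodbye, " + name
```

The program inspects itself (`dir`), dispatches by name (`getattr`), and rewrites its own structure (assigning a new method), all while running.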

Common types of reflection are runtime and dynamic, but some programming languages support compile time or static reflection.

Reference:
http://en.wikipedia.org/wiki/Reflection_%28computer_science%29

Posted by Unknown user

Formal methods

Computer/Terms 2008. 3. 27. 10:34

In computer science and software engineering, formal methods are mathematically-based techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analyses can contribute to the reliability and robustness of a design. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity systems, where safety or security is important.
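A toy flavor of the idea, far simpler than real formal methods (the example and names are ours): state a mathematical specification, then check an implementation against it over an exhaustive finite domain.

```python
def absolute(x):
    """The implementation under scrutiny."""
    return x if x >= 0 else -x

def meets_spec(x):
    """Specification: the result is nonnegative and equals x or -x."""
    y = absolute(x)
    return y >= 0 and (y == x or y == -x)

# Exhaustive check over a finite input domain -- a toy stand-in for the
# machine-checked proofs and model checking used by real formal methods.
verified = all(meets_spec(x) for x in range(-10000, 10001))
```

Real formal methods replace this finite check with proofs or model checking that cover every possible input, which is where the cost, and the assurance, comes from.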

Reference:
http://en.wikipedia.org/wiki/Formal_methods
Posted by Unknown user

Effective descriptive set theory

Effective descriptive set theory is the branch of descriptive set theory dealing with sets of reals having lightface definitions; that is, definitions that do not require an arbitrary real parameter. Thus effective descriptive set theory combines descriptive set theory with recursion theory.

Reference:
http://en.wikipedia.org/wiki/Effective_descriptive_set_theory
Posted by Unknown user

Proof theory

Computer/Terms 2008. 3. 27. 10:12

Proof theory is a branch of mathematical logic that represents proofs as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively-defined data structures such as plain lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of the logical system. As such, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature. Together with model theory, axiomatic set theory, and recursion theory, proof theory is one of the so-called four pillars of the foundations of mathematics. Proof theory can also be considered a branch of philosophical logic, where the primary interest is in the idea of a proof-theoretic semantics, an idea which depends upon technical ideas in structural proof theory to be feasible.
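A proof as an inductively defined tree can be sketched as a small data structure (an illustration added here; the class and names are ours): each node records a conclusion, the rule of inference used, and subproofs of the premises.

```python
class Proof:
    """A proof as an inductively defined tree: a conclusion derived from
    subproofs of the premises by a named rule of inference."""
    def __init__(self, conclusion, rule, premises=()):
        self.conclusion = conclusion
        self.rule = rule
        self.premises = list(premises)

    def size(self):
        """Number of inference steps, computed by structural induction."""
        return 1 + sum(p.size() for p in self.premises)

# A two-premise derivation of B from A and A -> B by modus ponens.
axiom_a = Proof("A", "axiom")
axiom_ab = Proof("A -> B", "axiom")
derivation = Proof("B", "modus ponens", [axiom_a, axiom_ab])
```

Treating the proof as a concrete tree is what lets syntactic techniques, such as induction over proof structure, analyze it mathematically.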

Reference:
http://en.wikipedia.org/wiki/Proof_theory
Posted by Unknown user