568 posts in the 'Computer' category

  1. 2008.04.08 Deterministic finite-state machine by Unknown user
  2. 2008.04.08 Nondeterministic finite state machine by Unknown user
  3. 2008.04.08 Powerset construction by Unknown user
  4. 2008.04.08 Integer factorization by Unknown user
  5. 2008.04.08 Travelling salesman problem by Unknown user
  6. 2008.04.08 Gap theorem by Unknown user
  7. 2008.04.08 Blum's speedup theorem by Unknown user
  8. 2008.04.08 Automata theory by Unknown user
  9. 2008.04.07 Lemma (mathematics) by Unknown user
  10. 2008.04.07 Polynomial-time reduction by Unknown user

In the theory of computation, a deterministic finite state machine or deterministic finite automaton (DFA) is a finite state machine in which, for each pair of state and input symbol, there is exactly one transition to a next state. DFAs recognize exactly the set of regular languages and no others.

A DFA takes in a string of input symbols. For each input symbol, it transitions to the state given by its transition function. When the last input symbol has been read, the DFA either accepts or rejects the string, depending on whether it has ended in an accepting state or a non-accepting state.
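As a concrete sketch (our own illustration, not from the source article), a DFA can be simulated in a few lines of Python; the transition-table encoding and the even-parity example below are assumptions made for the example.

    # Minimal DFA simulation: 'transitions' maps (state, symbol) -> next state.
    def dfa_accepts(transitions, start, accepting, string):
        state = start
        for symbol in string:
            state = transitions[(state, symbol)]  # exactly one next state
        return state in accepting

    # Illustrative DFA over {'0', '1'} accepting strings with an even number of '1's.
    transitions = {
        ('even', '0'): 'even', ('even', '1'): 'odd',
        ('odd', '0'): 'odd',   ('odd', '1'): 'even',
    }
    print(dfa_accepts(transitions, 'even', {'even'}, '1011'))  # False (three '1's)
    print(dfa_accepts(transitions, 'even', {'even'}, '1001'))  # True (two '1's)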

Reference:
http://en.wikipedia.org/wiki/Deterministic_finite_automaton

Posted by Unknown user

In the theory of computation, a nondeterministic finite state machine or nondeterministic finite automaton (NFA) is a finite state machine where for each pair of state and input symbol there may be several possible next states. This distinguishes it from the deterministic finite automaton (DFA), where the next state is uniquely determined. Although the DFA and NFA have distinct definitions, they can be shown to be formally equivalent: for any given NFA one may construct an equivalent DFA, and vice versa, via the powerset construction. Both types of automata recognize only regular languages. Nondeterministic finite state machines are sometimes studied under the name subshifts of finite type. They are generalized by probabilistic automata, which assign a probability to each state transition.
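As a hedged illustration (our own, not from the source), nondeterminism can be simulated directly by tracking the set of all states the machine could currently be in; the encoding below assumes no epsilon-transitions.

    # Minimal NFA simulation: 'transitions' maps (state, symbol) -> set of next states.
    def nfa_accepts(transitions, start, accepting, string):
        states = {start}
        for symbol in string:
            states = set().union(*(transitions.get((s, symbol), set()) for s in states))
        return bool(states & accepting)

    # Illustrative NFA over {'0', '1'} accepting strings that end in '01'.
    transitions = {
        ('q0', '0'): {'q0', 'q1'}, ('q0', '1'): {'q0'},
        ('q1', '1'): {'q2'},
    }
    print(nfa_accepts(transitions, 'q0', {'q2'}, '1101'))  # True
    print(nfa_accepts(transitions, 'q0', {'q2'}, '0110'))  # False

This set-of-states view is exactly what the powerset construction (next post) turns into a DFA.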

Nondeterministic finite automata were introduced in 1959 by Michael O. Rabin and Dana Scott, who also showed their equivalence to deterministic automata.

Reference:
http://en.wikipedia.org/wiki/Nondeterministic_finite_state_machine

Posted by Unknown user

In the theory of computation, the powerset construction or subset construction is a standard method for converting a nondeterministic finite automaton (NFA) into a deterministic finite automaton (DFA) which recognizes the same formal language. It is important in theory because it establishes that NFAs, despite their additional flexibility, are unable to recognize any language that cannot be recognized by some DFA. It is also important in practice for converting easier-to-construct NFAs into more efficiently executable DFAs. However, if the NFA has n states, the resulting DFA may have up to 2^n states, exponentially more, which sometimes makes the construction impractical for large NFAs.
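A minimal sketch of the subset construction (assuming the set-valued NFA encoding from the previous post and no epsilon-transitions; the function and variable names are ours):

    # Build DFA states as frozensets of NFA states, exploring only the
    # subsets reachable from the start state.
    def nfa_to_dfa(transitions, start, accepting, alphabet):
        dfa_start = frozenset({start})
        dfa_trans, seen, worklist = {}, {dfa_start}, [dfa_start]
        while worklist:
            current = worklist.pop()
            for symbol in alphabet:
                nxt = frozenset().union(*(transitions.get((s, symbol), set()) for s in current))
                dfa_trans[(current, symbol)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    worklist.append(nxt)
        dfa_accepting = {S for S in seen if S & accepting}
        return dfa_trans, dfa_start, dfa_accepting

Because only reachable subsets are constructed, the output is often far smaller than the 2^n worst case.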

Reference:
http://en.wikipedia.org/wiki/Powerset_construction
Posted by Unknown user

In number theory, integer factorization is the process of breaking down a composite number into smaller non-trivial divisors, which when multiplied together equal the original integer.

When the numbers are very large, no efficient integer factorization algorithm is publicly known; a 2005 effort that factored a 200-digit number (RSA-200) took eighteen months and used over half a century of computer time. The presumed difficulty of this problem is at the heart of certain algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing.

Not all numbers of a given length are equally hard to factor. The hardest instances of this problem (for currently known techniques) are semiprimes, i.e. products of two distinct prime numbers. When the two primes are both large, randomly chosen, and about the same size (but not too close), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical.
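For small numbers the problem is easy; a trial-division sketch in Python (our own illustration) factors any integer, but its running time grows roughly with the square root of n, which is hopeless for numbers of cryptographic size:

    # Trial division: repeatedly divide out the smallest remaining factor.
    def factorize(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)  # whatever remains is prime
        return factors

    print(factorize(8633))  # [89, 97], a small semiprime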

Reference:
http://en.wikipedia.org/wiki/Integer_factorization

Posted by Unknown user

The travelling salesman problem (TSP) is a problem in discrete or combinatorial optimization: given a list of cities and the distances between each pair of them, find a shortest possible tour that visits each city exactly once and returns to the starting city. It is a prominent illustration of a class of problems in computational complexity theory which are classified as NP-hard.

Mathematical problems related to the travelling salesman problem were treated in the 1800s by the Irish mathematician Sir William Rowan Hamilton and by the British mathematician Thomas Kirkman. A discussion of the early work of Hamilton and Kirkman can be found in Graph Theory 1736-1936. The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger. The problem was later taken up by Hassler Whitney and Merrill M. Flood at Princeton. A detailed treatment of the connection between Menger and Whitney, as well as the growth in the study of the TSP, can be found in Alexander Schrijver's 2005 paper "On the history of combinatorial optimization (till 1960)".
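To make the problem statement concrete, here is a brute-force sketch in Python (our own illustration; the distance matrix is invented for the example). Trying all (n-1)! tours is only feasible for tiny inputs, which underlines why the TSP is a showcase NP-hard problem:

    from itertools import permutations

    # Exhaustive search: fix city 0 as the start and try every ordering
    # of the remaining cities.
    def shortest_tour(dist):
        n = len(dist)
        best_cost, best_tour = float('inf'), None
        for perm in permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(shortest_tour(dist))  # (18, (0, 1, 3, 2, 0))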

Reference:
http://en.wikipedia.org/wiki/Travelling_salesman_problem
Posted by Unknown user

Gap theorem


In computational complexity theory the Gap theorem is a major theorem about the complexity of computable functions. It essentially states that there are arbitrarily large computable gaps in the hierarchy of complexity classes. For any computable function that represents an increase in computational resources, one can find a resource bound such that the set of functions computable within the expanded resource bound is the same as the set computable within the original bound.
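For time-bounded computation, a standard textbook formulation (our paraphrase, not quoted from the source post) reads, in LaTeX notation:

    % For every total computable g with g(x) >= x, there is a total
    % computable time bound t such that the gap between t and g(t) is empty:
    \forall g \;\exists t : \mathrm{DTIME}(t(n)) = \mathrm{DTIME}(g(t(n)))

In words: blowing the time bound up from t to g(t), for an arbitrarily fast-growing computable g, decides no new problems.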

The theorem was proved independently by Boris Trakhtenbrot in 1964 and Allan Borodin in 1972. The theorem can be proved by using the Blum axioms without any reference to a concrete computational model, so it applies to time, space, or any other reasonable complexity measure.

Reference:
http://en.wikipedia.org/wiki/Gap_theorem

Posted by Unknown user

In computational complexity theory Blum's speedup theorem, first stated by Manuel Blum in 1967, is an important theorem about the complexity of computable functions.

Each computable function has an infinite number of different program representations in a given programming language. In the theory of algorithms one often needs to find the program with the smallest complexity for a given computable function and a given complexity measure. Blum's speedup theorem states that for any complexity measure there are computable functions which have no smallest program.
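In the usual formulation with a Blum complexity measure Phi (our paraphrase of the standard statement, not quoted from the source post), in LaTeX notation:

    % For every total computable speedup function g there is a computable
    % predicate f such that every program i for f is outrun by some
    % program j for f on almost all inputs:
    \forall g \;\exists f \;\forall i \;\exists j :
        g(x, \Phi_j(x)) \le \Phi_i(x) \quad \text{for almost all } x

Since the speedup applies to every program for f, no single program for f can be even close to optimal.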

Reference:
http://en.wikipedia.org/wiki/Blum%27s_speedup_theorem

Posted by Unknown user

Automata theory


In theoretical computer science, automata theory is the study of abstract machines and the problems they are able to solve. Automata theory is closely related to formal language theory, as automata are often classified by the class of formal languages they are able to recognize.

An automaton is a mathematical model for a finite state machine (FSM). A FSM is a machine that, given an input of symbols, "jumps" through a series of states according to a transition function (which can be expressed as a table). In the common "Mealy" variety of FSMs, this transition function tells the automaton which state to go to next given a current state and a current symbol.

The input is read symbol by symbol, until it is consumed completely (think of it as a tape with a word written on it, that is read by a reading head of the automaton; the head moves forward over the tape, reading one symbol at a time). Once the input is depleted, the automaton is said to have stopped.

Depending on the state in which the automaton stops, it is said to either accept or reject the input. If it halts in an accept state, the automaton accepts the word; if it halts in a reject state, the word is rejected. The set of all the words accepted by an automaton is called the language accepted by the automaton.
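As a small illustration (our own, reusing the dfa_accepts sketch from the first post above), the accepted language can be enumerated word by word up to a given length:

    from itertools import product

    # Enumerate all words of length <= max_len that the DFA accepts.
    def language_up_to(transitions, start, accepting, alphabet, max_len):
        words = []
        for length in range(max_len + 1):
            for word in product(alphabet, repeat=length):
                if dfa_accepts(transitions, start, accepting, word):
                    words.append(''.join(word))
        return words

For the even-parity DFA from the first post, this yields '', '0', '00', '11', and so on.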

Note, however, that an automaton need not, in general, have a finite number of states, or even a countable number of states. Thus, for example, the quantum finite automaton has an uncountable infinity of states, as the set of all possible states is the set of all points in complex projective space. In this sense, quantum finite automata, as well as finite state machines, are special cases of a more general idea, that of a topological automaton, where the set of states is a topological space, and the state transition functions are taken from the set of all possible functions on the space. Topological automata are often called M-automata, and are simply the augmentation of a semiautomaton with a set of accept states, where set intersection determines whether the initial state is accepted or rejected.

In general, an automaton need not strictly accept or reject an input; it may accept it with some probability between zero and one. Again this is illustrated by the quantum finite automaton, which only accepts input with some probability. This idea is again a special case of a more general notion, the geometric automaton or metric automaton, where the set of states is a metric space, and a language is accepted by the automaton if the distance between the initial point and the set of accept states is sufficiently small with respect to the metric.

Automata play a major role in compiler design and parsing.

Reference:
http://en.wikipedia.org/wiki/Automata_Theory

Posted by Unknown user

In mathematics, a lemma (plural lemmata or lemmas; from the Greek λήμμα, meaning "anything which is received, such as a gift, profit, or a bribe") is a proven proposition which is used as a stepping stone to a larger result rather than as a statement of interest in and of itself. A good stepping stone leads to many others, so some of the most powerful results in mathematics are known as lemmata, such as Bézout's lemma, Dehn's lemma, Fatou's lemma, Gauss's lemma, Nakayama's lemma, Riesz's lemma, and Zorn's lemma. There is no formal distinction between a lemma and a theorem.

Reference:
http://en.wikipedia.org/wiki/Lemma_%28mathematics%29
Posted by Unknown user

In computational complexity theory a polynomial-time reduction is a reduction which is computable by a deterministic Turing machine in polynomial time. If it is a many-one reduction, it is called a polynomial-time many-one reduction, polynomial transformation, or Karp reduction. If it is a Turing reduction, it is called a polynomial-time Turing reduction or Cook reduction.
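A classic example of a Karp reduction (our own illustration, with an invented graph encoding): a set of vertices is independent in a graph G exactly when it is a clique in the complement of G, so INDEPENDENT-SET reduces to CLIQUE by complementing the edge set, which clearly takes only polynomial time:

    # Karp reduction: (G, k) in INDEPENDENT-SET  iff  (complement(G), k) in CLIQUE.
    # A graph is (n, edges), with vertices 0..n-1 and edges given as frozenset pairs.
    def independent_set_to_clique(n, edges, k):
        all_pairs = {frozenset((u, v)) for u in range(n) for v in range(u + 1, n)}
        return n, all_pairs - set(edges), k  # same vertices and k, complemented edges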

Polynomial-time reductions are important and widely used because they are powerful enough to perform many transformations between important problems, but still weak enough that polynomial-time reductions from problems in NP or co-NP to problems in P are considered unlikely to exist. This notion of reducibility is used in the standard definitions of several complete complexity classes, such as NP-complete, PSPACE-complete and EXPTIME-complete.

Within the class P, however, polynomial-time reductions are inappropriate, because any problem in P can be polynomial-time reduced (both many-one and Turing) to any non-trivial problem in P. Thus, for classes within P such as L, NL, NC, and P itself, log-space reductions are used instead.

If a problem has a Karp reduction to a problem in NP, this shows that the problem is in NP. Cook reductions seem to be more powerful than Karp reductions; for example, any problem in co-NP has a Cook reduction to any NP-complete problem, whereas any problems that are in co-NP - NP (assuming they exist) will not have Karp reductions to any problem in NP. While this power is useful for designing reductions, the downside is that certain classes such as NP are not known to be closed under Cook reductions (and are widely believed not to be), so they are not useful for proving that a problem is in NP. However, they are useful for showing that problems are in P and other classes that are closed under such reductions.

Reference:
http://en.wikipedia.org/wiki/Polynomial-time_reduction

Posted by Unknown user