
If and only if

Computer/Terms 2008. 4. 7. 19:31

In logic and in fields that rely on it, such as mathematics and philosophy, "if and only if" is a logical connective between statements meaning that the truth of either statement requires the truth of the other. Thus, either both statements are true or both are false. Put another way, the first statement is true whenever the second one is, and is true only under those conditions.

In writing, common alternatives to "if and only if" include "iff", "Q is necessary and sufficient for P", "P is equivalent to Q", "P precisely if Q", "P precisely (or exactly) when Q", "P exactly in case Q", and "P just in case Q". Many authors regard "iff" as unsuitable in formal writing; others use it freely.

The statement "(P iff Q)" is equivalent to the statement "not (P xor Q)" or "P == Q" in computer science.

In logic formulas, logical symbols such as ↔ and ⇔ are used instead of these phrases.

Reference:
http://en.wikipedia.org/wiki/If_and_only_if

Posted by 알 수 없는 사용자

Euclidean algorithm

Computer/Terms 2008. 4. 7.

In number theory, the Euclidean algorithm (also called Euclid's algorithm) is an algorithm to determine the greatest common divisor (GCD) of two elements of any Euclidean domain (for example, the integers). Its major significance is that it does not require factoring the two integers; it is also one of the oldest algorithms known, dating back to the ancient Greeks.

History of the Euclidean algorithm
The Euclidean algorithm is one of the oldest algorithms known, since it appeared in Euclid's Elements around 300 BC (7th book, Proposition 2). Euclid originally formulated the problem geometrically, as the problem of finding the greatest common "measure" for two line lengths (a line that could be used to measure both lines without a remainder), and his algorithm proceeded by repeated subtraction of the shorter from the longer segment. However, the algorithm was probably not discovered by Euclid and it may have been known up to 200 years earlier. It was almost certainly known by Eudoxus of Cnidus (about 375 BC), and Aristotle (about 330 BC) hinted at it in his Topics, 158b, 29–35.

Description of the algorithm
Given two natural numbers a and b, not both equal to zero: check whether b is zero; if so, a is the gcd. If not, repeat the process with b and the remainder after dividing a by b. The remainder after dividing a by b is usually written as a mod b.

These algorithms can be used in any context where division with remainder is possible. This includes rings of polynomials over a field as well as the ring of Gaussian integers, and in general all Euclidean domains. The algorithm carries over to these more general settings essentially unchanged.

Using recursion
Using recursion, the algorithm can be expressed:

 function gcd(a, b)
     if b = 0 return a
     else return gcd(b, a mod b)

Using iteration
An efficient iterative method, useful when the compiler does not optimize tail recursion:

 function gcd(a, b)
     while b ≠ 0
         t := b
         b := a mod b
         a := t
     return a
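
For reference, the two versions transcribe directly into runnable Python (`%` is Python's mod operator):

 def gcd_recursive(a, b):
     # gcd(a, 0) = a; otherwise recurse on (b, a mod b).
     return a if b == 0 else gcd_recursive(b, a % b)

 def gcd_iterative(a, b):
     # The same computation without recursion.
     while b != 0:
         a, b = b, a % b
     return a

 print(gcd_recursive(1071, 1029), gcd_iterative(1071, 1029))  # 21 21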

The extended Euclidean algorithm
Main article: extended Euclidean algorithm
By keeping track of the quotients occurring during the algorithm, one can also determine integers p and q with ap + bq = gcd(a, b). This is known as the extended Euclidean algorithm.
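
The idea can be sketched recursively in Python (the function name and the (g, p, q) return convention are just this sketch's choices):

 def extended_gcd(a, b):
     # Returns (g, p, q) with a*p + b*q = g = gcd(a, b).
     if b == 0:
         return a, 1, 0  # gcd(a, 0) = a = a*1 + b*0
     g, p, q = extended_gcd(b, a % b)
     # From b*p + (a mod b)*q = g and a mod b = a - (a // b)*b,
     # it follows that a*q + b*(p - (a // b)*q) = g.
     return g, q, p - (a // b) * q

 g, p, q = extended_gcd(1071, 1029)
 print(g, 1071 * p + 1029 * q)  # 21 21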

Original algorithm
The original algorithm as described by Euclid treated the problem geometrically, using repeated subtraction rather than mod (remainder).

 function gcd(a, b)
     if a = 0 return b
     while b ≠ 0
         if a > b
             a := a − b
         else
             b := b − a
     return a

An example
As an example, consider computing the gcd of 1071 and 1029, which is 21. Recall that “mod” means “the remainder after dividing.”

With the recursive algorithm:

   gcd(1071, 1029)   The initial arguments
 = gcd(1029, 42)     The second argument is 1071 mod 1029
 = gcd(42, 21)       The second argument is 1029 mod 42
 = gcd(21, 0)        The second argument is 42 mod 21
 = 21                Since b = 0, we return a

With the iterative algorithm:

 a     b     Explanation
 1071  1029  Step 1: the initial inputs
 1029  42    Step 2: the remainder of 1071 divided by 1029 is 42, which becomes the new b, while the old divisor 1029 becomes the new a
 42    21    Step 3: repeat the loop, dividing 1029 by 42; the remainder is 21
 21    0     Step 4: repeat again; 42 is divisible by 21, so the remainder is 0 and the algorithm terminates. The value on the left, 21, is the gcd

Observe that a ≥ b in each call. If initially b > a, there is no problem: the first iteration effectively swaps the two values.

Proof
Suppose a and b are the natural numbers whose gcd has to be determined. Now, suppose b > 0, and the remainder of the division of a by b is r. Therefore a = qb + r where q is the quotient of the division.

Any common divisor of a and b is also a divisor of r. To see why this is true, consider that r can be written as r = a − qb. Now, if there is a common divisor d of a and b such that a = sd and b = td, then r = (s−qt)d. Since all these numbers, including s−qt, are whole numbers, it can be seen that r is divisible by d.

The above analysis holds for any common divisor d, so every common divisor of a and b is also a common divisor of b and r. Conversely, since a = qb + r, any common divisor of b and r also divides a. The pairs (a, b) and (b, r) therefore have exactly the same common divisors, and in particular the same greatest common divisor. It is thus enough to continue the search with the numbers b and r. Since r is smaller than b, we reach r = 0 after finitely many steps.

Reference:
http://en.wikipedia.org/wiki/Euclidean_Algorithm

Posted by 알 수 없는 사용자

Polynomial time

Computer/Terms 2008. 4. 7. 19:25

In computational complexity theory, polynomial time refers to the computation time of a problem where the run time, m(n), is no greater than a polynomial function of the problem size, n. Written mathematically using big O notation, this states that m(n) = O(n^k) where k is some constant that may depend on the problem. For example, the quicksort sorting algorithm on n integers performs at most An^2 operations for some constant A. Thus it runs in time O(n^2) and is a polynomial time algorithm.

Mathematicians sometimes use the notion of "polynomial time on the length of the input" as a definition of a "fast" or "feasible" computation (see Cobham's thesis), as opposed to "super-polynomial time", which is anything slower than that. Exponential time is one example of super-polynomial time.

The complexity class of decision problems that can be solved on a deterministic sequential machine in polynomial time is known as P. The class of decision problems that can be verified in polynomial time is known as NP. Equivalently, NP is the class of decision problems that can be solved in polynomial time on a non-deterministic Turing machine (NP stands for Nondeterministic Polynomial time).

P is the smallest time-complexity class on a deterministic machine which is robust in terms of machine model changes. (For example, a change from a single-tape Turing machine to a multi-tape machine can lead to a quadratic speedup, but any algorithm that runs in polynomial time under one model also does so on the other.) P is also the smallest class closed under composition of subproblems. Any given abstract machine will have a complexity class corresponding to the problems which can be solved in polynomial time on that machine.

Some texts use the term weakly polynomial run time. This means that run time is polynomial not in the size of the input, but in the numerical value of the input, which may be exponentially larger. For example, the Euclidean Algorithm is only weakly polynomial when implemented using subtraction.

Similarly, running in strongly polynomial time means that the algorithm's run time is independent of the numerical data size and depends only on the inherent dimensions of the problem. For example, an algorithm which could sort n integers each less than k in time O(n^2) would be strongly polynomial, while an algorithm sorting them in time O(nk) would be weakly polynomial (because an integer less than k can be represented in size logarithmic in k).
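
As an illustration (a sketch of this contrast, not from the source article), counting sort has exactly this O(n + k) flavor: its run time grows with the numeric bound k, which can be exponential in the number of bits needed to write k down:

 def counting_sort(xs, k):
     # Sorts n integers in the range [0, k) in O(n + k) time. The k term
     # depends on the numeric value of the bound, not on its bit length,
     # so by the definition above this is only weakly polynomial.
     counts = [0] * k
     for x in xs:
         counts[x] += 1
     out = []
     for value, count in enumerate(counts):
         out.extend([value] * count)
     return out

 print(counting_sort([5, 1, 4, 1, 3], 6))  # [1, 1, 3, 4, 5]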

Reference:
http://en.wikipedia.org/wiki/Polynomial_time

Posted by 알 수 없는 사용자

Second-order logic

Computer/Terms 2008. 4. 7.

In logic and mathematics, second-order logic is an extension of first-order logic, which is itself an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory.

First-order logic uses only individual variables (e.g. a variable x that ranges over people), whereas second-order logic also uses variables that range over sets of individuals. For example, the second-order sentence ∀P ∀x (x ∈ P ∨ x ∉ P) says that for every set P of people and every person x, either x is in P or it is not (this is the principle of bivalence). Second-order logic also includes variables quantifying over functions, among others. Both first-order and second-order logic use the idea of a domain of discourse (often called simply the "domain" or the "universe"). The domain is a set of individual elements which can be quantified over.
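
Over a finite domain, second-order quantification can be simulated by enumerating all subsets, which makes the example sentence easy to check mechanically. A toy Python sketch (illustrative only):

 from itertools import chain, combinations

 # Check ∀P ∀x (x ∈ P ∨ x ∉ P) over a small finite domain:
 # the set variable P ranges over every subset of the domain.
 domain = [0, 1, 2]
 subsets = chain.from_iterable(
     combinations(domain, r) for r in range(len(domain) + 1))
 print(all(x in P or x not in P for P in map(set, subsets) for x in domain))
 # True: each individual is either in a given set or not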

Reference:
http://en.wikipedia.org/wiki/Second-order_logic

Posted by 알 수 없는 사용자

Principle of bivalence

Computer/Terms 2008. 4. 7.

In logic, the semantic principle of bivalence states that every proposition takes exactly one of two truth values (e.g. truth or falsehood). The related laws of excluded middle and non-contradiction concern the calculus of logic, not its semantics, and are hence not the same as bivalence. The principle of bivalence is compatible with classical logic, but not with intuitionistic logic, linear logic, or multi-valued logic.

Reference:
http://en.wikipedia.org/wiki/Principle_of_bivalence

Posted by 알 수 없는 사용자

Peano axioms

Computer/Terms 2008. 4. 7. 19:02

In mathematical logic, the Peano axioms, also known as the Dedekind-Peano axioms or the Peano postulates, are a set of axioms for the natural numbers presented by the 19th century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of consistency and completeness of number theory.

The need for formalism in arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction. In 1888, Richard Dedekind proposed a collection of axioms about the numbers, and in 1889 Peano published a more precisely formulated version of them as a collection of axioms in his book, The principles of arithmetic presented by a new method (Latin: Arithmetices principia, nova methodo exposita).

The Peano axioms contain three types of statements. The first four statements are general statements about equality; in modern treatments these are often considered axioms of pure logic. The next four axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth and final axiom is a second-order statement of the principle of mathematical induction over the natural numbers. A weaker first-order system called Peano arithmetic is obtained by replacing this second-order induction axiom with a first-order axiom schema.
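
The flavor of Grassmann's reduction can be sketched in a few lines of Python, encoding naturals as nested tuples and defining addition by recursion on the successor (the encoding is purely this sketch's choice):

 ZERO = ()

 def succ(n):
     return (n,)  # the successor wraps its argument in a tuple

 def add(m, n):
     # m + 0 = m;  m + S(n) = S(m + n)
     return m if n == ZERO else succ(add(m, n[0]))

 def to_int(n):
     return 0 if n == ZERO else 1 + to_int(n[0])

 two = succ(succ(ZERO))
 three = succ(two)
 print(to_int(add(two, three)))  # 5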

Reference:
http://en.wikipedia.org/wiki/Peano_arithmetic#Arithmetic

Posted by 알 수 없는 사용자

Recursively enumerable set

Computer/Terms 2008. 4. 7.

In computability theory, traditionally called recursion theory, a set S of natural numbers is called recursively enumerable, computably enumerable, semidecidable, provable or Turing-recognizable if:

There is an algorithm that, when given an input number, eventually halts if and only if the input is an element of S.
Or, equivalently,

There is an algorithm that enumerates the members of S. That means that its output is simply a list of the members of S: s_1, s_2, s_3, ... . If necessary, this algorithm may run forever.
The first condition suggests why the term semidecidable is sometimes used; the second suggests why computably enumerable is used. The abbreviations r.e. and c.e. are often used, even in print, instead of the full phrase.
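
The shape of the two procedures can be illustrated with a deliberately simple S, the set of perfect squares (which is of course fully decidable; only the structure of the programs matters here). A Python sketch:

 def semidecide(n):
     # Halts (returns True) iff n is a perfect square: search for a witness k.
     k = 0
     while True:
         if k * k == n:
             return True   # n is in S: halt
         if k * k > n:
             while True:   # n is not in S: run forever instead of halting
                 pass
         k += 1

 def enumerate_S():
     # Prints the members s_1, s_2, s_3, ... and runs forever.
     k = 0
     while True:
         print(k * k)
         k += 1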

In computational complexity theory, the complexity class containing all recursively enumerable sets is RE. In recursion theory, the lattice of r.e. sets under inclusion is denoted ℰ.

Reference:
http://en.wikipedia.org/wiki/Recursively_enumerable

Posted by 알 수 없는 사용자

RE (complexity)

Computer/Terms 2008. 4. 7. 18:45

RE (recursively enumerable) is the class of decision problems for which a 'yes' answer can be verified by a Turing machine in a finite amount of time. (If the answer is 'no,' on the other hand, the machine might never halt.)

Equivalently, RE is the class of decision problems for which a Turing machine can list all the 'yes' instances, one by one (this is what 'enumerable' means).

For the class of the 'no' answers, see co-RE.

Each member of RE is a recursively enumerable set.

Reference:
http://en.wikipedia.org/wiki/RE_%28complexity%29

Posted by 알 수 없는 사용자

Saving a chart as an image in Excel

Computer 2008. 4. 7.

To save a chart in Excel as an image, I right-clicked the chart and read through the popup menu carefully, but there was no answer. Digging through the Edit menu turned up nothing either.

In the end I had to get it with the following workaround:

File -> Save as Web Page
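
For what it's worth, a chart can also be exported programmatically through Excel's COM object model, whose Chart object has an Export method. A hedged Python sketch using pywin32 (untested; the file paths are placeholders and Excel must be installed):

 import win32com.client  # pywin32

 excel = win32com.client.Dispatch("Excel.Application")
 wb = excel.Workbooks.Open(r"C:\work\book.xls")     # placeholder path
 chart = wb.Worksheets(1).ChartObjects(1).Chart     # first chart on sheet 1
 chart.Export(r"C:\work\chart.png", "PNG")          # Chart.Export writes an image file
 wb.Close(SaveChanges=False)
 excel.Quit()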

Posted by 알 수 없는 사용자

Gamma correction

Computer/Terms 2008. 4. 7. 14:33

Gamma correction, gamma nonlinearity, gamma encoding, or often simply gamma, is the name of a nonlinear operation used to code and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression:

V_out = V_in^γ

where the input and output values are non-negative real values, typically in a predetermined range such as 0 to 1. A gamma value γ < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely, a gamma value γ > 1 is called a decoding gamma, and the application of the expansive power-law nonlinearity is called gamma expansion.
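
As a minimal sketch, the power law and its inverse in Python (2.2 is used purely as an illustrative decoding gamma):

 def gamma_correct(v_in, gamma):
     # V_out = V_in ** gamma, for v_in in [0, 1]
     return v_in ** gamma

 encoded = gamma_correct(0.5, 1 / 2.2)  # gamma < 1: compression (encoding)
 decoded = gamma_correct(encoded, 2.2)  # gamma > 1: expansion (decoding)
 print(encoded, decoded)                # decoded is approximately 0.5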

Reference:
http://en.wikipedia.org/wiki/Gamma_correction

Posted by 알 수 없는 사용자