Data recovery is the process of salvaging data from damaged, failed, corrupted, or otherwise inaccessible secondary storage media when it cannot be accessed normally. The data are often salvaged from storage media such as hard disk drives, storage tapes, CDs, DVDs, RAID arrays, and other electronic devices. Recovery may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.
The most common "data recovery" issue involves an operating system (OS) failure (typically on a single-disk, single-partition, single-OS system), where the goal is to simply copy all wanted files to another disk. This can be easily accomplished with a Live CD, most of which provide a means to 1) mount the system drive, 2) mount and backup disk or media drives, and 3) move the files from the system to the backup with a file manager or optical disc authoring software. Further, such cases can be mitigated by disk partitioning and consistently moving valuable data files to a different partition from the replaceable OS system files.
The second type involves a disk-level failure, such as a compromised file system, a damaged partition table, or a hard disk failure, in each of which the data cannot be easily read. Depending on the case, solutions involve repairing the file system, partition table, or master boot record (MBR), or hard disk recovery techniques ranging from software-based recovery of corrupted data to hardware replacement on a physically damaged disk. Physical damage typically indicates permanent failure of the disk, so "recovery" means sufficient repair for a one-time retrieval of the files.
A third type involves retrieving files that have been "deleted" from a storage medium; such files are usually not erased in any way but are merely removed from the directory listings.
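Because only the directory entry is removed, the file's contents usually remain on the disk until they are overwritten, so recovery tools can scan the raw device (or an image of it) for known file signatures, a technique called file carving. The following is a deliberately simplified sketch of the idea for JPEG files; it assumes a pre-made image file named disk.img, ignores fragmentation, and is far less robust than real carving tools.

# Minimal file-carving sketch: scan a raw disk image for JPEG start/end markers.
JPEG_START = b"\xff\xd8\xff"
JPEG_END = b"\xff\xd9"

def carve_jpegs(image_path: str, out_prefix: str = "recovered") -> int:
    with open(image_path, "rb") as f:
        data = f.read()              # fine for small images; real tools stream
    count = 0
    pos = 0
    while True:
        start = data.find(JPEG_START, pos)
        if start == -1:
            break
        end = data.find(JPEG_END, start)
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as out:
            out.write(data[start:end + 2])   # include the 2-byte end marker
        count += 1
        pos = end + 2
    return count

if __name__ == "__main__":
    print(carve_jpegs("disk.img"), "candidate JPEGs carved")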
Tuesday, November 24, 2009
Wednesday, September 16, 2009
Processor Kinds
Kinds of Processors
Processors can broadly be divided into the categories of: CISC, RISC, hybrid, and special purpose.
Complex Instruction Set Computers (CISC) have a large instruction set, with hardware support for a wide variety of operations. In scientific, engineering, and mathematical operations with hand coded assembly language (and some business applications with hand coded assembly language), CISC processors usually perform the most work in the shortest time.
Reduced Instruction Set Computers (RISC) have a small, compact instruction set. In most business applications and in programs created by compilers from high level language source, RISC processors usually perform the most work in the shortest time.
Hybrid processors are some combination of CISC and RISC approaches, attempting to balance the advantages of each approach.
Special purpose processors are optimized to perform specific functions. Digital signal processors and various kinds of co-processors are the most common kinds of special purpose processors.
Hypothetical processors are processors that don’t exist yet (and may never exist). Sometimes these are processors in the design phase. Sometimes these are processors used for theoretical work. The most famous hypothetical processor is MIX (or 1009), a hypothetical teaching processor created by Donald E. Knuth for presenting computer algorithms in his famous series “The Art of Computer Programming” (discussed in the Basic Concepts section of Volume I, Fundamental Algorithms).
Basics
A computer is a programmable machine. There are two basic kinds of computers: analog and digital.
Analog computers are analog devices. That is, they have continuous states rather than discrete numbered states. An analog computer can represent fractional or irrational values exactly, with no round-off. Analog computers are almost never used outside of experimental settings.
A digital computer is a programmable clocked sequential state machine. A digital computer uses discrete states. A binary digital computer uses two discrete states, such as positive/negative, high/low, on/off, used to represent the binary digits zero and one.
A computer contains three basic elements: a processor, memory, and I/O (input/output). The boundaries between these three are ambiguous, overlapping, and shift from one design to the next.
Processors
The processor is the part of the computer that actually does the computations. This is sometimes called an MPU (for main processor unit) or CPU (for central processing unit or central processor unit).
A processor typically contains an arithmetic/logic unit (ALU), control unit (including processor flags, flag register, or status register), internal buses, and sometimes special function units (the most common special function unit being a floating point unit for floating point arithmetic).
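To make the relationship between the ALU and the status register concrete, here is a small illustrative sketch in Python of an 8-bit addition that sets carry, zero, and negative flags. It models no particular real processor; the width and flag names are chosen only for the example.

# Toy 8-bit ALU addition that updates a few typical status flags.
def alu_add8(a: int, b: int):
    raw = (a & 0xFF) + (b & 0xFF)
    result = raw & 0xFF
    flags = {
        "carry": raw > 0xFF,              # the result did not fit in 8 bits
        "zero": result == 0,              # the result is exactly zero
        "negative": bool(result & 0x80),  # high bit set (two's-complement sign)
    }
    return result, flags

print(alu_add8(200, 100))  # (44, {'carry': True, 'zero': False, 'negative': False})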
Some computers have more than one processor. This is called multi-processing.
The major kinds of digital processors are: CISC, RISC, DSP, and hybrid.
CISC
CISC stands for Complex Instruction Set Computer. Mainframe computers and minicomputers were CISC processors, with manufacturers competing to offer the most useful instruction sets. Many of the first two generations of microprocessors were also CISC.
RISC
RISC stands for Reduced Instruction Set Computer. RISC came about as a result of academic research that showed that a small well designed instruction set running compiled programs at high speed could perform more computing work than a CISC running the same programs (although very expensive hand optimized assembly language favored CISC).
DSP
DSP stands for Digital Signal Processor. DSPs are used primarily in dedicated devices, such as modems, digital cameras, graphics cards, and other specialty hardware.
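A typical digital signal processing workload is filtering a stream of samples with a tight multiply-accumulate loop, which is exactly the kind of operation DSP hardware is optimized for. The short moving-average filter below is a generic illustration of that workload, not code for any particular DSP chip.

# A 4-tap moving-average filter: the kind of loop DSP hardware accelerates.
def moving_average(samples, taps=4):
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - taps + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 10.0, 1.0, 0.0, 1.0]  # a spike at index 4
print(moving_average(noisy))                        # the spike is smoothed out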
Hybrid
Hybrid processors combine elements of two or three of the major classes of processors.
Assembly Language
There are four general classes of machine instructions. Some instructions may have characteristics of more than one major group. The four general classes of machine instructions are: computation, data transfer, sequencing, and environment control.
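To make the four classes concrete, here is a tiny hypothetical interpreter whose instruction set contains one example from each class: MOV (data transfer), ADD (computation), JMP (sequencing), and HALT (environment control). The instruction names and machine model are invented for this sketch and do not describe any real processor.

# Toy machine with one instruction from each general class of machine instruction.
def run(program):
    regs = {"A": 0, "B": 0}
    pc = 0                                    # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "MOV":                       # data transfer: register <- constant
            regs[args[0]] = args[1]
        elif op == "ADD":                     # computation: dest <- dest + src
            regs[args[0]] += regs[args[1]]
        elif op == "JMP":                     # sequencing: continue at another instruction
            pc = args[0]
            continue
        elif op == "HALT":                    # environment control: stop the machine
            break
        pc += 1
    return regs

program = [
    ("MOV", "A", 5),
    ("MOV", "B", 7),
    ("ADD", "A", "B"),
    ("JMP", 5),
    ("MOV", "A", 0),   # skipped by the jump above
    ("HALT",),
]
print(run(program))    # {'A': 12, 'B': 7}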
Tuesday, July 7, 2009
Parts of a Computer
A computer can be divided into five basic subsystems:
Arithmetic and logic subsystem,
control subsystem,
main storage,
input subsystem, and
output subsystem.
These are the parts:
* processor
* arithmetic and logic
* control
* main storage
* external storage
* input
* output
Processor
The processor is the part of the computer that actually does the computations. This is sometimes called an MPU (for main processor unit) or CPU (for central processing unit or central processor unit).
Arithmetic and Logic
An arithmetic/logic unit (ALU) performs integer arithmetic and logic operations. It also performs shift and rotate operations and other specialized operations. Usually floating point arithmetic is performed by a dedicated floating point unit (FPU), which may be implemented as a co-processor.
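The difference between a shift and a rotate is what happens to the bits that fall off the end: a shift discards them (filling with zeros), while a rotate wraps them around to the other side. Python has shift operators but no fixed-width rotate, so the small sketch below builds an 8-bit rotate by hand, purely for illustration.

# 8-bit shift vs. rotate: a rotate wraps the bits that a shift would discard.
def rotl8(value, count):
    value &= 0xFF
    count %= 8
    return ((value << count) | (value >> (8 - count))) & 0xFF

x = 0b10010001
print(format((x << 1) & 0xFF, "08b"))  # shift left:  00100010 (top bit lost)
print(format(rotl8(x, 1), "08b"))      # rotate left: 00100011 (top bit wraps to bit 0)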
Control
Control units are in charge of the computer. Control units fetch and decode machine instructions. Control units may also control some external devices.
Main Storage
Main storage is also called memory or internal memory (to distinguish from external memory, such as hard drives).
External Storage
External storage (also called auxiliary storage) is any storage other than main memory.
Input/Output Overview
Most external devices are capable of both input and output (I/O). Some devices are inherently input-only (also called read-only) or inherently output-only (also called write-only). Regardless of whether a device is I/O, read-only, or write-only, external devices can be classified as block or character devices.
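In practice the difference shows up in how software reads and writes the device: a block device transfers fixed-size blocks, while a character device transfers a stream of bytes. The sketch below imitates both access patterns against an ordinary file, purely as an illustration; the 512-byte block size and the file name example.bin are arbitrary choices for the example.

# Illustrative only: read the same data the way a block device and a character
# (stream) device would be accessed.
BLOCK_SIZE = 512  # a common block size; real devices report their own

with open("example.bin", "wb") as f:     # create a small sample file to read back
    f.write(bytes(range(256)) * 5)       # 1280 bytes of sample data

def read_as_blocks(path):
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)   # one fixed-size block at a time
            if not block:
                break
            yield block

def read_as_characters(path):
    with open(path, "rb") as f:
        while True:
            ch = f.read(1)               # one byte at a time
            if not ch:
                break
            yield ch

print(len(list(read_as_blocks("example.bin"))), "blocks")     # 3 blocks (512 + 512 + 256 bytes)
print(len(list(read_as_characters("example.bin"))), "bytes")  # 1280 bytes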
Input
Input devices are devices that bring information into a computer.
Output
Output devices are devices that bring information out of a computer.
Tuesday, June 9, 2009
What a Computer Can and Can't Do
What a Computer Can Do:
1. Work
Lots of people use their home PCs for work-related purposes, such as writing reports and building spreadsheets, or to run a small business.
2. Play
Play some cool games, print pictures, listen to your favorite music, and so on.
3. Manage your finances
What a Computer Can't Do:
A computer is certainly not like a brain anatomically, physiologically, or pharmacologically; in other words, physically. But if the theory that cognition is computation is right, this does not matter, because once you capture cognition computationally, it doesn't matter how you implement it physically: the same software can run on many different kinds of hardware and still have the same capabilities, provided it is the right software.
Computers cannot do everything, not nearly everything, that people can. That is certainly true. What is not obvious is whether this is because of a basic limitation of computation, or just because we haven't yet come up with a good enough computational theory.
Monday, June 8, 2009
What is a Computer
Computers play a key role in how individuals work and how they live. Even the smallest organizations have computers to help them operate more efficiently, and many individuals use computers at home for educational, entertainment, and business purposes.
Nearly 5,000 years ago the abacus emerged in Asia Minor, and it may be considered the first computer. This device allowed its users to make computations using a system of sliding beads arranged on a rack. Early shopkeepers used the abacus to keep track of transactions. As the use of pencil and paper spread, the abacus lost its importance, and nearly twelve centuries passed before the next important advance in computing devices emerged.
In 1642, Blaise Pascal, the 18-year-old son of a French tax collector, invented what he called a numerical wheel calculator to help his father with his duties. The Pascaline, a brass rectangular box, used eight movable dials to add sums up to eight figures long. Pascal's device used a base of ten to achieve this. The disadvantage of the Pascaline, of course, was its limitation to addition. In 1694, Gottfried Wilhelm von Leibniz, a German mathematician and philosopher, improved on the Pascaline by creating a machine that could also multiply. Like its predecessor, Leibniz's mechanical multiplier worked by a system of gears and dials.
It wasn't until 1820, however, that mechanical calculators gained widespread use. A Frenchman, Charles Xavier Thomas de Colmar, invented a machine that could perform the four basic arithmetic functions. The arithmometer presented a more systematic approach to computing because it could add, subtract, multiply, and divide. With its enhanced versatility, the arithmometer was widely used up until World War I.
The real beginnings of computers as we know them came with an English mathematics professor, Charles Babbage. Babbage's proposed steam-powered Analytical Engine outlined the basic elements of a modern general-purpose computer and was a breakthrough concept. The Analytical Engine consisted of over 50,000 components. Its basic design included input devices in the form of perforated cards containing operating instructions and a "store" with memory for 1,000 numbers of up to 50 decimal digits each.
In 1889, an American inventor, Herman Hollerith, created a machine that used punched cards to store data, which were fed into a machine that compiled the results mechanically. Each punch on a card represented one number, and combinations of two punches represented one letter. As many as 80 variables could be stored on a single card. Hollerith brought his punch card reader into the business world, founding the Tabulating Machine Company in 1896, which became International Business Machines (IBM) in 1924 after a series of mergers. Other companies also manufactured punch card readers for business use, and both business and government used punched cards for data processing until the 1960s.
When World War II began, governments sought to develop computers to exploit their potential strategic importance. This increased funding for computer development projects and hastened technical progress. In 1941, the German engineer Konrad Zuse had developed a computer to design airplanes and missiles. The Allied forces, however, made greater strides in developing powerful computers, and American efforts produced a broader achievement. In 1944, Howard H. Aiken, a Harvard engineer working with IBM, succeeded in producing a large electromechanical calculator whose purpose was to create ballistic charts for the U.S. Navy. It was about half as long as a football field and contained about 500 miles of wiring, and it used electromagnetic signals to move mechanical parts. The machine was slow, taking 3-5 seconds per calculation, and inflexible, in that sequences of calculations could not change; but it could perform basic arithmetic as well as more complex equations.
Another computer development spurred by the war was the Electronic Numerical Integrator and Computer (ENIAC). Consisting of 18,000 vacuum tubes, 70,000 resistors, and 5 million soldered joints, it was such a massive piece of machinery that it consumed 160 kilowatts of electrical power. Developed by John Presper Eckert and John W. Mauchly, ENIAC was a general-purpose computer.
In 1945, John von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC) with a memory to hold both a stored program and data. This stored-program technique, along with "conditional control transfer," which allowed the computer to be stopped at any point and then resumed, allowed for greater versatility in computer programming. The key element of the von Neumann architecture was the central processing unit, which allowed all computer functions to be coordinated through a single source. In 1951, the UNIVAC I (Universal Automatic Computer), built by Remington Rand, became one of the first commercially available computers to take advantage of these advances. The earliest computers were characterized by the fact that operating instructions were made to order for the specific task for which each computer was to be used. Each computer had a different binary-coded program, called a machine language, that told it how to operate. This made computers difficult to program and limited their versatility and speed. Other distinguishing features of the first computers were the use of vacuum tubes and of magnetic drums for data storage.
The invention of the transistor in 1948 greatly changed the computer's development. The transistor replaced the large, cumbersome vacuum tube and was at work in computers by 1956. Throughout the early 1960s, there were a number of commercially successful computers used in business, universities, and government from companies such as Burroughs, Honeywell, IBM, and others. These computers contained transistors in place of vacuum tubes, and they contained all the components we associate with the modern-day computer: printers, disk storage, memory, tape storage, operating systems, and stored programs.
By 1965, most large businesses routinely processed financial information using computers. It was the stored program and the programming language that finally gave computers the flexibility to be cost-effective and productive for business use. Though transistors were clearly an improvement over the vacuum tube, they still generated a great deal of heat, which damaged the computer's sensitive internal parts. Jack Kilby, an engineer at Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small disc of silicon, a material made from quartz. Scientists later managed to fit ever more components onto a single chip, called a semiconductor chip.
By the 1980s, very-large-scale integration squeezed hundreds of thousands of components onto a chip, and ultra-large-scale integration increased that number into the millions. The ability to fit so much onto an area about half the size of a dime helped diminish the size and price of computers while increasing their power, efficiency, and reliability. By the mid-1970s, computer manufacturers sought to bring computers to general consumers. These microcomputers came complete with user-friendly software packages that offered even non-technical users an array of applications, most popularly word processing and spreadsheet programs.
In 1981, IBM introduced its personal computer (PC) for use in the home, office, and schools. The 1980s saw an expansion of computer use in all three arenas as clones of the IBM PC made the personal computer even more affordable. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982; ten years later, 65 million PCs were in use. As computers became more widespread in the workplace, new ways to harness their potential developed. As smaller computers became more powerful, they could be linked together, or networked, to share memory space, software, and information, and to communicate with each other. Computers continue to grow smaller and more powerful.