

Data Recovery History

History of Data Storage

History of Hard Drive Technologies

Magnetic Storage Technologies

Optical Storage Technologies

Electronic Storage Technology

Media Conversion Technologies

 

 

 

 

Data Recovery History

Data recovery is a broad term covering the many ways of extracting data from a damaged or inaccessible magnetic medium. The technology varies greatly, and no standards have yet been set for this type of service. The demand for data recovery is increasing steadily, and over time numerous devices have been invented that aid the data recovery process.

The origins of data recovery can be traced back to the work of Charles Babbage and Ada Lovelace. In 1833, Babbage began working on the design of the first computer. This invention, later known as the Analytical Engine, anticipated the components of the modern computer we use today. Babbage worked on plans for the engine for 11 years, until he reported his developments at a seminar in Italy in the autumn of 1841. An Italian named Menabrea wrote a summary of this seminar and published the article in French. In 1843, Ada Lovelace translated the article and suggested to Babbage that she add her own notes. These notes turned out to be three times the length of the original article, and Lovelace and Babbage combined their ideas in the design of the Analytical Engine. Programs for the engine were punched on Jacquard cards; as a result, the machine became known as a 'punch-card system'. Before long, however, Babbage and Lovelace ran into a problem: one of the punch cards was damaged extensively in handling. Retrieving the lost data from the corrupted punch card proved an extremely difficult task, one that in fact neither Babbage nor Lovelace could accomplish. A more advanced data storage system was needed in order to retrieve lost information reliably. This was the first known instance of the need for data recovery technology.

Over the years since the Analytical Engine was developed, the computer field (and consequently the data recovery field) has developed at an overwhelmingly rapid pace. As a result, the demand for new data recovery solutions has advanced at the same rate. An extensive array of new ideas and concepts has surfaced as technology has advanced. One of the major accomplishments in the computer field was the ENIAC in the 1940s.

The ENIAC (Electronic Numerical Integrator and Computer) was the first multipurpose computer. Within the next decade computers began to be used commercially, largely due to the incredible accomplishments of the ENIAC. Not only could multipurpose computers store more information, but with the invention of the ENIAC, computer use became more common. Consequently, there was more room to store data and more people were using computers to store information. Another major advance in the industry was IBM's introduction of the first vacuum-column magnetic tape drive for data storage in 1952. This development further increased the storage and processing capabilities of the computer. Before the vacuum column was introduced, fragile magnetic tape was used to store data. The tape was a reasonable means of storage, but breakage during sudden starts and stops was frequent. With the IBM vacuum column, the tape was buffered by a vacuum during movement. The decrease in breakage resulted in less data loss and made data easier to retrieve when there was a problem. In 1962, the logic probe was introduced. The logic probe is used on electronic logic circuits to investigate failed chips. While the logic probe only indicates state changes, it helps to identify the basic reason a chip may be failing.

In recent years, data recovery has continued to grow as a vital industry as computers become increasingly important in our everyday lives. MindPride is at the forefront of this rapidly changing field and is dedicated to retrieving the data that is important to you.

 

 History of Data Storage

 

1832-1952
Charles Babbage and Lady Lovelace proposed the first computer and called it the Analytical Engine. The basic design included perforated cards containing operating instructions and a "store" for a memory of 1,000 numbers, each up to 50 decimal digits long.

 

Sir Charles Wheatstone used paper tape to store data. This technique for data storage was similar to punch cards, except that the tape was fed continuously through the machine.

 

Herman Hollerith was looking for a faster way to conduct the U.S. census. He used punched cards to store data, which he fed into a machine that compiled the results mechanically. With this new machine, Hollerith founded the Tabulating Machine Company, which later became International Business Machines (IBM).

 

 

 

1952-1970

 

IBM created the first magnetic tape unit for data storage. Magnetic tape was much faster than punch cards.

 

IBM introduced the 305 RAMAC. The RAMAC could store five million characters (five megabytes) on fifty disks, each 24 inches in diameter. Its recording head could go directly to any location on a disk surface without reading all the information in between. This made it possible to use computers for airline reservations, automated banking, medical diagnosis and space flights.

 

The introduction of the first storage unit with removable disks hastened the end of the punch-card era. Each disk pack could hold two million characters (2 megabytes), or as much as 25,000 punched cards.

 

Reducing the distance between the head and the disk made it possible to nearly double recording density, writing information smaller and packing it more closely together.

 

The floppy disk was invented, which ushered in the era of data portability and desktop computing.

 

 

 

1971-1980

 

The introduction of the 3340 Winchester drive set the industry standard for the next decade. It featured two spindles with a storage capacity of 30 million characters each.

 

The first two-speed tape unit is used, raising streaming speeds to 160 kb per second.

 

RAID (Redundant Arrays of Independent Disks) was first introduced. RAID employs two or more drives in combination for fault tolerance and performance. RAID arrays are used frequently on servers but are not generally necessary for personal computers.

 

Hierarchical Storage Manager (HSM) provides system-delivered migration of inactive data from disk to less expensive storage media.

 

 

 

1981-1990

 

Thin-film head technology enabled the introduction of the first commercial disk drive capable of reading and writing three million characters per second. It offered 6,000 times more storage per square inch than the original RAMAC disk drive.

 

The first use of data compaction occurs, thus saving computer users time and money.

 

The Data Facility Storage Management Subsystem (DFSMS) is the first full function, automatic environment for management of storage systems.

 

The first magnetoresistive (MR) head enables recording at one gigabit per square inch.


Electronic buffers replaced vacuum columns in tape drives, helping to increase data rate to three megabytes per second.

 

 

 

1991-Present

 

The first one-gigabyte 3.5-inch disk drive is invented.

 

The first one-inch-high, one-gigabyte disk drive, which stores 354 million bits per square inch, is made.

 

Highly parallel processing, multi-level cache, RAID 5, and redundant components allow outstanding new levels of mainframe storage.

 

Three billion bits per square inch of magnetic recording is achieved, a new world record.
 

 

History of Hard Drive Technologies

1956
IBM invents the first computer disk storage system, the 305 RAMAC. The system can store 5 megabytes and has fifty 24-inch diameter disks.

1961
IBM invents the first disk drive with air bearing heads.

1963
IBM introduces the first removable disk pack drive.

1970
The eight-inch floppy disk drive is introduced.

1973
IBM creates the model 3340 Winchester sealed hard disk drive, the predecessor of all current disk drives. It has two spindles each with a capacity of 30 MBytes.

1980
Seagate Technology introduces the first hard disk drive for microcomputers, the ST506. It can hold 5 Mbytes.

Philips introduces the first optical laser drive.

1986
Integrated Drive Electronics (IDE) technology was proposed. It is a standard which controls the flow of data between the processor and the hard disk. IDE is not itself an actual hardware standard, but the proposals were integrated into an industry-agreed interface specification known as ATA (AT Attachment). ATA defines a command and register set for the interface, creating a universal standard for communication between the drive unit and the PC.

The SCSI specification is completed. It is a bus which controls the flow of data between the processor and its peripherals. It can handle up to eight devices such as hard disks, CD-ROM drives, printers, scanners, etc. Throughout the years SCSI has evolved (examples below).

SCSI version      Signaling rate (MHz)   Bus width (bits)   Max. DTR (MBps)   Max. devices   Max. cable length
SCSI-1            5                      8                  5                 7              6 m
SCSI-2            5                      8                  5                 7              6 m
Wide SCSI         5                      16                 10                15             6 m
Fast SCSI         10                     8                  10                7              6 m
Fast Wide SCSI    10                     16                 20                15             6 m
Ultra SCSI        20                     8                  20                7              1.5 m
Ultra SCSI-2      20                     16                 40                7              12 m
Ultra2 SCSI       40                     16                 80                15             12 m
Ultra160 SCSI     80                     16                 160               15             12 m
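The pattern in the table is simple arithmetic: the maximum data transfer rate in MBps works out to the signaling rate in MHz multiplied by the bus width expressed in bytes. A minimal Python sketch of that calculation, using a few rows taken from the table above (illustration only, not an exhaustive list):

```python
# Rough sketch: maximum SCSI data transfer rate from the table's figures.
# DTR (MBps) = signaling rate (MHz) x bus width (bytes), per the table above.

scsi_versions = [
    # (name, signaling rate in MHz, bus width in bits)
    ("SCSI-1", 5, 8),
    ("Fast SCSI", 10, 8),
    ("Fast Wide SCSI", 10, 16),
    ("Ultra2 SCSI", 40, 16),
    ("Ultra160 SCSI", 80, 16),
]

for name, mhz, bits in scsi_versions:
    dtr_mbps = mhz * bits // 8  # bus width converted from bits to bytes
    print(f"{name:15s} {dtr_mbps:4d} MBps")
```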


1988
Redundant Arrays of Inexpensive Disks (RAID) is proposed. The original concept was to cluster small inexpensive disk drives into an array so that the array could appear to the system as a single large expensive drive. Such an array was found to have better performance characteristics than an individual hard drive. Subsequent development of RAID resulted in six standardized RAID levels to offer a combination of performance and data protection (example below).

Level 0 provides 'data striping' (spreading out blocks of each file across multiple disks) but no redundancy. This improves performance but does not deliver fault tolerance. The collection of drives in a RAID Level 0 array has data laid down in such a way that it is organized in stripes across the multiple drives, enabling data to be accessed from multiple drives in parallel.
Level 1 provides disk mirroring, a technique in which data is written to two duplicate disks simultaneously, so that if one of the disk drives fails the system can instantly switch to the other disk without any loss of data or service. RAID 1 enhances read performance, but the improved performance and fault tolerance are at the expense of available capacity in the drives used.
Level 3 is the same as Level 0, but it sacrifices some capacity, for the same number of drives, to achieve a higher level of data integrity or fault tolerance by reserving one dedicated disk for error-correction data. This drive stores parity information that is used to maintain data integrity across all drives in the subsystem.
Level 5 is probably the most frequently implemented. It provides block-level data striping and also stripes the error-correction (parity) information across the drives. This results in excellent performance coupled with the ability to recover any lost data should any single drive fail.
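To make the parity idea behind Levels 3 and 5 concrete, here is a minimal Python sketch of XOR parity. The drive contents are invented for the example, and a real controller works on full-size blocks and handles striping, rotation, and rebuild scheduling on top of this.

```python
from functools import reduce

# Three data blocks striped across three drives (toy-sized; real blocks are KB-sized).
drive_blocks = [b"AAAA", b"BBBB", b"CCCC"]

def xor_blocks(blocks):
    """XOR corresponding bytes of several equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# The parity block is the XOR of all data blocks (kept on a dedicated drive in
# RAID 3, or rotated across the drives in RAID 5).
parity = xor_blocks(drive_blocks)

# If one drive fails, its block is the XOR of the surviving blocks and the parity.
lost_index = 1
survivors = [blk for i, blk in enumerate(drive_blocks) if i != lost_index]
recovered = xor_blocks(survivors + [parity])

assert recovered == drive_blocks[lost_index]
print("Recovered block:", recovered)
```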
 

1992
SMART (Self-Monitoring, Analysis and Reporting Technology) by IBM is an industry first. Hard drives equipped with Predictive Failure Analysis (PFA) can actually predict their own failure.

1993
Western Digital introduces Enhanced IDE (EIDE). It is a standard to overcome the constraints of ATA. It supports faster data transfer rates and higher disk capacities. It also supports AT Attachment Packet Interface (ATAPI) which supports non-disk peripherals such as CD-ROM drives and tape drives.

1997
EIDE's data transfer rate limit was doubled to 33 MBps by the new Ultra ATA.

1999
The Microdrive by IBM is the world's smallest hard disk drive, using a single one-inch-diameter platter.
 

 

Magnetic Storage Technologies

Magnetic storage is a storage medium commonly used for large volumes of data (e.g., video, image, or remote-sensing data). Magnetic tape drives use magnetic tape to store the data. Large amounts of data are stored on tape drives because their capacity is huge: three billion bits (three gigabits) of data per square inch can fit on a single magnetic disk.
Magnetic media consist of a thin layer that can record a magnetic signal, supported by a thicker film backing. The top coat consists of a magnetic pigment held together by a binder. The magnetic layer (top coat) records and stores the magnetic signals written to it. The backing film supports the magnetic top coat and reduces tape friction and distortion.

History of Magnetic Storage Technologies
2001
IBM researchers demonstrated a new world record in magnetic data storage density, reaching five times the density of the most advanced disk drive available today.
2000
IBM announced the spin valve, the world's most sensitive magnetic recording head. It is anticipated that the valve will eventually be used to exceed densities of 10 gigabits per square inch.
1991
The first hard disk drive with MR recording heads was introduced.
1989
IBM's Advanced Magnetic Recording Laboratory reported it surpassed 1-gigabit-per-square-inch density.
1984
The thin-film magnetoresistive (MR) recording head was first used in a storage device, an IBM tape drive.
1978
Sony introduced the first digital recorders.
1971
The first floppy disk drive was introduced.
1966
The first disk drive with a wound-coil ferrite recording head was introduced.
1963
The compact audio cassette was introduced, the most successful audio magnetic recording product yet.
1957
The first magnetic hard drive for data storage became part of a new machine, the RAMAC (Random Access Method of Accounting and Control). Before the RAMAC, there was no way to increase internal disk memory, and most computers were still using either magnetic tape or a punch-card system.
1956
IBM introduced the first magnetic hard drive for data storage.
1953
IBM made its first tape drive, the 726.
1935
AEG announced an audio recording device known as the Magnetophon, later used for broadcasting.
1898
Danish inventor and engineer Valdemar Poulsen invented the first magnetic recording device, the first telephone answering machine.

 

 

Optical Storage Technologies

Optical disk - a storage medium from which data is read and to which data is written by lasers. Optical disks can store much more data (up to six gigabytes) than most portable magnetic media. There are three basic types of optical disks:

CD-ROM - Like audio CDs, CD-ROMs come with data already encoded onto them. The data is permanent and can be read any number of times, but CD-ROMs cannot be modified. A CD-ROM drive's nominal speed rating corresponds to its transfer rate: single-speed drives have a 150 KBps transfer rate, while the rate for 12X drives is 1.8 MBps. Manufacturers are expected to shift from CLV (constant linear velocity) to CAV (constant angular velocity) in the future. While CLV rotates the disk at varying speeds, CAV spins it at one constant speed. This may not sound like much of a change, but CAV is easier on the spindle motor because it does not require the drive to change motor speed as often, resulting in an improvement in performance.
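Since each speed rating is just a multiple of the single-speed 150 KBps rate, the nominal transfer rate is easy to work out; a quick Python sketch (figures as quoted above, values rounded):

```python
# Nominal CD-ROM transfer rate from the drive's speed rating.
BASE_RATE_KBPS = 150  # single-speed (1X) audio CD rate

def cdrom_transfer_rate(x_rating: int) -> float:
    """Return the nominal transfer rate in KBps for an nX drive."""
    return x_rating * BASE_RATE_KBPS

for rating in (1, 4, 12, 24):
    kbps = cdrom_transfer_rate(rating)
    print(f"{rating:2d}X -> {kbps:5.0f} KBps (~{kbps / 1000:.1f} MBps)")
```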

WORM - Stands for write-once, read-many. With a WORM drive, the disk can be read and reread, but once it has been recorded it cannot be changed; after recording, the WORM disk behaves just like a CD-ROM. The WORM drive is a high-capacity storage device and is best for storing archives and other large amounts of unchanging information.

Erasable optical disks - Can be erased and loaded with new data, just like magnetic disks. These are often referred to as EO (erasable optical) disks.

These technologies are not compatible with each other. Each requires a different type of disk and drive.

An optical disk drive reads and writes data onto the disk. The disk is read by means of a laser; to write data, a magnetic field is employed in addition to the laser. The disk is exposed to a magnet on the label side and to the laser on the other side. A laser is used for two primary reasons: an optical lens can focus it to heat a spot only about one micron in diameter, and it carries enough energy to bring that spot almost instantaneously to the Curie temperature (about 300 degrees Celsius, the level at which the magnetic domain loses its characteristics as a magnet).

Construction of the optical disk - The optical disk is made mostly of polycarbonate. The polycarbonate plate should always allow the laser beam to pass completely through the disk without a problem. Resin is applied to the disk substrate to ensure that the disk is not harmed in any way during the process (including heat damage, damage by impact, etc.). On the polycarbonate resin substrate are seven types of film: reflective film improves the read process; protective film protects the recording film; the first and second dielectric films protect the magneto-optical film, which is used for recording; another protective film protects the polycarbonate surface; and the polycarbonate resin itself is a transparent plate.

 

 

Electronic Storage Technology

Electronic storage technology includes what is commonly called RAM (Random Access Memory). RAM is used to hold the operating system, data, and application programs currently needed to complete tasks, enabling the computer's Central Processing Unit (CPU) to access information stored in memory quickly. When a command is entered from the keyboard, the CPU interprets the command and instructs the hard drive to load the relevant program into RAM, where it is more accessible. RAM is faster to write to and read from than any other type of storage in a computer (including the floppy disk, hard disk, and CD-ROM).

RAM is called "random access" because any storage location on the computer can be accessed directly. It is organized in a way that enables information to be stored and accessed directly to specific locations.

RAM is small in size and in the amount of data it can hold. It can be compared to short-term memory, focusing on the work at hand. When there is not enough room in memory for all the data needed by the CPU, the computer has to create a virtual memory file. This is the equivalent of simulating additional RAM, a process called "swapping". On average, the CPU is 60,000 times slower in accessing the hard drive than in accessing RAM.
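To put that ratio into absolute terms, here is a back-of-the-envelope Python sketch. The RAM access time is an assumed illustrative figure; only the 60,000x factor comes from the text above.

```python
# Illustrative only: what a 60,000x RAM-vs-disk gap means in absolute terms.
RAM_ACCESS_NS = 100          # assumed RAM access time (illustrative figure)
SLOWDOWN_FACTOR = 60_000     # hard drive vs. RAM, per the text above

disk_access_ns = RAM_ACCESS_NS * SLOWDOWN_FACTOR
print(f"RAM access:  {RAM_ACCESS_NS} ns")
print(f"Disk access: {disk_access_ns / 1_000_000:.0f} ms")  # 6 ms under these assumptions
```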

Generally, the more memory a computer system has, the better its performance. A typical computer could come with 32 million bytes of RAM and a hard drive that can hold around 4 billion bytes. Most computers are designed to allow additional RAM to be added. The more memory your computer has, the faster applications run and the easier it is to run several programs at once.

RAM can be divided into main RAM and video RAM. Main RAM stores every kind of data and makes it easily accessible to the microprocessor, while video RAM stores data for your display screen so images can reach your display quickly.

Main RAM includes dynamic and static RAM. Dynamic RAM (DRAM) is the least expensive type of RAM and requires frequent refreshing in order to keep the charge that holds its contents in place. DRAM variants include Fast Page Mode DRAM, Enhanced DRAM, Extended Data Output DRAM, Double Data Rate SDRAM, Direct Rambus DRAM, Burst Extended Data Output DRAM, Synchronous DRAM, Nonvolatile RAM, Enhanced SDRAM, Ferroelectric RAM, PC100 SDRAM, and JEDEC SDRAM. Static RAM (SRAM) is more expensive but does not need to be refreshed, so it is quicker to access. However, SRAM requires about four times more space than DRAM. Burst SRAM is synchronized with the system clock so that it works in step with whatever accesses it, reducing wait time.

Video RAM is basically any RAM used to store image information for the display monitor. All kinds of video RAM are arrangements of dynamic RAM. Images are first read by the processor and then written to video RAM; from there, the data is converted by a RAMDAC (Random Access Memory digital-to-analog converter) into signals that are sent to the display. Varieties of video memory include Window RAM, Rambus Dynamic RAM, Synchronous Graphics RAM, Multibank Dynamic RAM, and VRAM (the most common type).

Flash memory (also called flash RAM) is a type of memory that can be erased and reprogrammed in units of memory called blocks. While flash memory is not as useful as random access memory, it is helpful for holding control code, making it easier to update and change. The name "flash memory" comes from the fact that a section of memory cells on the microchip is erased in a single action, or "flash". Flash memory is used not only in computers, but in digital cameras, cellular phones, and other devices.
 

 

Media Conversion Technologies

Data conversion is the process of changing data from one format to another, or migrating data to and from various formats. Conversion is used to change data into the correct format to work with the system and software specified by the customer, or to convert data from an original application format into a more accessible form. The process of Media Conversion varies according to the situation; different conversion methods are used depending on cost and the required data accuracy.

There are various types of Media Conversion, including word processor conversion, file conversion, image conversion, database conversion, and computer systems conversion.

Word processor conversion may be necessary when developing an application that needs to read and/or write various file formats. Word processor file formats are very complex and new formats are constantly being introduced, so the demand for conversion is high.

File conversion is just converting files to a different format. It may be necessary in order to view multiple file types without the related application. For instance, instead of having to buy ten different types of word-processing software to view ten different files, the files can be converted to fit one word processing system.

Image conversion may be used to gain access to the various image processing methods from your favorite application development environment. Some popular image formats are JPEG, Photo CD, PNG, PDF, and GIF. It may be necessary to convert an image to a different format in order to resize, sharpen, change color, or add certain special effects to meet your standards.
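As one concrete illustration of image conversion, the sketch below resizes a JPEG and saves it as a PNG. It assumes the Pillow library (a choice made for this example, not something specified above), and the filenames and target size are placeholders.

```python
from PIL import Image  # Pillow: pip install Pillow

# Placeholder filenames for the example.
source_path = "photo.jpg"
target_path = "photo_small.png"

with Image.open(source_path) as img:            # read the original JPEG
    resized = img.resize((640, 480))            # resize to an arbitrary target size
    resized.save(target_path, format="PNG")     # write it out in a different format
```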

Database conversion is needed to change any files saved in an out-of-date database to a more modern database program. More modern databases are generally more convenient and dependable, so the advantage of database conversion is obvious.

Basically, there are two very general methods of conversion: data and media. In general, Media Conversion means extracting the files off the source media and then backing them up to the destination media. Two types of Media Conversion are media transfer and media migration. Media transfer may be used to move archived data into a more convenient format or to save on equipment costs, while media migration may be used to gain greater storage capacity, a more reliable media type, a lower media cost per GB, or more flexibility. However, a full Media Conversion can be much more complicated, possibly involving processing the data well beyond simply copying files to a different type of media.
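A minimal Python sketch of the simple extract-and-back-up case described above, assuming the source and destination media are mounted at placeholder paths; real conversions usually add verification and format processing on top of this.

```python
import shutil
from pathlib import Path

# Placeholder mount points for the source and destination media.
source_root = Path("/mnt/source_media")
dest_root = Path("/mnt/destination_media/backup")

# Copy the whole directory tree from the old media to the new media.
# dirs_exist_ok lets the copy resume into an existing destination folder.
shutil.copytree(source_root, dest_root, dirs_exist_ok=True)

file_count = sum(1 for p in dest_root.rglob("*") if p.is_file())
print(f"Copied {file_count} files to {dest_root}")
```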

 
