History of the Sound Card: How it Came About
One of the first widely adopted sound cards was the Sound Blaster, manufactured by Creative Labs. Let’s step back a little in time to look at the era before sound cards existed at all.
“Computers were never designed to handle sound.” Before sound cards were invented, the only sounds you would hear from a computer were the beeps that warned you something was wrong. That was all: no sound accompanied games, and you couldn’t play music. Programmers wanted to use those beeps in the games they created, so they programmed beep sequences into them. Even so, the result was “awful music as an accompaniment to games like Space Invaders”.
Creative Labs came up with a solution in the form of the Sound Blaster card. It still wasn’t high-quality music, but it was a big step up from beeps alone. “It could record real audio and play it back, something of a quantum leap. It also had a MIDI interface, still common on sound cards today, which could control synthesizers, samplers and other electronic music equipment”. The first card produced 8-bit, 11 kHz audio, similar in quality to an AM radio.
The sound card, a “complicated piece of electronics”, has two main parts: the ADC (analog-to-digital converter) and the DAC (digital-to-analog converter). The ADC takes an analog signal from a device and converts it into digital samples for the computer to use; the DAC does the exact opposite, converting digital data back into an analog signal. Recording a voice through a microphone is an example of the ADC at work; a CD player is an example of a device that uses a DAC. Some have predicted that these converters will eventually move out of the sound card, since “both speakers and microphones will be able to directly record and playback digital signals directly”.
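The two conversions described above can be sketched in a few lines of Python. This is a simplified model, not real hardware behavior: the `adc` function below "samples" a mathematical signal at a fixed rate and quantizes each sample to 8 bits (the depth of the first Sound Blaster), and `dac` maps the integers back to analog-style values. The function names and parameters are illustrative, not from any real driver API.

```python
import math

def adc(signal, sample_rate=11_025, duration=0.001, bits=8):
    """Crudely 'digitize' a continuous signal: sample it at a fixed
    rate, then quantize each sample to the given bit depth."""
    levels = 2 ** bits
    samples = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate
        x = signal(t)                               # analog value in [-1.0, 1.0]
        q = round((x + 1.0) / 2.0 * (levels - 1))   # map to 0..levels-1
        samples.append(q)
    return samples

def dac(samples, bits=8):
    """The inverse: map quantized integers back to analog-style values."""
    levels = 2 ** bits
    return [(q / (levels - 1)) * 2.0 - 1.0 for q in samples]

# A 440 Hz sine wave stands in for the 'analog' input.
tone = lambda t: math.sin(2 * math.pi * 440 * t)
digital = adc(tone)          # 8-bit integers, as a sound card would store
analog_again = dac(digital)  # reconstructed values, close but not identical
```

The small difference between `tone(t)` and the reconstructed value is quantization error, which is why higher bit depths sound cleaner.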
Digital audio has its advantages. One is that “no matter how many times it is copied it remains identical”; it does not degrade the way analogue sources such as vinyl do. The leap to 16-bit, 44.1 kHz audio, the quality of a CD, was a major development, but the much larger data stream became a problem for the ISA bus.
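The arithmetic makes the ISA-bus problem concrete. Uncompressed PCM audio needs sample rate × bit depth × channels bits per second; moving from 8-bit, 11 kHz mono to 16-bit, 44.1 kHz stereo multiplies the data rate sixteenfold, a sketch of which is:

```python
def audio_data_rate(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed PCM data rate in bytes per second."""
    return sample_rate_hz * bits_per_sample * channels // 8

early_card = audio_data_rate(11_025, 8, 1)    # early sound card: mono
cd_quality = audio_data_rate(44_100, 16, 2)   # CD quality: stereo

print(early_card)  # 11025 bytes/s, roughly 11 KB/s
print(cd_quality)  # 176400 bytes/s, roughly 172 KB/s
```

Sustaining a continuous 172 KB/s stream was a real burden for ISA-era transfers, which relied on slow DMA channels, and it helped motivate faster buses such as PCI.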
In 1904 Eugene Lauste successfully recorded sound onto a piece of photographic film. This invention was known as a “Sound Grate”, though the results were still far too crude for public display.
Before 1925, recordings were made with an acoustical horn that captured the sound of the musicians in front of it and transferred the vibration to a cutting stylus; no electricity was used. This was called the acoustical process. In 1925, microphones were introduced to convert the acoustical energy into an electric signal, which fed the cutting stylus. This electrical process greatly improved the sound of recordings.
Sound waves consist of a disturbance of air molecules, vibrations that pass from molecule to molecule from the speaker to the ear of the listener. The rate at which particles in the medium vibrate is the frequency, or pitch, of the sound, measured in hertz (cycles/second). As the pitch increases, there comes a frequency at about 20 kHz beyond which the sound is no longer audible; disturbances above this frequency are known as ultrasound. The first major breakthrough in the evolution of high-frequency echo-sounding techniques came when the piezo-electric effect in certain crystals was discovered by Pierre and Jacques Curie in Paris in 1880. The turn of the century saw the invention of the diode (a component that restricts the direction of current, allowing it to flow one way only) and the triode (a type of vacuum tube) …
Kimmel (1997) states: “Quite simply, the technology behind the MP3 audio format allows for high …”
A man named Thaddeus Cahill is said to have developed the first electronic instrument, the Telharmonium. The instrument was not built for electronic music as such; it was used to broadcast music to restaurants and other public spaces. “Cahill has never realized his plan, but his ideas were not so bad because today we make massive use of streaming media.” (The History Of Electronic Music, 2013)
The Use of Electronic Technology in 20th and 21st Century Music In this essay, I have examined the use of electronic technology within 20th and 21st Century music. This has involved analysis of the development and continuing refinement of the computer in today’s music industry, as well as the theory of the synthesiser and the various pioneers of electronic technology, including Dr. Robert Moog and Les Paul. Also within the essay, I have discussed the increasing use of computers in the recording studio. The computer has become an indispensable tool in ensuring that both recording and playback sound quality is kept at the maximum possible level. Many positive ideas have come from the continued onslaught of computerisation.
The story of the hearing aid depicts one of the most ridiculous timelines of technological advancement in all of history. Although we now think of a “hearing aid” as a small device inserted into the ear canal, the reality is that a hearing aid is “an apparatus that amplifies sound and compensates for impaired hearing.” Thus, I invite you to expand your mind and draw your attention to the intriguing, and absolutely absurd, timeline of the hearing aid.
The “slap back” effect was often applied to the guitar or even drums, using tape to add a single short delay of usually 40–120 milliseconds and create its monumental echo effect. Combining all these new and improved innovations in the music world, this type of music was very successful.
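Digitally, a slapback is one of the simplest effects to model: a feed-forward delay line that mixes a single delayed copy of the signal back in. The sketch below assumes a plain list of samples and illustrative parameter names; real tape slapback also adds saturation and wow that this omits.

```python
def slapback(samples, sample_rate, delay_ms=100, mix=0.5):
    """Feed-forward delay: add one delayed copy of the input
    (the classic 'slap back' echo) to the signal."""
    d = int(sample_rate * delay_ms / 1000)   # delay in samples
    out = []
    for i, x in enumerate(samples):
        echo = samples[i - d] if i >= d else 0.0
        out.append(x + mix * echo)
    return out
```

Because there is no feedback path, each sound repeats exactly once, which is what distinguishes slapback from a regular multi-repeat echo.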
This improved game system brought upgraded graphics along with superb-quality sound, allowing composers to write music that imitated various instruments. The digital sound system gave composers the ability to create orchestral-like textures with a multitude of instruments; to handle this, the hardware also had to improve, with the addition of upgraded sound chips. An example of this technological orchestral sound is the “Overworld” theme from Super Mario World, composed by Koji Kondo.
There are other precursors, like the Novachord and the Ondes Martenot, which used a sliding metal piece in addition to a keyboard to create pitches, but it wasn’t until 1964 that Robert Moog released the first voltage-controlled synthesizer. Moog discussed this himself in Hans Fjellestad’s documentary Moog.
When it comes to recording in a modern environment, DAWs (digital audio workstations) are an essential piece of equipment if professional-standard results are desired. Although DAWs are considered a modern technological advancement, the first attempt at one came in 1977 with Dr. Tom Stockham’s Soundstream digital system (see references for a full description). It had very powerful editing capabilities and, for its time, a very advanced crossfader, but it was still primitive compared to today’s standard. There are now hundreds of DAWs on the market, but arguably some obvious leaders. Avid’s Pro Tools has been the go-to DAW for professional studios for the past 20 years, and although there have been rumors of Avid going out of business and of Pro Tools’ features becoming dated, it is still a viable option for studios worldwide. Logic Pro has risen to the forefront of the industry in recent years thanks to an easy-to-use interface capable of producing professional results. Ableton Live strays away from a hardware-instrument music environment to cater to electronic music users; audio-to-MIDI is a main focus, along with the critically acclaimed Max for Live, used in live performances by many current EDM artists. Each DAW has its own pros and cons, and comparing them can highlight which is best for which task.
Most applications of speech and audio compression may seem obvious at first, but what few realize is the scale at which they are used. Some of the more common examples include telephone communications, digital audio coding in compact disc players, stereo sound systems, speech recognition and playback, noise reduction and filtering after voice recognition, and speech synthesis [1]. The uses of DSP for speech and audio compression are certainly not limited to these examples, but these alone are systems the general public uses through various devices daily, often without realizing the processing that goes into their operation.
In 500 B.C. the abacus was first used by the Babylonians as an aid to simple arithmetic. In 1623 Wilhelm Schickard (1592–1635) invented a “Calculating Clock”. This mechanical machine could add and subtract numbers of up to 6 digits, and warned of an overflow by ringing a bell. In 1786, J. H. Mueller came up with the idea of the “difference engine”, a calculator that could tabulate values of a polynomial; Mueller’s attempt to raise funds failed and the project was forgotten. Georg Scheutz and his son Edvard produced a third-order difference engine with a printer in 1843, and their government agreed to fund their next project.
punched in them, appropriately called “punchcards”. His inventions were failures for the most part because of the lack of precision machining techniques used at the time and the lack of demand for such a device (Soma, 46).
Thousands of years ago, calculations were done with people’s fingers and with pebbles found lying around. Technology has transformed so much that today the most complicated computations are done within seconds. Human dependency on computers is increasing every day; just think how hard it would be to live a week without one. We owe the advancement of computers and other such electronic devices to the intelligence of those who came before us.