Craik and Lockhart (1972)
Craik and Lockhart (1972) proposed a 'conceptual framework of memory', which emphasises the level at which new information is processed. They argued that the 'depth' at which we process information while learning it determines how it is stored in long-term memory (LTM). According to Craik and Lockhart (1972), information is retained better in LTM if it is semantically encoded, processed and stored. If meaning (semantics) is processed during learning, the information is more likely to be stored in LTM than if no meaning is added during the process. Moreover, for the information that is stored in our memory, there is a continuum of processing depth, running from shallow structural analysis to deep semantic analysis.
Structural processing involves the physical appearance and physical qualities of an object, such as its colour, shape, and pattern, all of which are analysed. Phonemic processing focuses on the sounds we encode into our STM. Intermediate processing is when something is identified and named. For example, when looking at the Australian flag, many people would be able to identify it as the flag of Australia as well as name it. Deep processing is when semantic characteristics are used to remember the information. Semantic processing requires elaborative rehearsal, which involves a deeper analysis of the information and leads to better recall; for example, linking words with specific associations, or connecting words with factual knowledge. For instance, when thinking about the Australian flag, one could imagine the flag being raised on their behalf if they ever made it to the Olympics and won a gold medal. Craik and Tulving further propose that information given only shallow processing will be retained in STM for a brief amount of time, in contrast to items that are semantically encoded, which are retained for far longer.
It additionally highlights how elaborative rehearsal (which requires deep processing) can aid memory. Craik and Lockhart's study subsequently led to hundreds of experiments, many of which agreed with the 'superiority' of deep processing for remembering information. However, Craik and Lockhart's theory of memory has its limitations: rather than explaining how deep processing produces effective recall, it merely describes it, and the concept of 'depth' is vague and cannot be measured, which diminishes the theory's validity.
...Baddeley's (1966) study of encoding in short-term and long-term memory supports the multi-store model (MSM) on the mode of processing, in that words are processed on recall, and both models share the view that processing does influence recall. Finally, the MSM states that all information is stored in long-term memory; however, this interpretation contrasts with that of Baddeley (1974), who argues that we store different types of memories and that it is unlikely they occur only in the LTM store. Additionally, other theories have recognised different types of memories that we experience, so it is debatable whether all of these different memories occur only in long-term memory, as presumed by the multi-store model, which describes the long-term store as having unlimited capacity and also fails to explain how we recall information.
This investigation looks at retrieval failure in long-term memory, particularly context-dependent forgetting. The theory behind retrieval failure is that available information stored in long-term memory cannot be accessed because the retrieval cues are defective. Cue-dependent forgetting theory rests on the assumption that the context in which we learn something matters when we come to recall the information: recall is better if it takes place in the same context as the learning. Research on retrieval failure includes Tulving and Pearlstone (1966), who studied intrinsic cues by asking subjects to learn a list of words from different categories.
Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes.
The second stage of memory processing is storage. Aronson et al. (2013) define storage as the process by which people store the information they have just acquired. Unfortunately, memories are affected by incoming information through alteration or reconstruction. This phenomenon is referred to as recon...
...supporting details. At the conclusion of the article, the authors share their thoughts on how it might be virtually impossible to determine when a memory is true or false. I also like their willingness to continue the investigations despite how difficult it might be to obtain concrete answers.
The Effects of Levels of Processing on Memory
PB1: Identify the aim of the research and state the experimental/alternative hypothesis/es. (credited in the report mark scheme)
The aim is to show how different levels of processing affect memory. "People who process information deeply (i.e. semantic processing) tend to remember more than those who process information shallowly (i.e. visual processing)."
PB2: Explain why a directional or non-directional experimental/alternative hypothesis/es has been selected. (1 mark)
I have used a directional experimental hypothesis because past research, such as that by Craik and Tulving (1975), has supported this.
PB3:
In the article "The Critical Importance of Retrieval For Learning", the researchers studied human learning and memory by presenting people with information to be learned in a study period and then testing them on that information to see what they were able to retain. They also pointed out that retrieval of information in a test is considered a neutral event because it does not produce learning. The researchers were trying to find a correlation between the speed at which something is learned and the rate at which it is forgotten.
Craik and Tulving carried out a series of experiments on the depth of processing model. They had participants use a series of processing methods to encode words at different levels: shallow, moderate, and deep. The subjects were shown a series of words and asked questions about them that required a "yes" or "no" response. At the shallow level, they were asked whether or not the word was written in capital letters. At the moderate level of processing, the subject was asked whether or not two words rhymed. Finally, the subjects were asked about words in sentences and whether or not they fit; this was the deep level of processing. After participants had completed the task, they were given a surprise recognition test containing the words they had just been asked questions about (target words) and words they had never seen before (distractor words). The results showed that people better remembered the words that had been processed at a deeper level (Craik and Tulving, 1975).
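To make the procedure above concrete, here is a minimal Python sketch of the trial structure as I read it; the orienting questions, word lists, and function names are my own illustrative placeholders, not Craik and Tulving's actual materials.

import random

# Illustrative orienting questions for each level of processing (placeholders, not the original stimuli).
ORIENTING_QUESTIONS = {
    "shallow": "Is the word printed in capital letters?",                      # structural judgement
    "moderate": "Does the word rhyme with 'chair'?",                           # phonemic judgement
    "deep": "Does the word fit the sentence 'She put the ___ on the table'?",  # semantic judgement
}

def build_study_phase(words):
    """Pair each study word with a randomly chosen level and its yes/no orienting question."""
    trials = []
    for word in words:
        level = random.choice(list(ORIENTING_QUESTIONS))
        trials.append({"word": word, "level": level, "question": ORIENTING_QUESTIONS[level]})
    return trials

def build_recognition_test(targets, distractors):
    """Mix previously studied target words with unseen distractor words for the surprise test."""
    items = [(w, "target") for w in targets] + [(w, "distractor") for w in distractors]
    random.shuffle(items)
    return items

study_trials = build_study_phase(["LAMP", "river", "BOOK", "cloud"])
test_items = build_recognition_test([t["word"] for t in study_trials], ["stone", "PIANO"])

Recognition accuracy could then be tallied separately for the shallow, moderate, and deep items to reproduce the comparison the experiment reports.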
At the cognitive level of analysis, humans are seen as information processors. Cognitive researchers have been interested in how verbal reaction is affected during interference or inhibition. According to Craik and Lockhart (1972), information is processed in two ways. Shallow processing takes two forms, one being structural processing (appearance), which occurs when only the physical qualities of something are encoded, i.e. what the letters spell versus the color of the word. Shallow processing involves only maintenance rehearsal and leads to fairly short-term retention of information. Deep processing involves elaborative rehearsal, which is a more meaningful analysis (e.g. images, thinking, associations, etc.) of information and leads to better recall. It is generally easier for people to interpret the word itself, which involves deep processing, than to interpret the color of the word, which involves shallow processing. According to the speed of processing model, word processing is much faster than color processing; thus, in a situation of interference between words and colors, when the task is to report the color, the word information arrives at the decision stage earlier than the color information, resulting in processing confusion.
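The speed-of-processing account described above can be illustrated with a small, purely hypothetical Python sketch; the timing constants below are invented for illustration and are not measured values from any study.

# Assumed (invented) processing times, in milliseconds, used only to illustrate the idea
# that word information reaches the decision stage before color information.
WORD_TIME_MS = 350       # time for the printed word to reach the decision stage
COLOUR_TIME_MS = 500     # time for the ink color to reach the decision stage
CONFLICT_COST_MS = 150   # extra time assumed for resolving the competing word response

def predicted_colour_naming_rt(word, ink_colour):
    """Predict the reaction time (ms) for naming the ink color of a printed color word."""
    rt = COLOUR_TIME_MS
    if word.lower() != ink_colour.lower():
        # On incongruent trials the faster word information arrives first and conflicts
        # with the color response, so an extra resolution cost is added.
        rt += CONFLICT_COST_MS
    return rt

print(predicted_colour_naming_rt("RED", "red"))    # congruent trial -> 500
print(predicted_colour_naming_rt("RED", "blue"))   # incongruent trial -> 650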
In this experiment we replicated a study by Bransford and Johnson (1972), who conducted research on memory using schemas. All human beings possess categorical rules or scripts that they use to interpret the world; new information is processed according to how it fits into these rules, called schemas. Bransford and Johnson studied memory for text passages that had been well comprehended or poorly comprehended, and their major finding was that memory was superior for passages that were made easy to comprehend. For our experiment we used two different groups of students. We gave them different titles and read them a passage with the intention of finding out how many ideas they were able to recall. Since our first experiment found no significant difference, we conducted a second experiment, except this time we gave the title either before or after the passage was read. We found no significant difference between the title types, but we did find a significant difference between before and after, as well as a significant title type x presentation interaction. We then performed a third experiment involving showing objects before and after the passage was read, and there we did encounter some significant findings. The importance and lack of findings are discussed, along with suggestions for future studies and how to improve our results.
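As a sketch of how the title-type by presentation-order analysis mentioned above could be run in Python, the snippet below fits a two-way ANOVA with an interaction term; the file name and column names are placeholders I have assumed, not the study's actual data.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder data file with one row per participant:
# ideas_recalled (count), title_type (e.g. "relevant"/"irrelevant"), presentation ("before"/"after").
df = pd.read_csv("recall_data.csv")

# Two-way ANOVA: main effects of title type and presentation order, plus their interaction.
model = ols("ideas_recalled ~ C(title_type) * C(presentation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))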
Georg Elias Muller and his young student Alfons Pilzecker (Encyclopaedia Britannica, 2015) together studied and researched the idea of consolidation. In 1900 the two published a monograph proposing that "learning does not induce instantaneous permanent memories, but that memory takes time to be 'fixed' (consolidated). Consequently, memory remains vulnerable to disruption for a period of time after learning" (Lechner, Squire, et al., 1999). Building on Muller and Pilzecker's work, researchers such as Richard Atkinson and Richard Shiffrin began, in 1968, to extend research on the consolidation process with the technology available to them. Through thorough research, Atkinson and Shiffrin developed the Multi-Store Model (Appendix 1), which defines the roles within the consolidation process. Atkinson and Shiffrin found that humans encode information from sensory registers into a usable format, transfer it from STM to LTM by rehearsal, and finally retrieve the information (Atkinson & Shiffrin, 1968).
Memory is the tool we use to learn and think, and we all use it in our everyday lives. Memory is the mental faculty of retaining and recalling past experiences. We reassure ourselves that our memories are accurate and precise, and many people believe they would be able to remember everything about an event and the different features of the situation. Yet people don't realize that the more you think about a situation, the more likely the story is to change. Our memory is not a camcorder or a camera; it tends to be very selective and reconstructive.
In the process of memory there are three stages: encoding, storage and retrieval. All three stages determine whether a memory is stored or forgotten. The first stage is the processing of information, known as encoding. Encoding involves converting information into a usable form so that it can be stored in memory; there are many ways of encoding, such as acoustically, visually or semantically. Storage, the second stage, involves the retention of the information. This is done by organising information so it can be used or retrieved when required. Lastly, the retrieval process is where locating and recovering the information stored in long-term memory occurs. In order to retrieve this information back into short-term memory, prompts or cues may be used, and the information can be recalled and recognised. Recall is when material can be retrieved without providing a cue, whereas recognition is the ability to bring forth information through the use of a cue (Lecture, 2013). This is important as eyewitnesses may be asked to ...
Ebbinghaus conducted many experiments on the brain. In one of his experiments, he concluded that the human brain can overlearn, and that overlearning helps you remember things better. However, you can also overlearn too much and forget the things you taught the brain to learn (Douglas Mook pg
According to Sternberg (1999), memory is the extraction of past experiences for information to be used in the present. The retrieval of memory is essential in every aspect of daily life, whether for academics, work or social purposes. However, many people take memory for granted and assume that it can be relied on because of how realistic it appears in the mind; this form of memory is also known as flashbulb memory (Brown and Kulik, 1977). The question of whether our memory is reliably accurate has been shown to have implications for providing precise details of past events (The British Psychological Association, 2011). In this essay, I will put forth arguments that human memory is, in fact, not completely reliable in providing accurate depictions of our past experiences. Evidence can be seen in the following two studies, which support these arguments by examining episodic memory in humans. The first study, by Loftus and Pickrell (1995), found that memory can be modified by suggestion. The second study, by Naveh-Benjamin and Craik (1995), found that there is a predisposition for memory to decline with increasing age.