Filter Theories of Attention
Broadbent's Filter Theory (1958):
- digits presented dichotically are recalled ear by ear, e.g. 496 (ear one) + 852 (ear two) = 496852, NOT in alternating pairs (489562)
- filter located early in processing system
- ONLY physical stimulus characteristics are processed
- filter then chooses one stimulus (on the basis of its physical attributes) for further processing; the other remains in a buffer for later processing.
- Therefore, the filter prevents overloading of the limited capacity mechanisms beyond the filter
Support: in dichotic listening tasks, only physical changes (such as a change in the gender of the voice) were noticed in the unattended ear. Changes in language (meaning) went unnoticed.
Evidence against: when words previously associated with electric shock (meaning) were presented to the unattended ear, there was sometimes a physiological reaction.
- a third of people report hearing their own name in the unattended ear
- when ear one hears “who 6 there?” and ear two hears “4 goes 1”, participants report hearing “who goes there?”, suggesting that selection can be based on the meaning of presented information
Treisman’s Attenuation Theory (1964):
- attenuating filter located early in the processing stream, but it weakens rather than blocks unattended information, so some can still reach later analysis
- Physical AND Semantic characteristics are processed
- Filter reduces analysis of unattended information
- stimulus analysis occurs through a hierarchy, with physical cues toward the bottom and semantic analysis at the top. When there is insufficient processing capacity, the top of the hierarchy is omitted.
- the threshold for expected or salient words (e.g. your own name, or “FIRE”) is lower.
Deutsch and Deutsch (1963):
- filter located late in the processing stream
- Physical AND Semantic characteristics are processed
- all stimuli are FULLY analysed, with the most important/relevant stimulus determining the response.
Lavie’s Perceptual Load Theory:
- sometimes there is early selection (Broadbent), sometimes there is late selection (Deutsch and Deutsch)
- The perceptual load of a task determines how much attention is devoted to it. The number of task stimuli and the processing demands of each stimulus determine the load.
- High Perceptual Load = no spare capacity to process distractors = early selection
- Low Perceptual Load = spare capacity to process distractors = late selection
Trajectory of Accident Opportunity
Active errors are unsafe acts that occur at the front line of operations, and the consequences are felt immediately e.g. a truck driver playing a violin whilst driving.
Latent errors are associated with management and design, and their effects lie dormant for long periods of time. There is much potential for latent errors in everyday life, arising from:
- Task demands - time pressure, high workload
- Work Environment - distractions, interruptions
- Individual Capabilities - unfamiliarity with task, not trained properly
- Human Nature - stress, habit, assumptions, biases.
The trajectory of accident opportunity explains that accidents are caused by a combination of latent errors, active errors, triggering events and a failure of defense systems. Only a hazard that passes through a hole in each of these layers leads to an accident.
Latent Error - management deciding to not service their trucks as often
Active Error - truck driver playing the violin
Triggering Event - atypical condition, such as rain
Defense System Failure
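The layered structure above can be sketched as a toy probability model. All of the numbers below are hypothetical (not from any study): the point is just that an accident requires a hole to line up in every layer, so, assuming the layers are independent, their individual probabilities multiply and the joint probability is tiny.

```python
from math import prod

# Hypothetical probability of a "hole" in each defensive layer,
# following the truck example from the notes
layers = {
    "latent error": 0.10,      # trucks serviced too rarely
    "active error": 0.05,      # driver playing the violin
    "triggering event": 0.20,  # atypical condition, such as rain
    "defense failure": 0.02,   # e.g. a warning system down
}

# An accident requires the hazard to pass through a hole in EVERY layer
p_accident = prod(layers.values())
print(f"{p_accident:.0e}")  # 2e-05
```

Any single layer catching the hazard (probability zero for its hole) drops the product to zero, which is why defense in depth works.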
Abstraction vs. Hyperspecificity
Abstraction explains how people remember generalisations (the gist), NOT verbatim information. Exact wording and surface/perceptual information appear to be lost.
Researchers constructed sentences that contained four ideas, but presented participants with sentences containing only 3, 2 or 1 of those ideas. Participants then performed an old/new recognition test on a mix of old (previously presented) and new sentences. The more of the ideas a sentence contained, the more confident participants were that the sentence was old and they had seen it before, even when they hadn’t. This demonstrates that people remember the general gist of information, and the abstractions/inferences/generalisations they have made, but not word for word what was presented.
Research comparing John Dean’s Watergate testimony with Nixon’s tapes revealed that Dean was entirely truthful in substance, but inaccurate in detail.
Therefore, abstraction implies that LTM maintains the gist of information and the generalisations we have made, but not detail (surface/perceptual information).
Hyperspecificity is the reduction of the priming effect when the surface characteristics of an item are changed, demonstrating that surface/perceptual information is maintained in LTM.
Here, priming refers to the difference in response time between an item you have already seen and a new item.
Research has found that people respond about 100ms faster to the word ‘tiger’ when it has been presented before than to a word like ‘car’ that has not.
However, if you change the font and colour of the word ‘tiger’ on the second presentation, there is only a 50ms priming effect, demonstrating that the perceptual characteristics are maintained in LTM and facilitate retrieval.
Similar research first presented people with an elephant facing left. When later shown the same elephant facing left there was a 100ms priming effect, but when shown a different elephant facing right there was only a 55ms priming effect.
This demonstrates that the first 55ms speed up on RT is due to the conceptual (‘gist’) match; the fact that they’re both elephants.
There’s an extra 45ms speed up on RT if the perceptual (‘surface’) features match, too.
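The decomposition in the elephant example can be written out explicitly (values taken from the figures in the notes above, in milliseconds):

```python
# Splitting the total priming effect into conceptual and perceptual
# components, using the elephant figures from the notes (ms)
same_item_priming = 100   # same elephant, same orientation (gist + surface match)
conceptual_priming = 55   # different elephant, mirrored (gist match only)

# Whatever speed-up remains is attributable to the surface-feature match
perceptual_priming = same_item_priming - conceptual_priming
print(conceptual_priming, perceptual_priming)  # 55 45
```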
Research has shown that you’re better at reading the SAME inverted text (therefore, having the same surface features) a year later, than new inverted text.
Abstraction is generally found on unspecific tests, such as free recall (a direct, explicit memory test).
Hyperspecificity is generally found on specific tests, such as priming (an indirect, implicit memory test) and cued recall.
Hyperspecificity can be found on unspecific tests too, depending on novelty/distinctiveness. For example, you remember novel inverted text better and respond to it faster.
Simple or Complex Problems?
There are four criteria that make complex problems difficult and differentiate them from simple problems.
1. Complexity
- the number of elements
- interconnectedness of the elements
- type of connections: simple problems have linear connections, complex problems have exponential connections; people have trouble interpreting non-linearity
2. Number of goals
- complex problems have many goals, and people have trouble prioritising between goals
3. Intransparency
- nonobvious connections and interactions are characteristic of complex problems; for example, delays and butterfly effects make inferences and connections very difficult to see. Intransparency leads to ambiguity.
4. Dynamics
- complex problems involve dynamic systems that change independently of intervention, so you may assume that your action caused the outcome when it occurred by chance. Unforeseen changes lead to time pressure, which leads to shallow processing and premature action.
Heuristics: Mental Shortcuts to Make Decisions
Representativeness Heuristic - probabilities are evaluated by the degree to which one thing is representative of (or similar to) another. This leads to base-rate neglect, where the relative frequency of an event or category within the population is ignored.
For example, after being given a description of someone’s personality and then asked which profession he belongs to, you base your decision on how representative the personality is of the typical person in each career. Research has found that the judged probability of the person being in a career matches the degree of similarity between the description and that career; people fail to consider how many people are actually in each occupation.
For instance, even if the description matched that of a typical astronaut, if we looked at the base-rate of astronauts in the population, the chances of him being one are very low.
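A back-of-envelope Bayes calculation shows why ignoring the base rate is a mistake. All of the numbers below are made up for illustration (they are not from the notes or any study):

```python
# Base-rate neglect illustrated with Bayes' rule: even a description that
# fits astronauts very well yields a tiny posterior probability, because
# astronauts are so rare in the population.
def posterior(prior, hit_rate, false_alarm_rate):
    """P(astronaut | description) via Bayes' rule."""
    evidence = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / evidence

# Hypothetical figures: 1 in 100,000 people is an astronaut (prior); the
# description fits 90% of astronauts but also 5% of everyone else.
p = posterior(prior=1e-5, hit_rate=0.90, false_alarm_rate=0.05)
print(round(p, 5))  # 0.00018 - still almost certainly not an astronaut
```

The similarity of the description (the 90% hit rate) barely matters; the tiny prior dominates the answer, which is exactly the information the representativeness heuristic throws away.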
The Availability Heuristic - probabilities are evaluated by the ease with which instances and occurrences are brought to mind (how easy it is to think of examples).
For example, it is much easier to think of words that begin with r than words with r in the third position; however, there are actually more words with r in the third position.
Another example: if you are presented with an equal number of male and female names, and most of the female names belong to famous people, you will recall that more female names were presented, because they are easier to retrieve from memory. This is a retrievability bias.
A final example: events that are more imaginable, such as shark bites and plane crashes, are perceived as more probable than they actually are. This is an imaginability bias.
The Anchoring Heuristic - probabilities are evaluated by starting from an initial anchor that is then adjusted to yield the final answer.
This explains the human tendency to rely too heavily on the first piece of information received (the ‘anchor’). Once that anchor is placed, other judgements are biased by interpreting new information relative to it, leading to a strong bias towards initial information.
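The classic anchoring demonstration (from Tversky and Kahneman’s work, not mentioned in the notes above) asks people to estimate a product quickly: those shown the descending sequence give much higher estimates than those shown the ascending one, because they anchor on the first few partial products. The products are of course identical:

```python
from math import prod

# Same multiplication, two different anchors
descending = prod(range(8, 0, -1))  # 8*7*6*...*1 - anchored on large numbers
ascending = prod(range(1, 9))       # 1*2*3*...*8 - anchored on small numbers

print(descending, ascending)  # 40320 40320 - identical, yet estimates differ
```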
Continued Influence of Misinformation Effect
When jurors, or other decision makers, are presented with tainted evidence or misinformation that is subsequently retracted, they continue to use it anyway. This is the continued influence of misinformation effect: people are given information, are then told that the information is wrong and must be disregarded, but they use it regardless.
Evidence: when there is no confession (control group), mock jurors vote ‘guilty’ 19% of the time.
When a voluntary confession was ruled admissible, 63% voted guilty; when a coerced confession was ruled admissible, 50% did - showing that coerced/forced confessions are less believable than voluntary ones.
However, when there is a coerced confession and jurors are admonished to disregard it, the conviction rate SHOULD drop to the control level, since jurors state that the confession did not influence them. Instead, 43% voted guilty, far above the 19% control.
One explanation is that retracting information leaves gaps in the memory of an event. So when asked questions, people tend to use the retracted information anyway, because they prefer an incorrect model of an event to an incomplete one.
However, there are a few circumstances in which it is easier for people to remove information that has been retracted.
Firstly, if you provide the person with an alternative explanation, the retraction is more likely to be successful. Alternative explanations fill the gaps that retractions create.
Secondly, if the person is suspicious of a person’s ulterior motives, they are better able to discount misinformation
Researchers studied this directly. The control condition was given no information; the pretrial publicity condition was given inflammatory articles about the accused (e.g. that he tortured animals as a child); and the suspicion condition was given the same articles, followed by a suggestion that the prosecutor had planted them (making participants suspicious of the original information). Participants then rated the defendant guilty or not guilty based purely on the evidence in the court transcript (not the inflammatory articles). Conviction rates in the pretrial publicity condition rose above the control condition, while those made suspicious of the original information dropped below the control condition.
People also continued to use misinformation when they were not given an alternative explanation, and when they were given an extensive explanation of why the information was ruled inadmissible.
This was shown in a study where evidence was either ruled admissible, ruled inadmissible with a simple explanation, or ruled inadmissible with an extensive legal explanation.
The final condition, with an extensive explanation of why the information was inadmissible and should not be used, actually produced the highest conviction rate. This demonstrates reactance: the instruction to disregard evidence can backfire, and people do the exact opposite in an attempt to retain control.
The Process Dissociation Procedure Alleviates Conscious Contamination
Implicit memory tests are supposed to measure memory that is there, that you’re NOT consciously aware of.
Conscious contamination occurs when conscious recollection acts on implicit memory tests.
For example, after studying a list of words, a standard implicit memory test asks you to complete a word stem (the first few letters of a word) with “the first word that comes to mind.” But how do you know that what you’ve retrieved reflects unconscious memory rather than conscious recollection?
The process dissociation procedure gives a more process-pure measure of implicit memory, by dissociating between conscious and unconscious recollection, removing any conscious contamination.
There are two conditions:
1. The Inclusion Condition, including conscious recollection (R) and unconscious recollection (U). For example, in this condition they are asked to “Use the studied word, or if that fails, use any word.”
2. The Exclusion Condition, only including unconscious recollection (U), where participants are asked to “use a new word NOT studied previously.” So if they remember that a word was studied, they will exclude it.
Then we measure the completion rate with the studied word.
In the Inclusion condition, there are two situations where a studied word may arise. They may remember the word, and use it through conscious recollection. Or they may not remember the word, but unconsciously recollect it.
In the exclusion condition, there is only one situation where a studied word may arise. If they remember the word, they are told NOT to use it. However, if they don’t remember a word, they may still use it due to unconscious recollection.
So then we can get a pure measure of conscious recollection (R) by subtracting the exclusion rate from the inclusion rate.
R = Inclusion - Exclusion
After we have a pure measure of explicit memory, we can then get a pure measure of implicit memory.
U = Exclusion / (1-R)
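The two formulas above can be put together in a short worked example. The completion rates below are made up for illustration (not from the notes):

```python
# Worked example of the process dissociation procedure
def process_dissociation(inclusion, exclusion):
    """Estimate conscious (R) and unconscious (U) recollection from the
    rates of completing stems with studied words in each condition."""
    R = inclusion - exclusion  # R = Inclusion - Exclusion
    U = exclusion / (1 - R)    # U = Exclusion / (1 - R)
    return R, U

# Suppose studied words complete 60% of stems under inclusion instructions,
# but still 20% of stems under exclusion instructions
R, U = process_dissociation(inclusion=0.60, exclusion=0.20)
print(round(R, 2), round(U, 2))  # 0.4 0.33
```

Intuitively: studied words appear under exclusion only when participants fail to consciously recollect them (probability 1 - R) yet produce them unconsciously (probability U), which is why dividing the exclusion rate by 1 - R recovers U.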