English Linguistics
Piotr Pęzik
Thursday 15:15-16:45
05.10.2023
1. Phonetics – the study of the minimal units that make up language. The study of sounds in
speech in physical terms (consonants, vowels, melodies, and rhythms). Syllables; The
study of sounds, how they are made.
2. Phonology – the study of how sounds (phones) are organised within a language and how
they interact with each other. The system of sounds, can we make predictions about the
stress, etc.
3. Articulatory phonetics – the study of the production of speech sounds. It analyses how
words are produced in the speech organ. In articulatory phonetics, we want to know the
way in which speech sounds are produced - what parts of the mouth are used and in
what sorts of configurations.
4. Acoustic phonetics – the study of the transmission and the physical properties of speech
sounds. Physically analyses speech as vibrations of air (frequency, formants, sound
waves compositions);
5. Auditory phonetics – the study of the perception (reception) of speech sounds. Analyses
how speech is perceived.
6. X-rays - X-ray photography - this technique can reveal the details of the
functioning of the vocal apparatus. It shows how a sound is produced; we can actually
see it as it happens.
- Palatographs – an experimental method used to observe contact between the tongue and the
roof of the mouth. Can be static or dynamic.
- Spectrographs – equipment that generates a three-dimensional representation of sounds in
which the vertical axis represents frequency, the horizontal axis represents time, and the
darkness of shading represents amplitude.
- Phonetic transcription – a method of writing down speech sounds in order to capture what
is said and how it is pronounced. Usually based simply on how the sounds are perceived
when heard without any special analysis.
- Articulatory phonetics;
9. Describe the basic articulatory difference between the production of vowels and
consonants:
• Vowels – having only a slight narrowing while the air is flowing freely through the oral
cavity. We do not block the flow of air. Vowels are always voiced; The speech organs are
relatively apart;
• Consonants – are produced with a constriction somewhere in the vocal tract that
obstructs airflow. While producing a consonant we block or narrow the flow of air.
11. Why are diphthongs classified as vowels? – diphthongs are also commonly referred to as
glide vowels because the manner of their articulation involves a glide from one vowel to
another in the same syllable (complex two-part sounds). Diphthongs might also be
referred to as complex vowels because they are composed of two simple vowels. Their
articulation is quite similar to the way we pronounce single vowels.
15. How does one whisper? – when a person whispers, the vocal folds are in an
intermediate position in which they are partially open (the vocal folds are located
within the larynx, which also contains the glottis and sits in the throat at the top of
the trachea, at the Adam’s apple). While whispering, the vocal folds do not vibrate.
Glottis, vocal folds, …
19. Vowels: tongue advancement (front, back) – the action of the tongue to move forward or
pull back within the oral cavity:
• Front vowels – the tongue is advanced and moved forward;
• Back vowels – the tongue is retracted or pulled back;
• The central vowels ([ʌ] as in luck or [ə] as the first vowel in the word another)
require neither advancement nor retraction of the tongue;
12.10.2023
- Homework for 19.10.2023: to read chapter 3 “Phonology” and answer the questions;
Phonology:
1. Main areas of phonology
2. Phonotactic constraints:
• Consonantal clusters – rules that govern which sound sequences are possible in a
language and which are not; restrictions on possible combinations of sounds. Every
language has its own set of permitted segmental sequences; mostly constraints on
how we can organise sounds within a syllable;
• Syllable structure and distribution – many languages prefer to start a
syllable with a consonant first and a vowel second, but some languages allow more than
only one consonant in a syllable onset. English allows up to 3 consonants to start a word,
provided that the onset starts with [s], followed by [p], [t], or [k] (a voiceless plosive),
and then by [l], [j], or [ɹ].
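The three-consonant onset rule above can be sketched as a small checker. This is a toy simplification of the rule as stated in the notes; real English onsets have further wrinkles (e.g. [w] after [s] + [k], as in squeeze) that this sketch deliberately omits:

```python
# Toy checker for English three-consonant onsets, following the notes:
#   [s] + voiceless plosive {p, t, k} + approximant {l, j, r}.
VOICELESS_PLOSIVES = {"p", "t", "k"}
APPROXIMANTS = {"l", "j", "r"}  # "r" stands in for IPA [ɹ]

def valid_three_consonant_onset(onset):
    """Return True if a 3-segment onset obeys the s + plosive + approximant pattern."""
    if len(onset) != 3:
        return False
    first, plosive, approx = onset
    return first == "s" and plosive in VOICELESS_PLOSIVES and approx in APPROXIMANTS

print(valid_three_consonant_onset(["s", "p", "l"]))  # "splash" -> True
print(valid_three_consonant_onset(["s", "b", "l"]))  # -> False ([b] is voiced)
```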
19.10.2023
- Exam task – use the example above to explain the statement: Phonemes are abstract entities.
They are never pronounced:
Phonemes in English are the smallest abstract and meaningful units of speech, sound
categories that change the meaning of a word. Phonemes are never pronounced because they
are abstract. Allophones, on the other hand, are realisations of phonemes in real speech.
Allophones are noncontrastive, which means that they do not affect a word’s meaning. Any
sound that is pronounced is, in fact, an allophone.
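As a concrete illustration of the phoneme/allophone distinction, the English phoneme /t/ is commonly described as having several allophones. The mapping below lists standard textbook examples with simplified environment descriptions:

```python
# Illustrative mapping of the abstract phoneme /t/ to some of its
# concrete allophones (environments simplified for illustration).
allophones_of_t = {
    "tʰ": "aspirated, syllable-initially before a stressed vowel (top)",
    "t": "unaspirated after [s] (stop)",
    "ɾ": "flap between vowels in American English (butter)",
    "ʔ": "glottal stop in some dialects (button)",
}

for phone, environment in allophones_of_t.items():
    print(f"[{phone}]: {environment}")
```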
09.11.2023
1. Morphology:
- Definitions and overview of Morphology - morphology is a component of mental grammar
that studies the internal structure of words; Studies types of words and how words are
formed out of smaller meaningful pieces and other words; Major distinctions in
morphology – word-formation, morphological processes, morphological typology;
- Lexical categories – classes of words that differ in how other words can be constructed out of
them;
2. Word Formation:
- Derivation: the process of creating new words out of other words; Derivation takes one word
and performs one or more “operations” on it, the result being some other word, often of
a different lexical category; Derivational affixes may be attached either before or after
the stem;
* Open vs. Closed lexical categories – Open lexical categories (nouns, verbs,
adjectives, and adverbs) readily admit new members – new words added to the
language usually belong to these; closed lexical categories (prepositions,
determiners, pronouns, conjunctions) rarely gain new members;
- Terminology:
* Morphemes. Free vs. Bound, bound roots – Morpheme – the smallest linguistic unit
with a meaning (e.g. the morpheme cat) or a grammatical function (e.g. the morpheme
-ed that indicates past tense);
Free morphemes:
- Simple words which cannot be broken into smaller meaningful pieces; can stand
alone as independent words (e.g. cat, dog);
- Can stand alone as individual words and carry meaning by themselves, examples:
like, fortune, place;
Bound morphemes:
- Morphemes which cannot stand alone as independent words (affixes like -ing, -y);
they consist of only one morpheme but, unlike single-morpheme words, cannot stand
alone;
- Examples: un-, -ly, dis- (unhappy, happily, dislike);
Bound roots:
- Morphemes which on the one hand do seem to have some associated basic meaning
but on the other hand are unable to stand alone as words in their own right (e.g.
rasp- as in raspberry);
- Morphemes that cannot stand alone but carry the meaning, example: struct in words
such as construct, restructure;
* Productive vs. unproductive roots, e.g.: -ify vs. -ceive – many of the bound roots,
including -fer, -sist, and -ceive, are the result of English borrowings from Latin and are
not productive (currently not used to make new words);
* Roots vs. stems - A root – by definition it contains only one morpheme; A stem – may
contain more than one morpheme, but must have a lexical meaning; Affixes that follow a
stem are called suffixes, whereas affixes that precede a stem are called prefixes;
* Homophonous affixes – affixes that sound alike but have different meanings or
functions (affix -er which can be both derivational [used to derive an agent noun from a
verb – speak-speaker] and inflectional [marks comparative degree on adjectives and
adverbs – tall-taller, fast-faster]);
* Content vs. function words:
Content morphemes:
- Are said to have more concrete meaning than function morphemes;
- Include all derivational affixes, bound roots, and free roots that belong to the
lexical categories of noun, verb, adjective, and adverb;
- Content words – nouns, verbs, adjectives, and adverbs;
Function morphemes:
- Contain primarily grammatically relevant information;
- Include all inflectional affixes and free roots that belong to the lexical categories
of preposition, determiner, pronoun, or conjunction;
- Function words – prepositions, determiners, pronouns, and conjunctions;
3. Morphological Processes:
- Affixation - process that involves adding prefixes (un-, in-, dis-) or suffixes (-ness, -ly) to
existing words in order to create new words or modify their meaning or their
grammatical function, examples: happy → happiness, obedient → disobedient;
- Compounding - combining two or more words to create a new word with a combined
meaning, example: sun + flower = sunflower, book + shelf = bookshelf; Compounds that
have words in the same order as phrases have primary stress on the 1st word only, while
individual words in phrases have independent primary stress (bláckbird – compound;
bláck bírd – phrase);
- Reduplication - a process of forming new words by doubling either the entire free
morpheme (total reduplication) or part of it (partial reduplication); Reduplicant – the
reduplicated piece;
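The two reduplication patterns can be sketched in a couple of lines. The Indonesian plural orang → orang-orang is a standard example of total reduplication; the partial form below is invented for illustration:

```python
# Sketch of total vs. partial reduplication (toy rules for illustration).
def total_reduplication(word):
    """Double the entire free morpheme, e.g. Indonesian orang -> orang-orang."""
    return word + "-" + word

def partial_reduplication(word, n=2):
    """Copy only the first n segments as the reduplicant (invented toy rule)."""
    return word[:n] + "-" + word

print(total_reduplication("orang"))    # orang-orang ('people' in Indonesian)
print(partial_reduplication("takki"))  # ta-takki (hypothetical example)
```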
- Alternations - morpheme-internal modification (man-men, goose-geese, foot-feet);
- Suppletion - a root that has one or more inflected forms phonetically unrelated to the shape
of the root (is-was, go-went);
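The processes above – affixation, alternation, and suppletion – can be contrasted in a toy English past-tense former. This is an illustrative sketch, not a real morphological analyser; spelling rules for the -ed suffix are ignored:

```python
# Toy past-tense former contrasting three morphological processes.
SUPPLETIVE = {"go": "went", "is": "was"}        # suppletion: phonetically unrelated forms
ALTERNATING = {"sing": "sang", "ring": "rang"}  # alternation: morpheme-internal vowel change

def past_tense(verb):
    """Return a past-tense form, trying suppletion, then alternation, then affixation."""
    if verb in SUPPLETIVE:
        return SUPPLETIVE[verb]
    if verb in ALTERNATING:
        return ALTERNATING[verb]
    return verb + "ed"  # default regular affixation (spelling rules ignored)

print(past_tense("go"))    # went
print(past_tense("sing"))  # sang
print(past_tense("walk"))  # walked
```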
4. Language Typologies:
- Analytic languages - these languages have a relatively simple grammar and rely on word
order and context to convey meaning (English). Word order is critically important to
encode syntactic roles; Made up of sequences of free morphemes – each word consists
of a single morpheme, used by itself with meaning and function intact; Purely analytic
languages are called isolating languages – do not use affixes to compose words; Word
order is used to show the functions of words in a sentence (non-analytic languages may
use morphology to mark these differences);
- Synthetic languages - these languages use affixes (prefixes and suffixes) and grammatical
cases to convey grammatical information (Polish and Russian). The word order is not
free, it is just less critically important to the meaning; Morphology is a key aspect in
synthetic languages because it encodes the meaning;
* Agglutinating - these languages have a highly regular system of affixation, in which
each affix represents a single grammatical feature (Turkish and Finnish). In these
languages, the morphemes are joined together relatively loosely, that is, it is
usually easy to determine where the boundaries between morphemes are;
Another characteristic feature – each bound morpheme carries only one piece of
information;
* Fusional - words are formed by adding bound morphemes to stems, just as in
agglutinative languages, but in fusional languages the affixes may not be easy to
separate from the stem; It is often hard to tell where one morpheme ends and the
next begins (Spanish); Another difference between agglutinative and fusional
languages – whereas agglutinative languages usually have only one meaning
indicated by each affix, in fusional languages a single affix more frequently
conveys several pieces of information simultaneously;
* Polysynthetic – these languages feature highly complex words that may be formed by
combining several stems and affixes; Complex words with extensive affixation;
Incorporating morphemes that represent different grammatical categories
(morphemes);
- Word order in synthetic languages - the word order is not free; it is just less critically
important to the meaning; Morphology is a key aspect in synthetic languages because it
encodes and conveys the meaning;
5. Word structure:
6. Morphology vs. Etymology – Etymology - the history of a word or phrase shown by tracing
its development and relationships; The scientific study of the origin and evolution of a
word's semantic meaning across time, including its constituent morphemes and
phonemes.
16.11.2023
1. Syntax:
- What is syntax? – a component of mental grammar which deals with how sentences and
other pieces of linguistic expressions can be constructed out of smaller phrases and
words; Subfield of linguistics which studies what the permissible syntactic combinations
of words are in a given language; Syntax is broadly concerned with how linguistic
expressions (pieces of language) combine with one another to form larger expressions;
- Grammaticality/acceptability (judgments): when a string of words really does form a
sentence of some language, we say it is grammatical (syntactically well-formed) in that
language; If some string of words does not form a sentence, we call it ungrammatical
(syntactically ill-formed) and mark it with a symbol *;
A grammaticality judgement – making a decision whether a string of words truly forms a
sentence of one’s native language; A reflection of speakers’ mental grammar; A fuzzy
concept;
- Syntax vs. Semantics and how compositionality is achieved in language: both syntax and
semantics are concerned with linguistic meaning; Different syntactic combinations
produce differences in meaning; In one sense, syntax and semantics are quite
independent from one another. It is possible to have a grammatical, syntactically well-
formed sentence with a bizarre meaning (syntax without semantics: Colorless green
ideas sleep furiously - Chomsky – syntactically speaking, it is a perfectly grammatical
sentence in English), and, conversely, it is possible to have a non-sentence (semantics
without syntax) whose meaning we can understand (*Me bought dog); There is another
way in which syntax is independent from semantics – the syntactic properties of
expressions cannot be predicted or explained on the basis of an expression’s meaning
(Sally ate vs. *Sally devoured) – eat does not require an object as opposed to devour; So
although these two verbs are very similar in meaning (semantics), their syntactic
properties are different;
- Words vs. Phrases:
- Principle of Compositionality: the overall meaning of phrases/sentences is derived from the
meaning of their components (i.e. words) in a certain syntactic configuration: The
Principle of Compositionality – the fact that the meaning of a sentence depends on the
meanings of the expressions it contains and on the way they are syntactically combined;
4. Syntactic Categories: a group of expressions that have very similar syntactic properties; All
expressions that belong to the same syntactic category have more or less the same
syntactic distribution;
- Syntactic distribution: refers to the set of syntactic environments in which an expression can
occur; If two expressions are interchangeable in all syntactic environments, we say that
they have the same syntactic distribution and therefore belong to the same syntactic
category;
- Phrase structure rules – used to capture patterns of syntactic combinations; Generative rules (how to
generate a sentence); They are similar to lexical entries (NP → {she, Fluffy, Bob, Sally, …})
except that they contain only names of syntactic categories (e.g. NP or VP, etc.); They do
not contain any linguistic form (e.g. Fluffy, Bob, etc.); The phrase structure rule that represents
the fact that if we combine a VP with an NP to its left, we can create a sentence, appears as
S → NP VP; We can conveniently display the way a sentence is built up from lexical
expressions using the phrase structure rules by means of a phrase structure tree:
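The phrase structure rules above (lexical entries plus S → NP VP) can be sketched as a tiny random generator. The word lists are illustrative: the NP entries echo the examples in the notes, while the VP entries are invented for the sketch:

```python
import random

# Toy phrase-structure grammar: S -> NP VP, plus lexical entries.
# NP words follow the notes; the VP rules are hypothetical additions.
RULES = {
    "S": [["NP", "VP"]],
    "NP": [["she"], ["Fluffy"], ["Bob"], ["Sally"]],
    "VP": [["slept"], ["saw", "NP"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol using a randomly chosen rule."""
    if symbol not in RULES:
        return [symbol]  # a terminal word: emit it as-is
    expansion = random.choice(RULES[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "Sally saw Fluffy"
```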
- An example of a syntactic ambiguity in English: prepositional phrase attachment
ambiguity (The cop saw the man with the binoculars); Does the prepositional phrase refer to the
verb or to the object of the verb?
7. Dependency Syntax:
- Dependency grammars are a broad class of (mainly) syntactic formalisms which are
distinguished from the somewhat more established and widely known constituency-based
syntactic representations. In general, dependency grammars share the basic assumption “that
syntactic structure consists of lexical elements linked by binary asymmetrical relations called
dependencies”
- The idea that sentences are “wholes” organised in terms of binary governor – dependent
relations between their lexical elements were explicated in Tesniere (1959) and several other
authors including Hays (1960), Gaifman (1965), Robinson (1970), Melčuk (1988), and others.
Covington notes that in contrast to constituency grammar, whose origins can be traced to the
ancient Stoics, “dependency grammar has apparently been invented many times and in many
places”
8. Dependency Trees:
- Dependency syntax – we always have relations between words; a tight relationship
between words; The governor and the dependent relation; Always defines direct
relations between words rather than between phrases;
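A dependency tree can be represented minimally as an array of governor indices, one per word. The arcs below, for "The cop saw the man", are an illustrative sketch; an artificial ROOT node at index 0 is assumed, as is common practice in dependency parsing:

```python
# Minimal dependency-tree sketch: each word points to its governor (head).
sentence = ["ROOT", "The", "cop", "saw", "the", "man"]
heads = [None, 2, 3, 0, 5, 3]  # heads[i] = index of the governor of word i

def dependents(head_index):
    """Return the indices of all words directly governed by head_index."""
    return [i for i, h in enumerate(heads) if h == head_index]

# 'saw' (index 3) governs both its subject and its object:
print([sentence[i] for i in dependents(3)])  # ['cop', 'man']
```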
30.11.2023
1. Phraseology:
- Branch of linguistics devoted to distinct aspects and types of multi-word structural units;
- Deals with one major theoretical question about human language;
- The essence of the language is its novelty – every time we say something, it is new;
- Rote recall – remembering language from memory (Chomsky); The vast majority of sentences
we produce are novel;
- Reusing language when we have a similar situation to one we have already encountered
improves fluency and reduces ambiguity (improving the chances of being understood);
- Idiomatic language – we tend to reuse idiomatic language;
- Phraseology - a branch of linguistics focused on identifying and classifying
phraseological units – units of prefabricated language, reused ready-made rather than built anew each time;
• Connotative meaning:
- Expressive connotations:
* Derogatory – “mutton dressed as lamb,” “to breed like rabbits”;
* Taboo – “get stuffed,” “son of a bitch”;
* Euphemistic – “the great divide,” “to live in sin,” “of a certain age”;
* Jocular/humorous – “Darby and Joan,” “to have a bun in the oven”;
- Stylistic connotations:
* Colloquial/informal – green fingers; every man Jack; full of beans; fine and
dandy; before you can say Jack Robinson; clear off!
* Slang – reach-me-downs; to kip down; on the never-never;
* Formal – the compliments of the season; a bone of contention; gainfully
employed; to beg the question; under the aegis;
* Literary – the alpha and omega; hermetically sealed; irretrievably lost; between
Scylla and Charybdis;
* Archaic – in days of yore; as it came to pass; thou shalt not kill;
* Foreign – casus belli; sine qua non; carte blanche; comme il faut;
- Register markers:
* Astronomy – black hole; red giant;
* Economics – a high flier; idle funds; intermittent dumping;
* Judicial – burden of proof; minister without portfolio; persona non grata;
* Medical – corpus luteum; benign tumour; Caesarean section; salt-and-pepper
fundus (= fundus oculi);
• Beyond sentences:
- Jokes, recipes – telling jokes using the exact same wording;
- Stories and fables – retelling the story using the same wording;
- Formulaic language in works of literature – Homer’s “Iliad” (originally an oral story
written down – prefabricated language included);
- Poems, plays, movie scripts;
• Discourse formulas:
- Formulas (formulae):
* Pragmatic markers (“social cohesion devices”): you know;
* Discourse markers (“text-linking devices”): I mean;
* Stance expressions;
* Modal particles: or something;
7.12.2023
1. Language Acquisition:
2. Acquisition Theories: a predominant theory assumes that part of our ability to acquire
language is innate and that children learn language by “inventing” the rules specific to their
language;
- Imitation Theory – inadequate;
- Reinforcement Theory – inadequate;
- Active Grammar Construction;
- Connectionist Theory;
- Social Interaction Theory;
- Lenneberg’s criteria:
1. The (linguistic) behaviour emerges before it is necessary – children start to speak
between the ages of 1 and 2, well before language is actually needed for survival;
2. Its appearance is not the result of a conscious decision – children do not make a
conscious choice about acquiring a native language;
3. Its emergence is not triggered by external events (though the surrounding
environment must be sufficiently “rich” for it to develop adequately) – native language is
not learnt as a result of something special triggering the learning (as opposed to playing
the piano); “Rich” environment – a child has to be exposed to the language;
4. Direct teaching and intensive practice have relatively little effect – children do not
seem to learn much if somebody points out the mistakes they made;
5. There is a regular sequence of “milestones” (well-known stages of language
acquisition) as the behaviour develops, and these can usually be correlated with age and
other aspects of development – children master linguistic skills in a certain order;
6. There is likely to be a “critical period” for the acquisition of the behaviour;
4. The Critical Period – the term describes a period of time in an individual’s life during which
a behaviour (in this case language) must be acquired, i.e. the acquisition will fail if it is
attempted either before or after the critical period (Lenneberg); The critical period for
language acquisition – from birth to approximately the onset of puberty;
- Problematic evidence – there might be numerous variables affecting the child’s being unable
to speak after the critical period (abuse, etc);
5. Imitation Theory: claims that children learn language by listening to the speech around
them and reproducing what they hear; According to this theory, language acquisition
consists of memorising words and sentences of some language;
- Hitted? – some children might use words such as hitted instead of hit or goed instead of went;
This evidence goes against imitation theory since a child is highly unlikely to hear such
forms of words from their parents, caretakers, etc. Rather, it seems that the child who
says hitted has a rule in their internal grammar that adds -ed to a verb to make it past
tense;
6. Reinforcement Theory: asserts that children learn to speak like adults because they are
praised, rewarded, or otherwise reinforced when they use the right forms and are
corrected when they use wrong forms; This theory is contradicted by the fact that even
when adults do try to correct a child’s grammar, the attempts usually fail entirely;
7. Active Construction of a Grammar Theory: this theory suggests that children actually
invent the rules of grammar themselves; The theory assumes that the ability to develop
rules is innate, but the actual rules are based on the speech children hear around them;
Children listen to the language around them and analyse it to determine the patterns
that exist; A symbolic approach;
8. Connectionist Theories: this theory assumes that children learn language by creating
neural connections in the brain; Through these connections, the child learns
associations between words, meanings, sound sequences, and so on; Such theories
assume that the input children receive is indeed rich enough to learn language without
an innate mechanism to invent linguistic rules; Multimodal;
- Fring – frang/frought – given a novel verb such as fring, children may form irregular
pasts like frang or frought by analogy with sing, ring, and bring; something that is not
predicted by the Active Construction of a Grammar Theory but is predicted by the
Connectionist Theory;
9. Social Interaction Theory: assumes that children acquire language through social
interactions with older children and adults in particular; Placing a great deal of
emphasis on social interaction and the kind of input that children receive, instead of
assuming that simply being exposed to language will suffice; Adults use a child-directed
speech; Simplification of language;
- Phonology acquisition, examples of CV acquisition – since consonants and vowels are very
different sounds as regards their production, CV sequences are easy for babies to
pronounce and are acquired early; Very often, consonants like [l] and [r],
which share many properties of vowels and are thus difficult to distinguish from vowels,
are mastered last;
- How is children’s speech telegraphic? – because of the omission of function words, the
speech of young children is often called telegraphic (when you send a telegram, every
word you include costs you money; Therefore, you put in only the words you really need,
and not the ones that carry no new information); The principle of economy; However,
key semantic and syntactic roles are already included;
12. Child-directed speech: the problem is that young children know very little about the
structure and function of the language adults use to communicate with each other. As a
result, adult speakers often modify their speech to help children understand them.
Speech directed at children is called infant-directed speech or child-directed speech.
- Attention getters - consist of names and exclamations. For example, adults often use the
child’s name at the beginning of an utterance, as in Ned, there’s a car. Or, instead of the
child’s name, adults use exclamations like Look! or Hey!
The second class of attention getters consists of modulations that adults use to
distinguish utterances addressed to young children from utterances addressed to other
listeners. One of the most noticeable is the high-pitched voice adults use for talking to
small children.
Another modulation adults use is whispering.
- Attention holders - speakers often rely on gestures as well and may touch a child’s shoulder
or cheek, for example, as they begin talking. They also use gestures to hold a child’s
attention and frequently look at and point to objects they name or describe;
- The here and now - adults talk to young children mainly about the “here and now.” They
make running commentaries on what children do, either anticipating their actions, for
example, “Build me a tower now”, said just as a child picks up a box of building blocks, or
describing what has just happened: “That’s right, pick up the blocks”, said just after a
child has done so. Adults talk about the objects children show interest in. They name
them (“That is a puppy”), describe their properties (“He is very soft and furry”), and talk
about relations between objects (“The puppy is in the basket”). In talking about the “here
and now,” usually whatever is directly under the child’s eyes, adults are very selective
about the words they use.
- Taking turns - adults respond to infants during their very first months of life as though their
burps, yawns, and blinks count as turns in conversations. Whatever the infant does is
treated as a conversational turn, even though at this stage the adult carries the entire
conversation alone.
- Making corrections - the other type of correction adults make is of a child’s pronunciation. If
a child’s version of a word sounds quite different from the adult version, a listener may
have a hard time understanding what the child is trying to say. Grammatical errors tend
to go uncorrected as long as what the child says is true and pronounced intelligibly. In
correcting children’s language, adults seem to be concerned primarily with the ability to
communicate with a listener. In each instance, the adult speaker is concerned with the
truth of what the child has said;
- Prosody – when speaking to children, adults do this in four ways: they slow down; they use
short, simple sentences; they use a higher pitch of voice; and they repeat themselves
frequently.
- How important is child-directed speech? - it seems that child-directed speech can help
children acquire certain aspects of language earlier. For example, the hearing children of
deaf parents who use only sign language sometimes have little spoken language
addressed to them by adults until they enter nursery school.
Young Dutch children who watched German television every day did not acquire any
German. There are probably at least two reasons why children seem not to acquire
language from radio or television. First, none of the speech on the radio can be matched
to a situation visible to the child, and even on television people rarely talk about things
immediately accessible to view for the audience. Children therefore receive no clues
about how to map their own ideas onto words and sentences. Second, the stream of
speech must be very hard to segment: they hear rapid speech that cannot easily be
linked to familiar situations.
15.12.2023
1. Psycholinguistics - the study of how the human mind processes language in the acquisition,
perception, storage, and production of language.
- Psycholinguistics vs. Neurolinguistics - the study of the neural and electrochemical bases of
language development and language use. The study of language and the physical brain;
* Language and the brain - the human brain governs all human activities, including the
ability to comprehend and produce language. The brain is divided into two nearly
symmetrical halves, the right and left hemispheres. Each hemisphere is further divided
into four areas of the brain called lobes. The temporal lobe is associated with the
perception and recognition of auditory stimuli; the frontal lobe is concerned with higher
thinking and language production; and the occipital lobe is associated with many
aspects of vision. The parietal lobe is least involved in language perception and
production. The two hemispheres are connected by a bundle of nerve fibres called the
corpus callosum. The brain is covered by a one-quarter-inch thick membrane called the
cortex. Most of the language centres of the brain are contained in the cortex.
* Language disorders;
* Speech production;
* Speech perception;
* Lexical access;
* Sentence processing;
- The left hemisphere - the left side of the brain; The location of many language-controlling
parts of the brain for most people; receives and controls nerve input from the right half
of the body. Language is predominantly processed in the left hemisphere; Some left-
handed people might have some language-controlling parts of the brain in the right
hemisphere;
- Wernicke’s area – an older term for the Sylvian parietotemporal (SPT) area and posterior
parts of the superior temporal gyrus (STG). Is involved in converting auditory and
phonological representations into articulatory-motor representations. Comprehension
of written and spoken language;
- Broca’s area - the inferior frontal gyrus (IFG) (also known as Broca’s area) appears to be
responsible for organizing the articulatory patterns of language and directing the motor
cortex, which controls movement, when we want to talk (this involves the face, jaw, and
tongue in the case of spoken language, and the hands, arms, face, and body in the case of
signed language). Broca’s area also seems to control the use of inflectional morphemes,
like the plural and past tense markers, and function words, like determiners and
prepositions. Language processing and speech production;
- Lateralisation, neural plasticity - each of the brain’s hemispheres is responsible for
different cognitive functions. This specialization is referred to as lateralization.
Lateralization happens in early childhood and can be reversed in its initial stages if there
is damage to a part of the brain that is crucially involved in an important function. This
ability of the brain to adapt to damage and retrain regions is called neural plasticity.
- Contralateralisation in language production – this means that the right side of the body is
controlled by the left hemisphere, while the left side of the body is controlled by the
right hemisphere. This contralateral connection means that sensory information from
the right side of the body is received by the left hemisphere, while sensory information
from the left side of the body is received by the right hemisphere.
* Dichotic listening experiments - experiments that rely on the existence of
contralateralization and are designed to test the location of language processing
centres. These tests show that responses to right-ear stimuli are quicker and more
accurate when the stimuli are verbal, while responses to left-ear stimuli are quicker
and more accurate when the stimuli are nonverbal.
* Split-brain experiments - further evidence for the locations of the language
processing centres comes from so-called split-brain patients. In one experiment, split-
brain patients are blindfolded, and an object is placed in one of their hands. The patients
are then asked to name the object. If an object is placed in a patient’s left hand, the
patient usually cannot identify the object verbally. If, however, the object is placed in the
patient’s right hand, he or she usually can name the object. When the object is in the
patient’s left hand, sensory information from holding the object, which in this case is
tactile information, reaches the right hemisphere. Since the corpus callosum is severed,
the information cannot be transferred to the left hemisphere; because the patient is then
unable to name the object despite being able to feel what it is, we conclude that the
language centres must be in the left hemisphere. When the object is in the patient’s right
hand, however, sensory information from holding the object reaches the left
hemisphere. In this case, the patient is able to name the object; therefore, the language
centres must be in the left hemisphere.
* Hemispherectomy - hemispherectomy, an operation in which one hemisphere or part
of one hemisphere is removed from the brain, also provides evidence for the location of
the language centres. It has been found that hemispherectomies involving the left
hemisphere result in aphasia much more frequently than those involving the right
hemisphere. This indicates that the left side of the brain is used to process language in
most people, while the right side has much less to do with language processing.
4. Speech production - when we send messages using language, that is, when we speak or sign,
the brain is involved in planning what we want to say and in instructing the muscles
used for speaking or signing. This process of sending messages is called speech
production.
- Serial models (e.g. Fromkin’s 1971) - one of the earliest and most prominent models of
speech production, proposing that utterance planning proceeds through a fixed sequence of stages.
Fromkin’s model of speech production:
1. Meaning is identified.
2. Syntactic structure is selected.
3. Intonation contour is generated.
4. Content words are inserted.
5. Function words and affixes are inserted.
6. Phonetic segments are specified.
Fromkin’s model suggests that planning an utterance progresses from meaning through the
selection of a syntactic frame to the choice of allophones. Fromkin’s model assumes that
utterance planning goes through the proposed stages in the order given. Such a model is called
serial because the different stages of the model form a series;
- Parallel models (e.g. Levelt’s 1989) - one of the most prominent models of speech
production. Assumes that the different stages involved in planning are all processed
simultaneously and influence each other. Such models are called parallel.
Levelt’s model of speech production:
1. Conceptualization - the level that corresponds to Fromkin’s 1st stage. Here the concepts of
what a speaker wants to express are generated.
2. Formulation - at this level the concepts to be expressed are mapped onto a linguistic form.
- Grammatical encoding (selection of syntactic frame and lexical items) - at the
grammatical encoding level, a syntactic structure and lexical items are selected. Thus,
this corresponds to Fromkin’s stages 2, 4, and 5.
- Phonological encoding (specification of phonetic form) - at the phonological encoding
level, the phonetic form is specified. This corresponds to Fromkin’s stages 3 and 6.
3. Articulation - the third level is the process of articulation, in which the phonetic plan
produced during formulation is executed by the articulatory motor system.
Levelt’s model differs from Fromkin’s mainly in that it allows feedback to occur in both
directions. In other words, later stages of processing can influence earlier stages.
- Production errors - sometimes something in the production process goes wrong, that is, we
make a production error or “slip of the tongue.” Such errors tell us about the psychological
reality of language: examples like these provide evidence that phones, morphemes, and words
are psychologically real - that is, they are part of our mental organization of the
speech wave.
* Inadvertent errors as production errors - only inadvertent errors can tell us
something about speech production. By production error we mean any inadvertent flaw
in a speaker’s use of his or her language. It is important to note that production errors are
unintentional: we say something that we did not intend to say.
* Types of production errors:
Anticipation - occurs when a later unit is substituted for an earlier unit or
when a later unit is added earlier in an utterance (intended: splicing from one
tape; actual: splacing from one tape);
Perseveration - can be seen as the opposite of anticipation: occurs when an
earlier unit is substituted for a later unit or when an earlier unit is added later in
an utterance (intended: splicing from one tape; actual: splicing from one type);
Addition - involves the addition of extra units (intended: spic and span;
actual: spic and splan);
Deletion - involves the omission of units (intended: his immortal soul;
actual: his immoral soul);
Metathesis - the switching of two units, each taking the place of the other
(intended: fill the pool; actual: fool the pill);
Spoonerism - when a metathesis involves the first sounds of two separate
words, the error is called a spoonerism (intended: dear old queen; actual:
queer old dean);
Shift - occurs when a unit is moved from one location to another (intended:
she decides to hit it; actual: she decide to hits it);
Substitution - happens when one unit is replaced with another (intended:
it is hot in here; actual: it is cold in here);
Blend - occurs when two words “fuse” into a single item (intended:
grizzly/ghastly; actual: grastly);
Malapropism - the substitution of a whole word by another, similar-sounding
word; malapropisms provide evidence that the mental lexicon is organized in
terms of sound as well as meaning - word choice is not purely semantic but is
also influenced by phonetic form (intended: spreading like wildfire; actual:
spreading like wildflowers);
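Several of these error types can be modeled as simple string transformations. Below is a toy, purely orthographic sketch of a spoonerism (swapping the onsets of two words). Real slips of the tongue operate on phonemes, not spelling, so the output only approximates attested examples like "queer old dean"; the vowel-letter heuristic is an assumption of this sketch.

```python
import re

def spoonerism(phrase: str) -> str:
    """Toy spoonerism: swap the initial consonant letters (a rough stand-in
    for the phonological onsets) of the first and last words of a phrase.
    Works on spelling, not actual phonemes, so results are approximate."""
    words = phrase.split()

    def split_onset(word):
        # Everything before the first vowel letter counts as the "onset" here.
        match = re.match(r"([^aeiou]*)(.*)", word)
        return match.group(1), match.group(2)

    onset1, rest1 = split_onset(words[0])
    onset2, rest2 = split_onset(words[-1])
    words[0] = onset2 + rest1
    words[-1] = onset1 + rest2
    return " ".join(words)

print(spoonerism("fill the pool"))   # → "pill the fool"
print(spoonerism("dear old queen"))  # → "qear old dueen" (cf. "queer old dean")
```

The same swap-or-move logic, applied to different units (sounds, morphemes, whole words) and positions, yields the other error types above.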
5. Speech perception - the process of receiving and interpreting messages is called speech
perception. Speech perception can be seen as the reverse of speech production: in
speech production, we have an idea that we turn into an utterance, whereas in speech
perception, we hear or see an utterance and decode the idea it carries.
- Dealing with variance – lack-of-invariance problem - problem in speech perception
because no sound is ever produced exactly the same way twice.
- Speaker normalisation - taking accent into account is one example of speaker
normalisation, the way we pay attention to what we know about the person talking
when we are trying to understand what she is saying. The modification of our
expectations or judgments about linguistic input to account for what we know about the
speaker.
- Categorical vs. continuous perception - one phenomenon that helps explain how we deal
with lack of invariance is categorical perception, which occurs when equal-sized
physical differences are not equal-sized psychologically. Phenomenon by which people
perceive entities differently after learning to categorize them: differences within
categories are compressed, and differences across categories are expanded. Many
experiments suggest that categorical perception also occurs in language, particularly in
consonant perception.
If voicing begins within roughly the first 30 milliseconds after the release (the voice onset
time, VOT), nearly 100% of listeners identify the sound as the voiced member of the pair; if
voicing begins later, identification as voiced drops to nearly 0% and listeners report the
voiceless counterpart. What matters is the categorical question of whether voicing starts
before or after the ~30-millisecond boundary, not the exact VOT value;
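This step-like boundary can be sketched as a mapping from a continuous physical scale onto discrete categories. The 30 ms boundary comes from the notes above; the labels /b/ and /p/ are an illustrative voiced/voiceless pair, not part of the original example.

```python
def perceived_category(vot_ms: float, boundary_ms: float = 30.0) -> str:
    """Map a continuous voice onset time (in ms) onto a discrete phoneme
    category. Perception is categorical: only which side of the boundary
    the VOT falls on matters, not its exact value."""
    return "/b/" if vot_ms < boundary_ms else "/p/"

# Equal 10 ms physical steps, but the percept changes only once, at the boundary:
for vot in range(0, 61, 10):
    print(f"VOT {vot:2d} ms -> {perceived_category(vot)}")
```

Within-category differences (e.g. 0 ms vs. 20 ms) are compressed to the same percept, while the equal-sized difference that crosses the boundary (20 ms vs. 40 ms) is perceived as a change of sound.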
- Is it inherited or developed? - for infants, perception is more continuous: they can produce
and distinguish a wider range of sound contrasts, but we lose this sensitivity as we grow up
and adjust our language abilities to the needs of our own language;
- Contextual perception – how we identify individual sounds depending on the context; Cat
and cool – the sound /k/ is pronounced differently depending on the following sound -
the context influences the sound quality; This means that how we identify an individual
sound depends on its context, that is, which sounds occur before and after it;
- Rate normalisation - the modification of our expectations or judgments about linguistic
input to account for what we know about the speech rate. Listeners adjust to a person’s
speaking rate incredibly fast, often within several hundred milliseconds, and decisions
about sound categories are then based on this rate adjustment.
- The McGurk Effect - it illustrates that we rely not only on the highly variable acoustic signal
but also on visual information to perceive sounds. The McGurk effect occurs when a
video showing a person producing one sound is dubbed with a sound-recording of a
different sound. The McGurk effect illustrates that, despite considerable variability in
the acoustic and visual signals, we are able to combine both types of information to
identify speech sounds. This shows that we rely on a mix of auditory and visual signals when
interpreting speech (we listen and we watch);
- Phoneme restoration – being able to identify a sound that was not actually produced,
because the sound fits in the context of the utterance.
6. Lexical Access - process by which we determine which word we are hearing. The words that
we know make up our mental lexicon, and in order to determine which word we are
hearing we need to filter through this imaginary dictionary in our heads to arrive at just
the word the speaker intended. Lexical access is made even more difficult due to the fact
that many words might fit into a sentence at a particular place, some words sound very
similar or even identical, and it is often not clear where one word ends and the next
begins in the spoken stream of language.
- The Full Listing Hypothesis - hypothesis that every word is stored as a separate entry in the
mental lexicon.
- Word Recognition – how we perceive somebody else’s speech;
* Resting activation - baseline level of how likely it is that a word or a phoneme will be
recognized. More frequent words are more likely to be activated, less frequent words are
less likely to be activated;
* vs. Spreading activation - activation that flows from words just accessed to other
related words, raising (or sometimes inhibiting) the resting activation of those related
words. For example, if we have just heard car, activation will spread to tire, and it will be
a little easier to recognize the word tire for hundreds of milliseconds.
* Frequency priming - one of the most important factors that affect word recognition is
how frequently a word is encountered in a language. This frequency effect describes the
additional ease with which a word is recognized because of its more frequent usage. For
example, some words (such as better or TV) occur more often than others (such as
debtor or mortgage), and words that occur more frequently are easier to access.
* vs. Recency priming - people also recognise a word faster when they have just heard
it or read it than when they have not recently encountered it; this phenomenon is known
as repetition priming.
* Cohort models - model of lexical access in which possible words in the mental lexicon
are identified based on the initial sounds of the word; impossible words are eliminated
as the auditory input progresses. A word is accessed once all other competitor words
are eliminated. We generate the initial cohort, a list of all the words we know that begin
with this sound. As more sounds are heard, words that do not match the input will be
removed from the cohort, the list of remaining possible words consistent with the
incoming sound string. At some point, possibly even before the end of the spoken word,
only one item will be left in the cohort and we will recognise it. The point at which this
happens is called the uniqueness point (the point at which only one candidate word remains
activated);
Cohort - the set of words that are currently activated; we also predict what will be said next
from the context;
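The cohort-filtering procedure described above can be sketched as follows. This is a toy version that uses letters in place of incoming phonemes; the five-word lexicon is purely illustrative.

```python
def uniqueness_point(target: str, lexicon: list[str]) -> int:
    """Return the 1-based position in `target` at which the cohort
    (all lexicon words consistent with the input heard so far) shrinks
    to a single candidate."""
    cohort = list(lexicon)  # before any input, every known word is possible
    for i in range(1, len(target) + 1):
        prefix = target[:i]
        # Words that no longer match the incoming string are eliminated.
        cohort = [word for word in cohort if word.startswith(prefix)]
        if len(cohort) == 1:
            return i
    return len(target)

lexicon = ["candle", "candy", "canteen", "cap", "dog"]
# "can" leaves {candle, candy, canteen}; "cand" leaves two; "candl" leaves one.
print(uniqueness_point("candle", lexicon))  # → 5, before the word even ends
```

Note how the uniqueness point can fall before the end of the word, which matches the claim that a word may be recognised before the speaker finishes saying it.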
7. Sentence Processing:
- Lexical ambiguity - the phenomenon where a single word is the form of two or more distinct
linguistic expressions that differ in meaning or syntactic properties.
- Structural ambiguity - structural ambiguity occurs when a string of words has two or more
different possible parses resulting from different possible syntactic structures.
* Temporary ambiguity - a type of structural ambiguity that is present up until some
point during the processing of a sentence but that is resolved by the end of the sentence
(because, in fact, only one of the original parses is consistent with the entire sequence of
words). Temporary ambiguity is constantly present in everyday conversation.
* The Garden Path Effect - the phenomenon by which people are fooled into thinking a
sentence has a different structure than it actually does because of a temporary
ambiguity. As listeners comprehend temporarily ambiguous sentences, they sometimes
momentarily recover a meaning that was not intended by the speaker. These mistakes in
syntactic parsing are called garden path effects because the syntax of the sentence has
led the comprehender “down the garden path” (to a spot where they can go no further
and must retrace their steps). Garden path sentences are temporarily ambiguous and
initially interpreted to have a different syntactic structure than they turn out to have
(While Mary was knitting the scarf fell off her lap);
* Global ambiguity - the ambiguity is not resolved by the end of the utterance. Without
additional context (such as intonation or preceding/following sentences), there is no
way to determine what the intended structure and meaning are.
Prepositional phrase ambiguity (a classic example of global ambiguity) – The cop saw the man
with the binoculars;
10.01.2024
1. Coherence vs. Cohesion:
- Coherence is abstract, as it deals with ideas; cohesion is measurable, as it deals with the
actual written content;
- Coherence is a qualitative property; cohesion is a quantitative property;
- Coherence deals with semantics; cohesion focuses only on the grammatical and lexical
structure of sentences;
- Coherent content is always cohesive, but cohesive content does not need to be coherent;
- Topic sentences, the thesis statement and the summary are techniques used to achieve
coherence; repeated words/ideas, reference words, transition signals, substitution and
ellipsis are some of the techniques used to achieve cohesion;
- Coherence mainly deals with logic and the appropriate organization of sentences into
meaningful, understandable content; cohesion focuses more on lexis, syntax and grammar
in sentence formation.
2. Discourse Markers:
- Discourse markers – link 2 or more pieces of discourse (aka discourse-linking devices);
(Therefore, and so, etc);
- Pragmatic markers – used to maintain a relationship with an interlocutor (you know, yeah);
Direct influence;
- Modal Particles - lexical devices used to mark epistemic modality (or something, kind of);
they soften the force of what has already been said;
3. Corpus Linguistics:
- An approach to studying language where the central focus is on language corpora (empirical
evidence);
- Corpus data;
- What is a linguistic corpus? – a set of (naturally occurring) texts/speech
recordings/transcripts which is assumed to be representative of one or more genres,
registers and classes of linguistic phenomena; corpora are not totally random; two texts
is the minimum size of a corpus;
- Looking at a lot of language at once;
- Different corpora for spoken and written language;
- In vivo (naturally occurring), in vitro (collected for specific purposes) and in silico
(produced by language models, e.g. GPT-3) data;
- How serious are we about the corpus approach? Degrees of commitment range from very
serious to ignorant: corpus-driven, corpus-based, corpus-informed, corpus-illustrated and
corpus-ignorant linguistics;
- Corpus annotation:
* Linguistic;
* Bibliographic – description of the text (author, publisher, publication date, title, etc.);
* Sociolinguistic – age, sex, education of a person, etc;
* Structural – sentences, paragraphs, etc;
- Monitor corpus – an open-ended, continually updated (often public) corpus;
- Spoken corpus – transcribed recordings;
- Parallel corpus – a corpus of texts aligned with their translations in one or more other
languages;
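A minimal example of working with corpus data is a frequency list plus a KWIC (Key Word In Context) concordance, the standard display in corpus tools. The sketch below uses a two-sentence toy corpus and a deliberately naive tokenizer; both are illustrative only, not how production corpus software works.

```python
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    """Naive tokenizer: lowercased word forms only (punctuation dropped)."""
    return re.findall(r"[a-z']+", text.lower())

def kwic(corpus: str, node: str, span: int = 2) -> list[str]:
    """Key Word In Context: each occurrence of `node` shown with up to
    `span` words of left and right context."""
    toks = tokens(corpus)
    lines = []
    for i, tok in enumerate(toks):
        if tok == node:
            left = " ".join(toks[max(0, i - span):i])
            right = " ".join(toks[i + 1:i + 1 + span])
            lines.append(f"{left} [{tok}] {right}".strip())
    return lines

corpus = "The cat sat on the mat. The dog saw the cat."
print(Counter(tokens(corpus)).most_common(2))  # [('the', 4), ('cat', 2)]
for line in kwic(corpus, "cat"):
    print(line)
# the [cat] sat on
# saw the [cat]
```

Frequency lists and concordances like these are the basic empirical evidence that corpus-driven and corpus-based work is built on.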
11.01.2024