
LINGUISTICS

A Paper in Phonetics

Arranged by:

  1. Suryo Diprojo (153211025)

  2. Rizal Syaiful H. (153211035)

Prodi Sastra Inggris

Fakultas Ilmu Tarbiyah dan Keguruan

Institut Agama Islam Negeri Surakarta

Academic Year 2016/2017

Preface

This is a course in PHONETICS. It is intended for students of linguistics and for those who are more concerned with studying the sounds of English. The first chapter presents an overview of the parts of the speech organs; the second chapter presents an overview of the production of speech sounds, covering articulatory phonetics, acoustic phonetics and auditory phonetics; the third chapter presents an overview of the system of voice; and the fourth chapter presents an overview of the knowledge of phonetics.

The kind of project that is most useful for students of general linguistics is to give a description of the major phonetic characteristics of some other language. Students of English might profitably try to describe an accent of English that is very different from their own. Each student might try to find a speaker of another language (or another accent) with whom to work. Then, using grammars, dictionaries, or whatever sources are available, the student could try to compile a list of words illustrating the major characteristics of that language. A phonetician should be able to produce the sounds he or she describes and to produce sounds described by others. Some people are naturally better at doing this than others, but everyone can improve his or her ability to a considerable extent by conscientiously working through exercises of the kind suggested here.

Introduction

As we know, each language in the world has its own style or form; because of this, phonetics serves as a standard way of describing the sounds of any language, so that people can learn languages other than their mother tongue. Phonetics is the part of linguistics that studies speech sounds. Speech sounds are vibrating particles of air, or sound waves, or, in other words, a kind of matter moving in space and time. Speech sounds are produced by the human organs of speech.

CHAPTER 2 The production of speech sounds

2.1 Articulators above the larynx

All the sounds we make when we speak are the result of muscles contracting. The muscles in the chest that we use for breathing produce the flow of air that is needed for almost all speech sounds; muscles in the larynx produce many different modifications in the flow of air from the chest to the mouth. After passing through the larynx, the air goes through what we call the vocal tract, which ends at the mouth and nostrils; we call the part comprising the mouth the oral cavity and the part that leads to the nostrils the nasal cavity. Here the air from the lungs escapes into the atmosphere. We have a large and complex set of muscles that can produce changes in the shape of the vocal tract, and in order to learn how the sounds of speech are produced it is necessary to become familiar with the different parts of the vocal tract. These different parts are called articulators, and the study of them is called articulatory phonetics.

Fig. 9 is a diagram that is used frequently in the study of phonetics. It represents the human head, seen from the side, displayed as though it had been cut in half. You will need to look at it carefully as the articulators are described, and you will find it useful to have a mirror and a good light placed so that you can look at the inside of your mouth.

i) The pharynx is a tube which begins just above the larynx. It is about 7 cm long in women and about 8 cm in men, and at its top end it is divided into two, one part being the back of the oral cavity and the other being the beginning of the way through the nasal cavity. If you look in your mirror with your mouth open, you can see the back of the pharynx.

ii) The soft palate or velum is seen in the diagram in a position that allows air to pass through the nose and through the mouth. Yours is probably in that position now, but often in speech it is raised so that air cannot escape through the nose. The other important thing about the soft palate is that it is one of the articulators that can be touched by the tongue. When we make the sounds k, g the tongue is in contact with the lower side of the soft palate, and we call these velar consonants.

iii) The hard palate is often called the "roof of the mouth". You can feel its smooth curved surface with your tongue. A consonant made with the tongue close to the hard palate is called palatal. The sound j in 'yes' is palatal.

iv) The alveolar ridge is between the top front teeth and the hard palate. You can feel its shape with your tongue. Its surface is really much rougher than it feels, and is covered with little ridges. You can only see these if you have a mirror small enough to go inside your mouth, such as those used by dentists. Sounds made with the tongue touching here (such as t, d, n) are called alveolar.

v) The tongue is a very important articulator and it can be moved into many different places and different shapes. It is usual to divide the tongue into different parts, though there are no clear dividing lines within its structure. Fig. 7 shows the tongue on a larger scale with these parts shown: tip, blade, front, back and root. (This use of the word "front" often seems rather strange at first.)

vi) The teeth (upper and lower) are usually shown in diagrams like Fig. 9 only at the front of the mouth, immediately behind the lips. This is for the sake of a simple diagram, and you should remember that most speakers have teeth to the sides of their mouths, back almost to the soft palate. The tongue is in contact with the upper side teeth for most speech sounds. Sounds made with the tongue touching the front teeth, such as English θ, ð (the first sounds of 'thin' and 'this'), are called dental.

vii) The lips are important in speech. They can be pressed together (when we produce the sounds p, b), brought into contact with the teeth (as in f, v), or rounded to produce the lip-shape for vowels like uː. Sounds in which the lips are in contact with each other are called bilabial, while those with lip-to-teeth contact are called labiodental.

The seven articulators described above are the main ones used in speech, but there are a few other things to remember. Firstly, the larynx (which will be studied in Chapter 7) could also be described as an articulator - a very complex and independent one. Secondly, the jaws are sometimes called articulators; certainly we move the lower jaw a lot in speaking. But the jaws are not articulators in the same way as the others, because they cannot themselves make contact with other articulators. Finally, although there is practically nothing active that we can do with the nose and the nasal cavity when speaking, they are a very important part of our equipment for making sounds (which is sometimes called our vocal apparatus), particularly nasal consonants such as m, n. Again, we cannot really describe the nose and the nasal cavity as articulators in the same sense as (i) to (vii) above.
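The groupings described in (i) to (vii) can be summarized in a small lookup table. The sketch below is only an illustration in Python; the layout, the function name place_of, and the contact descriptions are my own, while the sound-to-place pairings are the ones given in the text.

```python
# Places of articulation and the example sounds mentioned in (i)-(vii) above.
# Illustrative only: this is not an exhaustive inventory of English consonants.
PLACES_OF_ARTICULATION = {
    "bilabial":    {"contact": "the two lips",                 "examples": ["p", "b"]},
    "labiodental": {"contact": "lower lip and upper teeth",    "examples": ["f", "v"]},
    "dental":      {"contact": "tongue and front teeth",       "examples": ["θ", "ð"]},
    "alveolar":    {"contact": "tongue and alveolar ridge",    "examples": ["t", "d", "n"]},
    "palatal":     {"contact": "tongue near the hard palate",  "examples": ["j"]},
    "velar":       {"contact": "tongue back and soft palate",  "examples": ["k", "g"]},
}

def place_of(sound):
    """Return the place of articulation for one of the example sounds, if listed."""
    for place, info in PLACES_OF_ARTICULATION.items():
        if sound in info["examples"]:
            return place
    return None

print(place_of("k"))  # velar
print(place_of("ð"))  # dental
```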

2.2 Acoustic phonetics

We can hear that sounds of the same length can differ from one another in three ways: they can be the same or different in pitch, loudness and quality. Thus two vowel sounds may have exactly the same pitch and loudness but might differ in that one might be (e) and the other (o). On the other hand, they might have the same vowel quality, but differ in that one was said on a higher pitch than the other, or that one of them was spoken more loudly. In this chapter we will discuss each of these three aspects of speech sound and consider the techniques of experimental phonetics that may be used for recording them.

Sound Waves

Sound consists of small variations in air pressure that occur very rapidly one after another. These variations are caused by actions of the speaker's vocal organs that are (for the most part) superimposed on the outgoing flow of lung air. Thus, in the case of voiced sounds, the vibrating vocal cords chop up the stream of lung air so that pulses of relatively high pressure alternate with moments of lower pressure. In fricative sounds, the airstream is forced through a narrow gap so that it becomes turbulent, with irregular variations in air pressure.
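The contrast drawn here, regular pulses for voiced sounds versus turbulent airflow for fricatives, can be imitated with two artificial signals. This is a minimal sketch only; the use of NumPy, the sample rate and the 120 Hz frequency are assumptions of mine, not values from the text.

```python
import numpy as np

SAMPLE_RATE = 16_000                      # samples per second (an assumed value)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of time points

# A voiced sound: roughly periodic variations in air pressure.  A 120 Hz
# sine wave stands in for the regular pulses produced by the vibrating
# vocal cords (120 Hz is an arbitrary, speech-like value).
voiced = np.sin(2 * np.pi * 120 * t)

# A fricative: turbulent, irregular pressure variations, imitated here
# by random noise.
rng = np.random.default_rng(0)
fricative = rng.normal(0.0, 0.3, size=t.shape)

# The periodic signal repeats about every SAMPLE_RATE / 120 samples;
# the noise has no repeating period.
print("samples per voicing cycle:", SAMPLE_RATE / 120)
print("voiced peak-to-peak:", voiced.max() - voiced.min())
print("noise peak-to-peak:", fricative.max() - fricative.min())
```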

Pitch and Frequency

Frequency is a technical term for an acoustic property of a sound, namely the number of complete repetitions (cycles) of variations in air pressure occurring in a second.

The pitch of a sound is the auditory property that enables a listener to place it on a scale going from low to high, without considering its acoustic properties. In practice, when a speech sound goes up in frequency, it also goes up in pitch (though equal steps of increasing frequency do not produce the effect of equal steps of increasing pitch).
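The point in brackets, that equal steps of frequency are not heard as equal steps of pitch, is often illustrated with a perceptual scale such as the mel scale. The mel formula below is not mentioned in the text; it is added here only as a hedged illustration of the idea.

```python
import math

def hz_to_mel(f_hz):
    """Frequency in Hz converted to the mel pitch scale (a standard formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# Equal 500 Hz steps in frequency correspond to shrinking steps in
# perceived pitch (mels): roughly 607, 1000, 1291 and 1521 mels.
for f in (500, 1000, 1500, 2000):
    print(f, "Hz ->", round(hz_to_mel(f)), "mels")
```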

Loudness and Intensity

In general, the loudness of a sound depends on the size of the variations in air pressure that occur. The intensity is proportional to the average size, or amplitude, of the variations in air pressure.
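In practice this relation is usually measured with the root-mean-square (RMS) amplitude of the pressure variations and expressed on the decibel scale. The snippet below is a small sketch of that standard calculation, not something specified in the text; the two signals and the reference level are arbitrary choices of mine.

```python
import math

def rms_amplitude(samples):
    """Root-mean-square amplitude of a sequence of pressure samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def level_in_db(amplitude, reference=1.0):
    """Amplitude expressed in decibels relative to a reference amplitude."""
    return 20.0 * math.log10(amplitude / reference)

# Two sine waves with the same frequency but different amplitudes.
quiet = [0.1 * math.sin(2 * math.pi * 120 * n / 16000) for n in range(16000)]
loud = [0.8 * math.sin(2 * math.pi * 120 * n / 16000) for n in range(16000)]

# The louder signal has larger pressure variations and so a higher level
# (about 18 dB higher here, since 20 * log10(0.8 / 0.1) is roughly 18).
print(round(level_in_db(rms_amplitude(quiet)), 1), "dB")
print(round(level_in_db(rms_amplitude(loud)), 1), "dB")
```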

2.3 Auditory phonetics

If articulatory phonetics studies the way in which speech sounds are produced, auditory phonetics focuses on the perception of sounds, or the way in which sounds are heard and interpreted. Remembering our conventional division of linguistic communication into several stages of a process unfolding between two parties, the sender of the message and its addressee, we may say that while articulatory phonetics is mainly concerned with the speaker, auditory phonetics deals with the other important participant in verbal communication, the listener. It is again, obviously, a field of linguistic study which has to rely heavily on biology, and more specifically on anatomy and physiology. We should say from the very beginning, however, that the mechanism and physiology of sound perception is a much hazier field than the corresponding processes related to the uttering of the respective sounds. This is so because speech production is a process that takes place roughly along the respiratory tract, which is, comparatively, much easier to observe and study than the brain, where most processes linked to speech perception and analysis occur.

Our presentation so far has already revealed a fundamental characteristic of auditory phonetics which essentially differentiates it from both articulatory and acoustic phonetics: its lack of unity. We are in fact dealing with two distinct operations which, however, are closely interrelated and influence each other: on the one hand we can talk about audition proper, that is, the perception of sounds by our auditory apparatus, the transforming of the information into a neural signal and its sending to the brain; and, on the other hand, we can talk about the analysis of this information by the brain, which eventually leads to the decoding of the message, the understanding of the verbal message. When discussing the auditory system we can consequently talk about its peripheral and its central part, respectively. We shall have a closer look at both these processes and try to show why they are both clearly distinct and at the same time closely related.

Before the sounds we perceive are processed and interpreted by the brain, the first anatomical organ they encounter is the ear. The ear has a complex structure, and its basic auditory functions include the perception of auditory stimuli, their analysis and their transmission further on to the brain. We can identify three components: the outer, the middle and the inner ear. The outer ear is mainly represented by the auricle, or pinna, and the auditory meatus, or outer ear canal. The auricle is the only visible part of the ear, constituting its outermost part, the segment of the organ projecting outside the skull. It does not play an essential role in audition, which is proved by the fact that removing the pinna does not substantially damage our auditory capacity. The auricle rather plays a protective role for the rest of the ear, and it also helps us localize sounds. The meatus, or outer ear canal, is a tubular structure playing a double role: it, too, protects the next segments of the ear, particularly the middle ear, and it also functions as a resonator for the sound waves that enter our auditory system.

The middle ear is a cavity within the skull including a number of little anatomical structures that have an important role in audition. One of them is the eardrum. This is a diaphragm or membrane to which sound waves are directed from outside and which vibrates, acting as both a filter and a transmitter of the incoming sounds. The middle ear also contains a few tiny bones: the mallet, the anvil and the stirrup. The pressure of the air entering our auditory system is converted, by the vibration of the membrane (the eardrum) and the elaborate movement of the little bones that act as a sort of lever system, into mechanical movement which is further conveyed to the oval window, a structure placed at the interface of the middle and inner ear. As pointed out above, the middle ear plays an important protective role. The muscles associated with the three little bones mentioned above contract in a reflex movement when sounds of too high an intensity reach the ear. Thus the impact of excessively loud sounds is reduced, and the mechanism diminishes the force with which the movement is transmitted to the structures of the inner ear. It is in the middle ear, too, that a narrow duct or tube opens. Known as the Eustachian tube, it connects the middle ear to the pharynx. Its main role is to act as an outlet permitting the air to circulate between the pharynx and the ear, thus helping preserve the required amount of air pressure inside the middle ear.

The next segment is the inner ear, the main element of which is the cochlea, a cavity filled with liquid. The inner ear also includes the vestibule of the ear and the semicircular canals. The vestibule represents the central part of the labyrinth of the ear, and it gives access to the cochlea. The cochlea is a coil-like organ, looking like the shell of a snail, and it contains a liquid; at its base are two small membrane-covered openings, the oval window and the round window. Inside the cochlea there are two membranes: the vestibular membrane and the basilar membrane. It is the latter that plays a central role in the act of audition.

Also essential in the process of hearing is the so-called organ of Corti, inside the cochlea, a structure that is the real auditory receptor. Simplifying a lot, we can describe the physiology of audition inside the inner ear as follows: the mechanical movement of the little bony structures of the middle ear (the mallet, the anvil and the stirrup) is transmitted through the oval window to the liquid inside the snail-like structure of the cochlea; this causes the basilar membrane to vibrate. The membrane is stiffer at one end than at the other, which makes it vibrate differently depending on the pitch of the sounds that are received. Thus, low-frequency (grave) sounds will make the membrane vibrate at its less stiff (upper) end, while high-frequency (acute) sounds will cause the lower, stiffer end of the membrane to vibrate. The cells of the organ of Corti, a highly sensitive structure because it includes many ciliated cells that detect the slightest vibrating movement, convert these vibrations into neural signals that are transmitted via the auditory nerves to the central receptor and controller of the entire process, the brain.

The way in which the human brain processes auditory information and, in general, the mental processes linked to speech perception and production are still largely unknown. What is clear, however, regarding the perception of sounds by the human auditory system, is that the human ear can only hear sounds having certain amplitudes and frequencies. If the amplitudes and frequencies of the respective sound waves are lower than the range perceptible by the ear, they are simply not heard. If, on the contrary, they are higher, the sensation they give is one of pain, the pressure exerted on the eardrums being too great. These aspects are discussed below, when the physical properties of sounds are analyzed. As to the psychological processes involved in the interpretation of the sounds we hear, our knowledge is even more limited. It is obvious that hearing proper goes hand in hand with the understanding of the sounds we perceive, in the sense of organizing them according to patterns already existing in our mind and distributing them into the famous acoustic images that Saussure spoke of. It is at this level that audition proper intermingles with psychological processes, because our brain decodes, interprets, classifies and arranges the respective sounds according to the linguistic (phonological) patterns already existing in our mind. It is intuitively obvious that if we listen to someone speaking an unknown language, not only will it be very difficult for us to understand what they say (this is out of the question, given the premise we started from), but we will also have great, often insurmountable, difficulties in identifying the actual sounds the person produced. The immediate, reflex reaction of our brain will be to assimilate the respective sounds to the ones whose mental images already exist in our brain, in keeping with a very common cognitive tendency of humans to relate, compare and contrast new information with information they already know. Our discussion of the phoneme in a subsequent chapter will analyze this in further detail.
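To make the point about limited amplitudes and frequencies concrete: the audible range for a young, healthy listener is commonly cited as roughly 20 Hz to 20,000 Hz. These figures are not given in the text above and are used here only as an assumed illustration.

```python
# Commonly cited limits of human hearing; they are not taken from the text
# above, and they vary with age and between individuals.
LOWEST_AUDIBLE_HZ = 20.0
HIGHEST_AUDIBLE_HZ = 20_000.0

def is_audible(frequency_hz):
    """Rough check: does the frequency fall inside the typical audible range?"""
    return LOWEST_AUDIBLE_HZ <= frequency_hz <= HIGHEST_AUDIBLE_HZ

for f in (5, 120, 1_000, 18_000, 40_000):
    print(f, "Hz audible?", is_audible(f))
```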

CHAPTER 3 The System Of Voice

3.1 Vowel and consonant

The words vowel and consonant are very familiar ones, but when we study the sounds of speech scientifically we find that it is not easy to define exactly what they mean. The most common view is that vowels are sounds in which there is no obstruction to the flow of air as it passes from the larynx to the lips. A doctor who wants to look at the back of a patient's mouth often asks them to say "ah"; making this vowel sound is the best way of presenting an unobstructed view. But if we make a sound like s, d it can be clearly felt that we are making it difficult or impossible for the air to pass through the mouth. Most people would have no doubt that sounds like s, d should be called consonants. However, there are many cases where the decision is not so easy to make. One problem is that some English sounds that we think of as consonants, such as the sounds at the beginning of the words 'hay' and 'way', do not really obstruct the flow of air more than some vowels do. Another problem is that different languages have different ways of dividing their sounds into vowels and consonants; for example, the usual sound produced at the beginning of the word 'red' is felt to be a consonant by most English speakers, but in some other languages (e.g. Mandarin Chinese) the same sound is treated as one of the vowels.

If we say that the difference between vowels and consonants is a difference in the way that they are produced, there will inevitably be some cases of uncertainty or disagreement; this is a problem that cannot be avoided. It is possible to establish two distinct groups of sounds (vowels and consonants) in another way. Consider English words beginning with the sound h; what sounds can come next after this h? We find that most of the sounds we normally think of as vowels can follow (e.g. e in the word 'hen'), but practically none of the sounds we class as consonants, with the possible exception of j in a word such as 'huge' (hjuːdʒ). Now think of English words beginning with the two sounds bɪ; we find many cases where a consonant can follow (e.g. d in the word 'bid', or l in the word 'bill'), but practically no cases where a vowel may follow. What we are doing here is looking at the different contexts and positions in which particular sounds can occur; this is the study of the distribution of the sounds, and is of great importance in phonology. Study of the sounds found at the beginning and end of English words has shown that two groups of sounds with quite different patterns of distribution can be identified, and these two groups are those of vowel and consonant. If we look at the vowel-consonant distinction in this way, we must say that the most important difference between vowel and consonant is not the way that they are made, but their different distributions. It is important to remember that the distribution of vowels and consonants is different for each language.
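The distributional check described in this paragraph (what can follow word-initial h, and what can follow bɪ) can be imitated on a very small scale. The sketch below uses ordinary spellings instead of real transcriptions and a tiny word list of my own, so it is only an illustration of the method, not an analysis of English.

```python
from collections import defaultdict

# A tiny illustrative word list (ordinary spellings, not transcriptions).
WORDS = ["hen", "hat", "hit", "hope", "huge",
         "bid", "bill", "bit", "big", "bin"]

def letters_following(prefix, words):
    """Collect the letters that occur immediately after a given word-initial prefix."""
    following = defaultdict(list)
    for word in words:
        if word.startswith(prefix) and len(word) > len(prefix):
            following[word[len(prefix)]].append(word)
    return dict(following)

# After initial "h" we mostly find vowel letters; after "bi" we mostly
# find consonant letters - a toy version of studying distribution.
print(letters_following("h", WORDS))
print(letters_following("bi", WORDS))
```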

We begin the study of English sounds in this course by looking at vowels, and it is necessary to say something about vowels in general before turning to the vowels of English. We need to know in what ways vowels differ from each other. The first matter to consider is the shape and position of the tongue. It is usual to simplify the very complex possibilities by describing just two things: firstly, the vertical distance between the upper surface of the tongue and the palate and, secondly, the part of the tongue, between front and back, which is raised highest. Let us look at some examples:

i) Make a vowel like the iː in the English word 'see' and look in a mirror; if you tilt your head back slightly you will be able to see that the tongue is held up close to the roof of the mouth. Now make an æ vowel (as in the word 'cat') and notice how the distance between the surface of the tongue and the roof of the mouth is now much greater. The difference between iː and æ is a difference of tongue height, and we would describe iː as a relatively close vowel and æ as a relatively open vowel. Tongue height can be changed by moving the tongue up or down, or moving the lower jaw up or down. Usually we use some combination of the two sorts of movement, but when drawing side-of-the-head diagrams such as Fig. 9 and Fig. 7 it is usually found simpler to illustrate tongue shapes for vowels as if tongue height were altered by tongue movement alone, without any accompanying jaw movement. So we would illustrate the tongue height difference between iː and æ as in Fig. 3.
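The two dimensions introduced in this example, tongue height (close vs. open) and the part of the tongue that is raised (front vs. back), can be recorded as simple features. The sketch below covers only the two vowels discussed above; the data layout and the helper describe are my own additions for illustration.

```python
# The two vowels from the example, described along the two dimensions
# discussed above.  This is an illustration, not a full vowel chart.
VOWELS = {
    "iː": {"example": "see", "height": "close", "part_raised": "front"},
    "æ": {"example": "cat", "height": "open", "part_raised": "front"},
}

def describe(symbol):
    v = VOWELS[symbol]
    return f"{symbol} (as in '{v['example']}') is a relatively {v['height']} {v['part_raised']} vowel"

print(describe("iː"))
print(describe("æ"))
```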

3.2 The diphthong

The diphthong as in 'high' and 'buy' moves toward a high front vowel, but in most forms of English it does not go much beyond a mid front vowel. A diphthong of this kind probably shows a smaller change in quality in your normal pronunciation than you might expect. The diphthong in 'how' usually starts with a slightly more front quality than that in 'high', but most speakers do not begin it with a fully front vowel such as the one in 'had'.
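A diphthong can thus be thought of as a starting vowel quality plus the quality it glides toward. The little structure below records the two diphthongs mentioned here in that way; the IPA symbols aɪ and aʊ, and the "high back" target for 'how', are standard descriptions added by me rather than taken from the text.

```python
# Diphthongs as a set of example words plus the quality they glide toward.
# The IPA symbols and the aʊ target are standard descriptions added for
# illustration; the text itself names only the example words.
DIPHTHONGS = {
    # moves toward a high front vowel (though usually not beyond mid front)
    "aɪ": {"examples": ["high", "buy"], "glides_toward": "high front"},
    # commonly described as moving toward a high back vowel
    "aʊ": {"examples": ["how"], "glides_toward": "high back"},
}

for symbol, info in DIPHTHONGS.items():
    words = ", ".join(info["examples"])
    print(f"{symbol} (as in {words}) glides toward a {info['glides_toward']} vowel")
```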

CHAPTER 4 The knowledge of phonetics

4.1 The “State” of Phonetic Transcription

Phonetic transcriptions, broad and narrow, are a valuable tool that can be used to inform speech-language pathologists (SLPs) about the status of clients' speech and language skills. Phonetic transcription requires students to listen to spoken language and categorize individual speech sounds into phonemic categories, despite the fact that the articulation and acoustic nature of the individual speech sounds may vary across linguistic contexts. Students desiring to become speech-language pathologists are often required to study phonetics, as it is pertinent to informing the diagnosis and treatment of individuals who may have speech/language disorders or differences (e.g., accents, dialects). Enrollment in a single phonetics course during an undergraduate communication sciences and disorders (CSD) program typically satisfies this requirement.

However, this single requirement may not be sufficient. Accordingly, the current state of phonetic transcription in the field of CSD necessitates further analysis and increased awareness in order to ensure that students are duly prepared to implement and interpret phonetic transcriptions effectively. There are several matters related to phonetic transcription that must be discussed, including students' individuality in, and ease of, learning phonetics; the content of current phonetics courses; and the application and frequency of use of phonetic transcription skills in clinical practice.
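As a small illustration of the broad transcription skill under discussion, a lookup of citation-form transcriptions might look like the sketch below. The transcriptions are standard dictionary-style broad forms for these particular words; the word list and the function are invented for the example and are not part of the article being summarized.

```python
# Broad (phonemic) transcriptions for a few English words, dictionary style.
# The word list and transcriptions are standard citation forms, added only
# to illustrate the idea of broad transcription.
BROAD_TRANSCRIPTIONS = {
    "cat": "/kæt/",
    "see": "/siː/",
    "yes": "/jes/",
    "thin": "/θɪn/",
}

def transcribe(word):
    """Return the broad transcription if the word is in this tiny lexicon."""
    return BROAD_TRANSCRIPTIONS.get(word.lower(), "<not in lexicon>")

print(transcribe("cat"))   # /kæt/
print(transcribe("thin"))  # /θɪn/
```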

4.2 Sociophonetics

‘Sociophonetic’ variation is a case in point. This term is one that has come to be used extensively in the last few years, referring usually to variation in speech that correlates with social factors like speaker gender, age, or social class. It is still something of an exception, though, to find experimental phonetic or phonological studies that include such social dimensions in their methodology, or to find reference to sociophonetics in textbooks, university course descriptions, or conference announcements. There are several reasons why sociophonetic factors have remained peripheral to phonetics and phonology. Above all, the dominance of particular theoretical models and methodological traditions has meant that social factors have been partitioned de facto from the ‘purely linguistic’.

References

Chapter 2: Articulatory, Auditory and Acoustic Phonetics. Phonology.pdf. www.ebooks.unibuc.ro. Downloaded on January 18, 2017.

https://www2.leeward.hawaii.edu/hurley/Ling102web/mod3_speaking/mod3docs/3_images/3_voca33.gif. Downloaded on January 18, 2017.

http://2.bp.blogspot.com/-Dwjhomi3APw/TdTObXw6iVI/AAAAAAAAAB4/3ZyVZUywLPM/s1600/Diagram%2Bof%2Bspeech%2Borgans.gif. Downloaded on January 18, 2017.

http://www.cs.tut.fi/~sgn14006/PDF/S01-phonetics. Downloaded on January 18, 2017.

https://www.internationalphoneticassociation.org/sites/default/files/IPA_Kiel_2015.pdf. Downloaded on January 18, 2017.

International Society of Phonetic Sciences (ISPhS) (2010). The Phonetician.

Aoyama, K., et al. (2004). Perceived phonetic dissimilarity and L2 speech learning: the case of Japanese /r/ and English /l/ and /r/. Journal of Phonetics, 32, 233–250.

Davidson, L. (2006). Phonology, phonetics, or frequency: Influences on the production of non-native sequences. Journal of Phonetics, 34, 104–137. www.elsevier.com/locate/phonetics

Ladefoged, P. (1982). A Course in Phonetics (2nd ed.). Harcourt Brace Jovanovich.

Foulkes, P., & Docherty, G. (2006). The social life of phonetics and phonology. Journal of Phonetics, 34, 409–438.

Randolph, C. (2015, October 7). The “State” of Phonetic Transcription in the Field of Communication Sciences and Disorders. Journal of Phonetics and Audiology. OMICS Publishing Group.

Roach, P. (2009). English Phonetics and Phonology: A Practical Course (4th ed.). Cambridge University Press.

Rietveld, T., & Gussenhoven, C. (1995). Aligning pitch targets in speech synthesis: effects of syllable structure. Journal of Phonetics.

CHAPTER 1 The parts of speech organs

1.1 Diagram of speech organs

1.2 Vocal fold

