

University of Latvia

Faculty of Modern Languages

English Department


Types of Tests Used in English Language Teaching

Bachelor Paper


Anželika Ozerova


Riga

2004


Declaration of Academic Integrity

I hereby declare that this study is my own and does not contain any unacknowledged material from any source.

Signed:

12 May, 2004


Abstract

The present paper attempts to investigate various types of tests and their application in the language classroom. The theoretical part deals with the basic data about testing, a comparison of such issues as assessment and evaluation, reasons for testing, types of tests (diagnostic, progress, achievement, placement and proficiency tests), test formats and ways of testing.

It relates theory to practice by analyzing two proficiency tests: the TOEFL and the CFC test. They are carefully discussed and compared to find similarities and differences in their structure and design. The conclusions drawn are based on the theory and on the analyses of the tests. The data obtained indicate that both tests, though sometimes different in purpose, design and structure, are constructed according to a universally accepted pattern.

Table of Contents

Introduction

Chapter 1

What is a test?

Chapter 2

2.1 Inaccurate tests
2.2 Validity
2.3 Reliability

Chapter 3

3.1 Diagnostic tests
3.2 Placement tests
3.3 Progress tests
3.4 Achievement tests
3.5 Proficiency tests

Chapter 4

4.1 Direct and Indirect testing
4.2 Discrete point and integrative testing
4.3 Criterion-referenced and Norm-referenced testing
4.4 Objective and Subjective testing
4.5 Communicative language testing

Chapter 5

5.1 Multiple choice tests
5.2 Short answer tests
5.3 Cloze tests and Gap-filling tests
5.4 C-Test
5.5 True/false items
5.6 Dictation
5.7 Listening Recall
5.8 Testing Grammar through Error-recognition Items
5.9 Controlled Writing
5.10 Free Writing
5.11 Test Formats Used in Testing Speaking Skills

Chapter 6

Analysis of the Test of English as a Foreign Language and Cambridge First Certificate test according to test design criteria

Conclusions

Theses

Bibliography

Appendix


Introduction

Among all the words used in a classroom there is one word that usually makes students shudder: “test”. There is hardly a person who would claim that s/he favours tests and finds them very motivating. However, tests cannot be avoided completely, for they are an inevitable element of the learning process. They are included in the school curriculum and are meant to check the students’ level of knowledge and what they are able to do; they can be administered at the beginning of the study year and at the end of it, and the students can be tested after working on new topics and acquiring new vocabulary. Moreover, students have to face tests in order to enter a foreign university or to reveal the level of their English language skills for themselves. For that purpose they take specially designed tests, namely the Test of English as a Foreign Language (TOEFL further in the text) and the Cambridge First Certificate (CFC further in the text). Although these tests sometimes serve different purposes and are unrelated, they are often quite similar in their design and structure. Therefore, the author of the paper is particularly interested in the present research, for she assumes it to be of great significance not only for herself, but also for individuals who are either involved in the field or simply want to learn more about the TOEFL and CFC tests, their structure, design and application. The present research will therefore present various aspects of the theory discussed, accompanied by a thoroughly analysed practical part.

Thus, the goal of the present research is to investigate various types of test formats and ways of testing, focusing particularly on the TOEFL and CFC tests, in order to see how the theory is used and can be applied in practice.

The hypothesis is as follows: although the TOEFL and CFC tests serve almost similar purposes and sometimes differ in design and structure, they are constructed according to a universally accepted pattern.

The enabling objectives are as follows:

· To review literature on the nature of tests in order to provide a theoretically well-motivated discussion of the choice of testing types;

· To analyse the selected types of tests, such as TOEFL and CFC tests;

· To draw relevant conclusions.

Methods of Research:

Theoretical:

1) Analytical and selective study of the theory available;

2) Juxtaposition of the ideas selected from theory, tested against practical evidence;

3) Drawing conclusions.

Practical:

· Selecting and adapting appropriate test types, such as the TOEFL and CFC tests, to exemplify the theory.

The paper consists of six chapters, each including sub-chapters. Chapter 1 discusses general data about tests. Chapter 2 describes reliability and validity. Chapter 3 focuses on various types of tests. Chapter 4 deals with ways of testing. Chapter 5 examines test formats used for the four language skills. Chapter 6 offers the practical part of the paper.

Chapter 1

What is a test?

Hicks (2000:155) considers the role of tests very useful and important, especially in language learning. A test is a means of showing both the students and the teacher how much the learners have learnt during a course. The author of the paper agrees with this statement, for she believes that in order to see whether the students have acquired the material and are making constant progress, the teacher will inevitably have to test his/her learners. It does not mean that the usual test format with a set of activities will be used all the time. To check the students’ knowledge the teacher can apply a great range of assessment techniques, including even the self-evaluation technique that is so beloved and favoured by the students. Moreover, according to Heaton (1990:6), tests can be used to display the strengths and weaknesses of the teaching process and help the teacher improve it. They can demonstrate what should be given more attention, worked on and practised. Furthermore, the test results will show the students their weak points, and if carefully guided by the teacher, the students will even be able to take remedial action.

Thompson (Forum, 2001) believes that students learn more when they have tests. Here we can both agree and disagree. Certainly, when preparing for a test, the student has to study the material that is supposed to be tested, but this does not necessarily mean that such learning will lead to acquisition and full understanding of it. On the contrary, it can often lead to pure cramming. That, consequently, results in a stressful situation for the student before or during the test, and the final outcome may be a complete loss of the studied material. We can base this statement on our own experience: when working at school, the author of the present research encountered such examples many times.

However, tests can often facilitate the students’ acquisition process; for example, when the students’ knowledge of irregular verb forms is to be checked. Being constantly tested by means of a small test, they can learn the forms successfully and transfer them to their long-term memory as well. On the other hand, according to Thompson, tests decrease practice and instruction time. What he means is that the students are, as it were, limited: they are exposed to practice of new material, yet the time allotted for it is often strictly prescribed and observed by the syllabus. That means there will be certain requirements as to when a test should be used; thus, the students find themselves within definite frames that the teacher has to employ. Nevertheless, there are advantages that tests can offer: they increase learning, for the students are supposed to study harder during the preparation time before a test.

Thompson (ibid.) quotes Eggan, who emphasises the idea that learners study hard for the classes in which they are tested thoroughly. Further, he cites Hilles, who considers that students want and expect to be tested. Nonetheless, this statement is rather generalised. Speaking about students at school, we can declare that there is hardly a student who truly enjoys tests and their procedure. Usually, what we see are just sour faces when a test is mentioned. According to Thompson, the above-mentioned idea could be applied to students who want to pass their final exams or to get a certificate such as the Test of English as a Foreign Language (TOEFL) or the Cambridge First Certificate (CFC). Mostly this concerns adults or students who have their own special needs, such as going abroad to study or work. This again supports the idea that the motivation factor plays a significant role in the learning process.

Moreover, too much testing can be disastrous. It can entirely change the students’ attitude towards learning the language, especially if the results are usually unsatisfactory, and decrease their motivation towards learning and the subject in general.

Furthermore, as Alderson (1996:212) assumes, we should not forget that when tests are administered the students receive less support from the teacher than is usual during exercises in the ordinary language classroom. The students have to cope on their own; they cannot rely on the help of the teacher if they are in doubt. During the usual procedure, when doing various activities, the students know they can count on the teacher’s help if they require it. They know the teacher is always near and ready to assist; therefore, no one is afraid to make a mistake and take a chance on doing the exercises. However, when writing a test and being left alone to deal with the test activities, the students panic and forget everything they knew before. The author of the paper believes that the first thing the teacher should do is teach the students to overcome their fear of tests, and the second is to help them acquire the ability to work independently, believing in their own knowledge. That ability, according to Alderson, is the main point, “the core meaning” of the test. The students should be given confidence. Here we can refer to Heaton (1990:7), who conceives, supported by Hicks, that encouragement of the students is a vital element in language learning. Another question that may emerge here is how to reach the goal described above, how to encourage the students. At this point we can speak about positive results. In fact, our success motivates us to study further and encourages us to proceed even if it is rather difficult and we are about to lose confidence in ourselves. Therefore, we can speak about tests as a tool to increase motivation. However, having failed a considerable number of times, the student would definitely oppose the previous statement. Hence, we can speak about assessment and evaluation as means of increasing the students’ motivation.

According to Hicks (2000:162), we often perceive the two terms evaluation and assessment as similar notions, though they are entirely different. She states that when we assess our students we are commonly interested in “how and how much our students have learnt”, but when we evaluate them we are concerned with “how the learning process is developing”. Both these aspects are of great importance for the teacher and the students and should be correlated in order to make evaluation and assessment “go hand in hand”. However, very frequently the teachers assess the students without taking the aspect of evaluation into account. According to Hicks, such assessment is typically applied in examinations that take place either at the end of the course or at the end of the school year. Such assessment is known as an achievement test. With the help of these tests the teacher receives a clear picture of what his/her students have learnt and what level they are at compared with the rest of the class. The author of the paper agrees that achievement tests are very essential for comparing how the students’ knowledge has changed during the course. This can be of great interest not only for the teacher, but also for the authorities of the educational establishment the teacher is employed by. Thus, evaluation of the learning process is not of major importance here. We can speak about evaluation when we deal with the “small” tests the teachers use during the course or study year. It is a well-known fact that these tests are employed in order to check how the learning process is going, where the students are, what difficulties they encounter and what they are good at. These tests are also called “diagnostic” tests; they can be of great help for the teacher: by judging from and analysing the results of the test, the teacher will be able to improve or alter the course and even introduce various innovations. These tests will define whether the teacher can proceed with new material or has to stop and return to what has not been learnt sufficiently in order to provide additional practice.

With respect to Hicks, we can present some of the useful and practical ideas she proposes for teachers to use in the classroom. In order to incorporate evaluation together with assessment she suggests involving the students directly in the process of testing. Before testing vocabulary the teacher can ask the students to guess what kind of activities could be applied in the test. The author of the paper believes that this will give them an opportunity to envisage how they are going to be tested, to be aware of it and anticipate it, and, most importantly, it will reduce the fear the students might feel. Moreover, at the end of each test the students could be asked for their reflections: if there was a multiple choice task, what helped them guess correctly and what they used for that, their schemata or just pure guessing; if there was a cloze test, did they use guessing from the context or some other skills, etc. Furthermore, Hicks emphasises that such analysis will show the students the way they are tested and help establish an appropriate test for each student. Likewise, evaluation will benefit the teacher as well. S/he will not only be able to discover the students’ preferences, but also find out why the students have failed a particular type of activity or even the whole test. The evaluation will determine whether something is really wrong with the structure or design of the test itself. Finally, the students should be taught to evaluate the results of the test. They should be asked to spot the places where they have failed and, together with the teacher, attempt to find out what exactly caused the difficulties. This will lead to consolidation of the material and maybe even to comprehension of it. And again the teacher’s role is very essential, for the students alone are not able to cope with their mistakes. Thus, evaluation is an inevitable element of assessment if the teacher’s aim is to design a test that will not make the students fail, but will, on the contrary, let them anticipate the test’s results.

To conclude, we can add, alluding to Alderson (1996:212), that the usual classroom test should not be too complicated and should not discriminate between the levels of the students. The test should test what was taught. The author of the paper is of the same opinion, for students are very different and the level of their knowledge differs as well. It is inappropriate to design a test of advanced level if among your learners there are those whose level hardly exceeds lower intermediate.

Above all, the tests should take the learners’ ability to work and think into account, for each student has his/her own pace, and some students may fail just because they have not managed to accomplish the required tasks in time.

Furthermore, Alderson assumes (ibid.) that the instructions of a test should be unambiguous. The students should clearly see what they are supposed and asked to do, so that they are not frustrated during the test. Otherwise, they will spend more time asking the teacher to explain what they are supposed to do than on completing the tasks themselves. Finally, according to Heaton (1990:10) and Alderson (1996:214), the teacher should not give the exact tasks studied in the classroom in the test. They explain this by the fact that when testing we need to learn about the students’ progress, not to check what they remember. The author of the paper concurs with this idea and assumes that one of the aims of a test is to check whether the students are able to apply their knowledge in various contexts. If this happens, it means they have acquired the new material.

Chapter 2

Reliability and validity

2.1 Inaccurate tests

Hughes (1989:2) conceives that one of the reasons why tests are not favoured is that they do not measure exactly what they are supposed to measure. The author of the paper supports the idea that it is impossible to evaluate someone’s true abilities by tests alone. An individual might be a bright student possessing a good knowledge of English but, unfortunately, fail the test due to his/her nervousness; or, vice versa, the student might have crammed the tested material without fully comprehending it. As a result, during the test s/he is just capable of reproducing what has been learnt by tremendous effort, which does not demonstrate the actual knowledge of the student (knowledge that, unfortunately, may not exist at all). Moreover, there can be an even more disastrous case, when the student has cheated and used his/her neighbour’s work. Apart from the above-mentioned, there can be other factors that lead to inadequate completion of the test (a sleepless night, various personal and health problems, etc.).

However, very often the test itself can provoke the students’ failure to complete it. With respect to linguists such as Hughes (1989) and Alderson (1996), we can state that there are two main causes of a test being inaccurate:

· Test content and techniques;

· Lack of reliability.

The first means that the test’s design should correspond to what is being tested. First, the test must contain the exact material that is to be tested. Second, the activities, or techniques, used in the test should be adequate and relevant to what is being tested. This means they should not frustrate the learners but, on the contrary, facilitate and help the students to write the test successfully.

The second means that one and the same test given at different times must yield the same scores. The results should not differ because of the shift in time. For example, a test cannot be called reliable if the scores gathered the first time the test was completed by the students differ from those gathered when it was administered for the second time, even though the learners’ knowledge has not changed at all. Furthermore, reliability can suffer due to the improper design of a test (unclear instructions and questions, etc.) and due to the way it is scored. The teacher may evaluate various students differently, taking different aspects into consideration (level of the students, participation, effort, and even personal preferences). If there are two markers, then there will definitely be two different evaluations, for each marker will have his/her own criteria for marking and evaluating one and the same work. For example, let us mention testing speaking skills. Here one of the markers will probably treat grammar as the most significant point to be evaluated, whereas the other will emphasise fluency more. Sometimes this can lead to arguments between the markers; nevertheless, we should never forget that the main figure we have to deal with is still the student.
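
To make the idea of score consistency more concrete, the following minimal Python sketch estimates test-retest and inter-rater consistency with Pearson's correlation coefficient, a standard way of quantifying how closely two sets of scores agree. The code is purely illustrative and not part of the original study; the score lists and the interpretation threshold are hypothetical.

```python
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equally long lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores of the same five students on two administrations
# of the same test (test-retest reliability) ...
first_sitting  = [78, 65, 90, 55, 82]
second_sitting = [80, 63, 88, 57, 85]

# ... and marks given by two different markers to the same five essays
# (inter-rater reliability).
marker_a = [7, 5, 9, 6, 8]
marker_b = [6, 5, 9, 5, 7]

print("test-retest consistency:", round(pearson(first_sitting, second_sitting), 2))
print("inter-rater consistency:", round(pearson(marker_a, marker_b), 2))
# A coefficient close to 1.0 suggests the test (or the marking) is consistent;
# a noticeably lower value signals the reliability problems described above.
```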

2.2 Validity

Now we can come to one of the most important aspects of testing: validity. According to Hughes, every test should be reliable as well as valid; both notions are crucial elements of testing. However, according to Moss (1994), there can be validity without reliability, and sometimes the border between these two notions simply blurs. Moreover, apart from those elements, a good test should be efficient as well.

According to Bynom (Forum, 2001), validity deals with what is tested and the degree to which a test measures what it is supposed to measure (Longman Dictionary, LTAL). For example, if we test the students’ writing skills by giving them a composition test on Ways of Cooking, we cannot call such a test valid, for it can be argued that it tests not the ability to write but knowledge of cooking as a skill. It is certainly very difficult to design a proper test with good validity; therefore, the author of the paper believes that it is very essential for the teacher to know and understand what validity really is.

According to Weir (1990:22), there are five types of validity:

· Construct validity;

· Content validity;

· Face validity;

· Wash back validity;

· Criterion-related validity.

Weir (ibid.) states that construct validity is a theoretical concept that involves other types of validity. Further, quoting Cronbach (1971), Weir writes that to construct or plan a test one should research the testee’s behaviour and mental organisation. It is the ground on which the test is based; it is the starting point for constructing test tasks. In addition, Weir presents Kelly’s idea (1978) that test design requires some theory, even if exposure to it is only indirect. Moreover, if we are able to define the theoretical construct at the beginning of the test design, we will be able to use it when dealing with the results of the test. The author of the paper assumes that a test appropriately constructed at the beginning will not provoke any difficulties in its administration and scoring later.

Another type of validity is content validity. Weir (ibid.) implies that content validity and construct validity are closely bound and sometimes even overlap. Speaking about content validity, we should emphasise that it is an inevitable element of a good test. What is meant is that the duration of classes or test time is usually rather limited, and if we teach a rather broad topic such as “computers”, we cannot design a test that would cover all aspects of that topic. Therefore, to check the students’ knowledge we have to choose from what was taught, whether specific vocabulary or various texts connected with the topic, for it is impossible to test the whole material. The teacher should not pick tricky items that were either only mentioned once or not discussed in the classroom at all, even though they belong to the topic. S/he should not forget that the test is not a punishment or an opportunity for the teacher to show the students that they are less clever. Hence, we can state that content validity is closely connected with the definite items that were taught and are supposed to be tested.

Face validity, according to Weir (ibid.), is not a matter of theory or sample design. It is how the examinees and the administrative staff see the test: whether it is construct and content valid or not. This will definitely include debates and discussions about a test; it will involve the teachers’ cooperation and the exchange of their ideas and experience.

Another type of validity to be discussed is wash back validity, or backwash. According to Hughes (1989:1), backwash is the effect of testing on the teaching and learning process. It can be both negative and positive. Hughes believes that if the test is considered a significant element, then preparation for it will occupy most of the time and other teaching and learning activities will be ignored. As far as the author of the paper is concerned, this is already a habitual situation in the schools of our country, for our teachers are faced with the centralised exams and all they have to do is prepare their students for them. Thus, the teacher starts concentrating purely on the material that could be encountered in the exam papers, referring to examples taken from past exams. Therefore, numerous interesting activities are left behind; the teachers are concerned only with the result and forget about the different techniques that could be introduced and later used by their students to make dealing with the exam tasks easier, such as guessing from the context, applying schemata, etc.

The problem arises when the objectives of the course followed during the study year differ from the objectives of the test. As a result we get negative backwash: for example, the students were taught to write a review of a film, but during the test they are asked to write a letter of complaint, which the teacher has not planned or taught.

Often negative backwash may be caused by inappropriate test design. Hughes further in his book speaks about multiple-choice activities designed to check the students’ writing skills. The author of the paper is very confused by that, for it is hard to imagine how writing an essay could be tested with the help of multiple-choice items. When testing an essay the teacher is first of all interested in the students’ ability to express their ideas in writing, how it has been done, what language has been used, whether the ideas are supported and discussed, etc. At this point the multiple-choice technique is highly inappropriate.

Notwithstanding, according to Hughes, apart from the negative side of backwash there is positive backwash as well. It could be the creation of an entirely new course designed especially to help the students pass their final exams. A test given in the form of final exams compels the teacher to reorganise the course and choose appropriate books and activities to achieve the set goal: passing the exam. Further, he emphasises the importance of partnership between teaching and testing. Teaching should meet the needs of testing; it can be understood in the sense that teaching should correspond to the demands of the test. However, this is rather complicated work, for, to the knowledge of the author of the paper, the teachers in our schools are not supplied with specially designed materials that could assist them in preparing the students for the exams. The teachers are given only vague instructions and are free to act on their own.

The last type to be discussed is criterion-related validity. Weir (1990:22) assumes that it concerns the relationship between test scores and an external criterion: either performance on an older, established test or future criterion performance. The author of the paper considers that this type of validity is closely connected with the criteria and evaluation the teacher uses to assess the test. It means that the teacher has to work out a definite evaluation system and, moreover, should explain what s/he finds important and worth evaluating and why. Usually the teachers design their own system; often these are points that the students can obtain by fulfilling a certain task. Later the points are added up and converted into the mark to be given. Furthermore, the teacher can have a special table of points and the corresponding marks. According to our knowledge, the language teachers decide on the criteria together during a special meeting devoted to that topic, and later they keep to it for the whole study year. Moreover, the teachers are supposed to acquaint their students with the evaluation system so that the students are aware of what they are expected to do.
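
The paragraph above mentions a special table with points and the corresponding marks. As a purely illustrative sketch of such a conversion table (the percentage thresholds and the mark scale are assumptions of this example, not figures taken from the paper), it could look as follows:

```python
# A minimal sketch of a points-to-mark conversion table of the kind
# described above. The thresholds and the mark scale are hypothetical
# and would normally be agreed on by the language teachers together.
CONVERSION = [          # (minimum percentage of points, mark)
    (90, 10),
    (80, 9),
    (70, 8),
    (60, 7),
    (50, 6),
    (40, 5),
    (0, 4),
]

def to_mark(points_earned, points_possible):
    """Convert raw test points into a mark using the agreed table."""
    percentage = 100 * points_earned / points_possible
    for threshold, mark in CONVERSION:
        if percentage >= threshold:
            return mark

# Example: 34 points out of 40 is 85 %, which falls into the "9" band.
print(to_mark(34, 40))   # -> 9
```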

2.3 Reliability

According to Bynom (Forum, 2001), reliability means that the test’s results will be similar and will not change if one and the same test is given on different days. The author of the paper is of the same mind as Bynom and presumes reliability to be one of the key elements of a good test in general. As has already been discussed, the essence of reliability is that the students’ scores for one and the same test, even if given at different periods of time and with a rather extended interval, will be approximately the same. This will not only show that the test is well organised, but will also indicate that the students have acquired the new material well.

A reliable test, according to Bynom, will contain well-formulated tasks rather than indefinite questions; the student will know what exactly should be done. The test will always present ready examples at the beginning of each task to clarify what should be done. The students will not be frustrated and will know exactly what they are asked to perform. However, judging from personal experience, the author of the paper has to admit that even such hints may confuse the students; they may fail to understand the requirements and, consequently, fail to complete the task correctly. This can be explained by the fact that the students are very often inattentive, lack patience and try to finish the test quickly without bothering to double-check it.

Further, regarding Heaton (1990:13), who states that a test can be unreliable if two different markers mark it, we can add that this factor should be accepted as well. For example, one member of the marking team may be rather lenient and have different demands and requirements, while the other may turn out to be too strict and pay attention to every detail. Thus, we come to another important factor influencing reliability, namely the marker’s comparison of the examinees’ answers. Moreover, we have to admit the rather sad but not exceptional fact that the marker’s personal attitude towards the testee can affect his/her evaluation. Nor can we exclude the various home or health problems the marker may be encountering at that moment.

To summarize, we can say that possessing validity and reliability is not enough for a good test. The test should also be practical, or in other words, efficient. It should be easily understood by the examinee, easily scored and administered, and, certainly, rather cheap. It should not last for an eternity, for both examiner and examinee may become tired during a five-hour non-stop testing process. Moreover, when testing the students the teachers should be aware of the fact that, together with checking their knowledge, the test can influence the students negatively. Therefore, the teachers ought to design a test that encourages the students rather than makes them doubt their own abilities. The test should be a friend, not an enemy. Thus, the issue of validity and reliability is very essential in creating a good test. The test should measure what it is supposed to measure, not knowledge beyond the students’ abilities. Moreover, the test will be a true indicator of whether the learning process and the teacher’s work are effective.

Chapter 3

Types of tests

Different scholars (Alderson, 1996; Heaton, 1990; Underhill, 1991) in their research ask the same question: why test, do teachers really need tests and for what purpose? They all agree that a test is not the teacher’s attempt to catch the students unprepared on material they are not acquainted with; nor is it merely a motivating factor for the students to study. In fact, the test is a request for information and a possibility to learn what the teachers did not know about their students before. We can add here that the test is important for the students, too, though they may be unaware of it. The test is supposed to display not only the students’ weak points, but also their strong ones. It can act as an indicator of the progress the student is gradually making in learning the language. Moreover, we can cite the idea of Hughes (1989:5), who emphasises that we can check the progress, general or specific knowledge of the students, etc. This claim leads us directly to the statement that for each of these purposes there is a special type of testing. According to some scholars (Thompson, 2001; Hughes, 1989; Alderson, 1996; Heaton, 1990; Underhill, 1991), there are four traditional categories or types of tests: proficiency tests, achievement tests, diagnostic tests, and placement tests. The author of the paper, having once been a teacher, can claim that she is acquainted with three of them and has frequently used them in her teaching practice.

In the following sub-chapters we intend to discuss the different types of tests and, where possible, to draw on our own experience in using them.

3.1 Diagnostic tests

It is wise to start our discussion with this type of testing, for it is typically the first step each teacher, even a non-language teacher, takes at the beginning of a new school year. In the establishment where the author of the paper was working, it was one of the main rules to start a new study year by giving the students a diagnostic test. Every year the administration of the school drew up a special plan in which every teacher was supposed to write when and how they were going to test their students. Moreover, the teachers were supposed to analyse the diagnostic tests, complete special documents and provide diagrams with the results of each class, or group if a class was divided. Then, at the end of the study year, the teachers were required to compare these results with those of the final, achievement test (see Appendix 1). The author of the paper has used this type of test several times, but had never gone deeply into the details of how it is constructed, why and what for. Therefore, the facts listed below were of great value for her.

Referring to the Longman Dictionary of LTAL (106), a diagnostic test is a test that is meant to show what the student knows and what s/he does not know. The dictionary gives the example of testing the learners’ pronunciation of English sounds. Moreover, the test can check the students’ knowledge before they start a particular course. Hughes (1989:6) adds that diagnostic tests are supposed to spot the students’ weak and strong points. Heaton (1990:13) compares this type of test with the diagnosis of a patient, and the teacher with a doctor who states the diagnosis. Underhill (1991:14) adds that a diagnostic test provides the student with a variety of language elements, which will help the teacher determine what the student knows or does not know. We believe that the teacher will intentionally include material that either is presumed to be taught by the syllabus or could be a starting point for a course, without knowledge of which further work is not possible. Thus, we fully agree with Heaton’s comparison of the test with a patient’s diagnosis. The diagnostic test gives the teacher a picture of the students’ current knowledge. This is very essential, especially when the students return from their summer holidays (which produces a rather substantial gap in their knowledge) or when the students start a new course and the teacher is completely unfamiliar with the level of the group. Hence, the teacher has to consider carefully the items s/he is interested in teaching. This consideration reflects Heaton’s proposal (ibid.), which stipulates that teachers should be systematic in designing tasks that are supposed to illustrate the students’ abilities, and that they should know what exactly they are testing. Moreover, Underhill (ibid.) points out that, apart from the above-mentioned, the most essential element of the diagnostic test is that the students should not feel depressed when the test is completed. Therefore, very often the teachers do not give any marks for the diagnostic test and sometimes do not even show the test to the learners unless the students ask the teacher to return it. Nevertheless, in our own experience, the learners, especially the young ones, are eager to know their results and even demand marks for their work. Notwithstanding, it is up to the teacher whether to inform his/her students of the results or not; in any case, the test provides valuable information mostly for the teacher and his/her plans for designing a syllabus.
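
Since a diagnostic test is meant to spot weak and strong points, its results are usually analysed item by item rather than as a single total. The following minimal Python sketch shows one way such an analysis could be tabulated for a class; the item names, answer counts and the 60 % threshold are invented for illustration and are not taken from the paper or its appendices.

```python
# Hypothetical diagnostic results: for each tested item, how many students
# in the class answered it correctly. In practice these counts would come
# from the marked test papers.
class_size = 12
correct_answers = {
    "Present Simple": 11,
    "Present Continuous": 9,
    "irregular verb forms": 5,
    "question formation": 4,
    "topic vocabulary: school": 10,
}

# Items answered correctly by fewer than 60 % of the class are flagged as
# weak points needing remedial practice; the threshold is arbitrary and
# would be chosen by the teacher.
THRESHOLD = 0.6

for item, correct in correct_answers.items():
    share = correct / class_size
    status = "weak point - plan remedial work" if share < THRESHOLD else "acquired"
    print(f"{item:28s} {share:5.0%}  {status}")
```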

Returning to Hughes (ibid.), we can emphasise his belief that this type of test is very useful for checking individual items. It means that the test can be applied to checking a definite item; it need not cover broader topics of the language. However, Hughes further assumes that such a test is rather difficult to design and that its size can even be impractical. It means that if the teacher wants to check the students’ knowledge of the Present Simple, s/he will require a great number of examples for the students to choose from. Composing such a test will demand tiresome work from the teacher and may even confuse the learners.

At this point we can allude to our experience of giving a diagnostic test in Form 5. It was a class the teacher had worked with before, and she knew the students and their level rather well. However, new learners had joined the class, and the teacher did not have the slightest idea about their abilities. It was obvious that the students worried about how they would accomplish the test and what marks they would receive. The teacher assured them that the test would not be graded; it was necessary for her to plan her future work. That was done to release the tension in the class and rid the students of the stress that might have been crucial for the results. The students immediately felt free and set to work. Later, when analysing and summarising the results, the teacher realised that the students’ knowledge was fairly good. Certainly, there were places where the students required more practice; therefore, during the next class the students were offered remedial activities on the points where they had encountered difficulties. Moreover, that was a case where the students were particularly interested in their marks.

To conclude, we can state that by interpreting the results of diagnostic tests the teacher, apart from understanding why the student has done the exercises one way and not another, will receive significant information about the group s/he is going to work with and can later use that information as a basis for forming the syllabus.

3.2 Placement tests

Another type of test we intend to discuss is the placement test. Consulting the Longman Dictionary of LTAL again (279-280), we can see that a placement test is a test that places students at an appropriate level in a programme or course. The term does not refer to the system and construction of the test, but to the purpose it is used for. According to Hughes (1989:7), this type of test is also used to decide which group or class the learner should join. This statement is fully supported by another scholar, Alderson (1996:216), who declares that this type of test is meant to show the teacher the students’ level of language ability. It helps to place the student in exactly the group that corresponds to his/her true abilities.

Heaton (ibid.) maintains that this type of testing should be general and should focus on a wide range of language topics, not just on a specific one. Therefore, the placement test is typically represented in the form of dictations, interviews, grammar tests, etc.

Moreover, according to Heaton (ibid.), the placement test should deal exactly with the language skills relevant to those that will be taught during a particular course. If our course includes development of the writing skills required for politics, it is not appropriate to test the writing required for medical purposes. Thus, Heaton (ibid.) presumes that it is fairly important to analyse and study the syllabus beforehand, for the placement test is completely tied to the future course programme. Furthermore, Hughes (ibid.) stresses that each institution will have its own placement tests meeting its needs; a test suitable for one institution will not suit the needs of another. Likewise, the matter of scoring is particularly significant in the case of placement tests, for the scores gathered serve as a basis for putting the students into the different groups appropriate to their level.

At this point we can attempt to compare a placement test and a diagnostic one. At first sight these two types of test may look similar. Both are given at the beginning of the study year and both are meant to determine the students’ level of current knowledge. However, if we consider the facts described in sub-chapter 3.1, we will see how they differ. A diagnostic test is meant to give a picture of the students’ general knowledge at the beginning of the study year so that the teacher can plan further work and design an appropriate syllabus for his/her students, whereas a placement test is designed and given in order to use the information about the students’ knowledge for putting them into groups according to their level of the language. Though both are used for the teacher’s planning of the course, their functions differ. A colleague of mine, who works at school, has informed me that they used a placement test at the beginning of the year and that it proved relevant and efficient for her and her colleague’s future teaching. The students were divided according to their English language abilities: the students with better knowledge were put together, whereas the weaker students formed their own group. This does not mean discrimination between the students. The teachers explained to the students the reason for such actions and why it was necessary: they wanted to provide appropriate teaching for each student, taking his/her abilities into account. The teachers altered their syllabus to meet the demands of the students. The result proved to be satisfying. The students with better knowledge progressed; no one held them back. The weaker students gradually improved their knowledge, for they received more attention than they would have in a mixed group.
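
To illustrate how placement scores might translate into groups of the kind described above, here is a minimal sketch; the student names, scores, cut-off bands and group labels are invented for the example and are not taken from any actual placement scheme.

```python
# Hypothetical placement results (student, score out of 100). In practice
# the scores would come from a dictation, interview or grammar test.
results = [
    ("Anna", 82), ("Boris", 47), ("Dace", 91), ("Egils", 58),
    ("Filips", 33), ("Gita", 74), ("Harijs", 65), ("Ilze", 52),
]

# Arbitrary cut-off scores separating the groups; a real institution
# would set its own bands to meet its needs.
def group_for(score):
    if score >= 70:
        return "stronger group"
    if score >= 50:
        return "intermediate group"
    return "weaker group"

groups = {}
for name, score in results:
    groups.setdefault(group_for(score), []).append(name)

for group, members in groups.items():
    print(f"{group}: {', '.join(members)}")
```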

3.3 Progress tests

Having discussed two types of test that are usually used at the beginning, we can approach the test typically employed during the study year to check the students’ development: the progress test. According to Alderson (1996:217), a progress test shows the teacher whether the students have learnt the recently taught material successfully. Basically, the teacher intends to check certain items, not the general topics covered during the school or study year. Commonly, such a test is not very long and is designed to check the recent material; therefore, the teacher might expect his/her learners to get rather high scores. This type of test is supposed to be used after the students have learnt a set of units on a theme or have covered a definite topic of the language. It will show the teacher whether the material has been successfully acquired or whether the students need additional practice before starting new material.

A progress test basically contains activities based on the material the teacher intends to check. To evaluate it the teacher can work out a certain system of points that are later converted into a mark. Typically, such tests do not influence the students’ final mark at the end of the year.

School authorities require the teachers to conduct progress tests as well; however, the teachers themselves decide on the necessity of applying them. Nevertheless, we can claim that the progress test is an inevitable part of the learning process. We can even take the responsibility of declaring that progress tests facilitate material acquisition in a way: the students, preparing for the test, look through the material again, and there is a chance it can be transferred to their long-term memory.

Further, we can turn to Alderson (ibid.), who presumes that this type of testing can function as a motivating factor for the learners, for success will develop the students’ confidence in their own knowledge and motivate them to study further more vigorously. If there are two or three students whose scores are rather low, the teacher should encourage them by providing support in future and conveying the idea that studying hard will allow them to catch up with the rest of the students sooner or later. The author of the paper, based on her experience, agrees with the statement, for she has noticed that weaker students, when they managed to write their test successfully, became proud of their achievement and started working better.

However, if the majority of the class scores rather low grades, the teacher should be cautious. This could be a signal that there is either something wrong with the teaching or that the students are poorly motivated or lazy.

3.4 Achievement tests

Apart from progress tests, teachers employ another type: the achievement test. According to the Longman Dictionary of LTAL (3), an achievement test is a test which measures the language someone has learned during a specific course, study or programme. Here the progress is significant and, therefore, is the main point tested.

Alderson (1996:219) posits that achievement tests are “more formal”, whereas Hughes (1989:8) assumes that this type of test fully involves teachers, for they are responsible for preparing such tests and giving them to the learners. He repeats the dictionary definition of achievement tests, adding only that they measure the success of individual students, of groups of students, or of the courses themselves.

Furthermore, Alderson (ibid.) conceives that achievement tests are mainly given at definite times of the school year. Moreover, they can be extremely crucial for the students, for they determine whether the students pass or fail.

At this point the author of the paper intends to compare progress and achievement tests. Again, if we look at these two types they might seem similar; however, this is not so. Drawing on the facts listed above (see sub-chapter 3.3), we can report that a progress test is typically used during the course to check the acquisition of a selected portion of material. An achievement test checks the acquisition of the material as well, but it differs considerably in its time of application: we basically use an achievement test at the end of the course to check the acquisition of the material covered during the study year, not bits of it as with a progress test.

Quoting Hughes (ibid.), we can differentiate between two kinds of achievement tests: final and progress achievement tests. Final tests are the tests usually given at the end of the course in order to check the students’ achieved results and whether the objectives set at the beginning have been successfully reached. Further, Hughes highlights that ministries of education, official examining boards, school administrations and even the teachers themselves design these tests. The tests are based on the curriculum and the course that has been studied. It is a well-known fact that teachers are usually responsible for composing such tests, and this requires careful work.

Alternatively, Alderson (ibid.) mentions two ways of using achievement tests: formative and summative. The notion of a formative test denotes the idea that, after evaluating the results of the test, the teacher will be able to reconsider his/her teaching and syllabus design and even slow down the pace of study to consolidate the material if necessary. Notwithstanding, these reconsiderations will not affect the present students who have taken the test; they will be applied to the future syllabus design.

Summative use deals precisely with the students’ success or failure. The teacher can immediately take up remedial activities to improve the situation.

Further, Alderson (ibid.) and Heaton (1990:14) stipulate that designing an achievement test is rather time-consuming, for the achievement test is basically devised to cover the broad range of material covered during the course. In addition, one and the same achievement test can be given to more than one class at school to check both the students’ progress and the teachers’ work. At that point it is very essential to consider the material covered by the different classes or groups: you cannot ask the students what they have not been taught. Heaton (ibid.) emphasises the close cooperative work of the teachers as a crucial element in test design. However, in the school where the author of the paper used to work the teachers did not cooperate in designing achievement tests; each teacher was free to write the test that best suited his/her children.

Developing the topic, we can focus on Hughes’ idea that there is an approach to designing a test called the syllabus-content approach. The test is based on the syllabus studied or the book used during the course. Such a test can be described as a fair test, for it focuses mainly on the detailed material that the students are supposed to have studied. Hughes (ibid.) points out that if the test is inappropriately designed, it can result in unsuccessful accomplishment of it. Sometimes the demands of the test may differ from the objectives of the course; therefore, the test should be based directly on the objectives of the course. Consequently, this will influence the choice of books appropriate to the syllabus and the syllabus itself. The backwash will be positive not only for the test, but also for the teaching. Furthermore, we should mention that the students have to know the criteria according to which they are going to be evaluated.

To conclude we shall state again that achievement tests are meant to check the mastery of the material covered by the learners. They will be great helpers for the teacher’s future work and will contribute a lot to the students’ progress.

3.5 Proficiency tests

The last type of test to be discussed is the proficiency test. According to the Longman Dictionary of LTAL (292), a proficiency test is a test which measures how much of a language a person knows or has learnt. It is not bound to any curriculum or syllabus, but is intended to check the learners’ language competence. Although some preparation and administration is done before the test is taken, it is the test’s results that are focused on. Examples of such tests are the American Test of English as a Foreign Language (TOEFL further in the text), which is used to measure the learners’ general knowledge of English in order to allow them to enter higher educational establishments or take up a job in the USA, and the Cambridge First Certificate test, which has almost the same aim as the TOEFL.

Hughes (1989:10) gives a similar definition of proficiency tests, stressing that it is not training that is emphasised, but the language. He adds that ‘proficient’ in the case of proficiency tests means possessing a certain ability to use the language for an appropriate purpose. This denotes that the learner’s language ability can be tested in various fields or subjects (art, science, medicine, etc.) in order to check whether the learner meets the demands of a specific field or not. This refers to TOEFL tests. Apart from the TOEFL we can speak about the Cambridge First Certificate test, which is general and does not concern any specific field. The aim of this test is to reveal whether the learner’s language abilities have reached a certain set standard. The test can be taken by anyone who is interested in testing their level of language knowledge. There are special test levels which can be chosen by a candidate; if a candidate has passed the exam, s/he can take another one of a different level. However, these tests are not free of charge, and an individual has to pay in order to take them.

Hughes (ibid.) supposes that the only factor such tests have in common is that they are not based on any course but are intended to measure the candidates’ suitability for a certain post or for a course at university. We can add that in order to pass these tests a candidate usually has to attend special preparatory courses.

Moreover, Hughes (ibid.) believes that proficiency tests affect learners more in a negative way than in a positive one.

The author of the paper both agrees and disagrees with Hughes’ statement. Definitely, such a test can leave the testee depressed and exhausted after a rather long testing session. Moreover, proficiency tests are rather impartial; they are not testee-friendly.

However, there is a useful factor among the negative ones: preparation for proficiency tests, for it involves all the language material, starting from grammar and finishing with listening comprehension. All four skills are practised during the preparation course; various reading tasks and activities are incorporated; writing is stressed, focusing on all possible types of essays, letters, reviews, etc.; speaking is practised as well. The whole material is consolidated many times over.

To summarize, we can claim that there are different types of tests that serve different purposes. Moreover, they are all necessary for the teacher’s work, for all of them, perhaps apart from the proficiency test, can contribute to successful material acquisition by learners.

Chapter 4

Ways of testing

In this chapter we will attempt to discuss various ways of testing and, where possible, compare them. We will start with the most general ones and move on to more specific and detailed ways of testing.

4.1 Direct and indirect testing

The first types of testing we intend to discuss are direct and indirect testing. First, we will try to define each of them; secondly, we will endeavour to compare them.

We will commence our discussion with direct testing, which according to Hughes (1989:14) means the direct involvement of the skill that is supposed to be tested. This view means that when applying direct testing the teacher is interested in testing a particular skill: e.g. if the aim of the test is to check listening comprehension, the students are given a test that checks their listening skills, such as listening to the tape and doing the accompanying tasks. Such a test will not engage the testing of other skills. Hughes (ibid.) emphasises the importance of using authentic materials, though we stipulate that the teacher is free to decide him/herself what kind of material the students should be provided with. If the teacher’s aim is to teach the students to comprehend real, native speech, s/he will apply authentic material in teaching and later, logically, in tests. Developing the idea, we can cite Bynom (2001:8), who assumes that direct testing introduces real-life language through authentic tasks. Consequently, it leads to the use of role-plays, summarising the general idea, providing the missing information, etc. Moving further and analysing the statements made by the linguists (Bynom, 2001; Hughes, 1989), we can posit that direct testing will be task-oriented, effective and easy to manage if it tests such skills as writing or speaking. This can be explained by the fact that the tasks intended to check the skills mentioned above give us precise information about the learners’ abilities. Moreover, we can maintain that when testing writing the teacher asks the students to produce a certain piece of writing, such as an essay, a composition or a reproduction, and this will be precisely the point the teacher intends to check. There will be certain demands imposed on the writing test; the teacher might be interested only in the students’ ability to produce the right layout of an essay without taking grammar into account, or, on the contrary, might be more concerned with grammatical and syntactical structures. As for testing speaking skills, here the author of the paper does not support the idea promoted by Bynom that it can be treated as direct testing. Certainly, there will be a certain task to involve the speaking skills; however, speaking is not possible without the employment of listening skills. This, in turn, means that apart from speaking skills the teacher will also test the students’ ability to understand the speech they hear, thus involving listening skills as well.

It is said that the advantage of direct testing is that it tests certain specific abilities, and preparation for it usually involves persistent practice of those skills. Nevertheless, the skills tested are deprived of an authentic situation, which may later cause difficulties for the students in using them.

Now we can shift to another notion: indirect testing. It differs from direct testing in that it measures a skill through some other skill. This can mean the incorporation of various skills that are connected with each other, e.g. listening and speaking skills.

Indirect testing, according to Hughes, tests the use of the language in real-life situations. Moreover, it suits all situations, whereas direct testing is bound to certain tasks intended to check a certain skill. Hughes (ibid.) assumes that indirect testing is more effective than direct testing, for it covers a broader part of the language. It means that the learners are not constrained to one particular skill and a relevant exercise. They are free to draw on all four skills; what is checked is their ability to operate with those skills and apply them in various, even unpredictable, situations. This is the true indicator of the learner’s real knowledge of the language.

Indirect testing has more advantages than disadvantages, although the one drawback, according to Hughes, is that this type of testing is difficult to evaluate. It can be frustrating to decide what to check and how to check it, and whether grammar should be weighted higher than composition structure or vice versa. The author of the paper agrees with that; however, based on her experience at school again, she must claim that it is not so easy to apply indirect testing. It can be rather time-consuming, for it is a well-known fact that the duration of a class is just forty minutes; moreover, it is rather complicated to construct an indirect test: it demands a lot of work, while our teachers are usually overloaded with a variety of other duties. Thus, we can only rely on the course books that supply us with a variety of activities that involve the cooperation of all four skills.

4.2 Discrete point and integrative testing

Having discussed the kinds of testing that deal with general aspects, such as certain skills and a variety of skills in cooperation, we can come to more detailed types: discrete point and integrative testing. According to the Longman Dictionary of LTAL (112), a discrete point test is a language test that is meant to test a particular language item, e.g. tenses. The basis of this type of test is that we can test the components of the language (grammar, vocabulary, pronunciation, and spelling) and the language skills (listening, reading, speaking, and writing) separately. We can declare that the discrete point test is the common test used by the teachers in our schools. Having studied a grammar topic or new vocabulary and having practised it a great deal, the teacher basically gives a test based on the covered material. This test usually includes the items that were studied and will never display anything else from a quite different field. The same concerns the language skills: if the teacher’s aim is to check reading skills, the other skills will be neglected. The author of the paper has used such tests herself, especially after a definite grammar topic was studied. She had to construct the tests herself, basing them on the examples displayed in various grammar books. These were usually gap-filling exercises, multiple choice items or cloze tests; an illustration of how a cloze passage is put together is given below. Sometimes a creative task was offered, where the students had to write a story involving the certain grammar theme that was being checked. According to her observations, the students who studied hard were able to complete them successfully, though there were cases when students failed. Now, having discussed the theory on validity, reliability and types of testing, it is even more difficult to decide who was really to blame for the test failures: either the tests were wrongly designed or there was a problem in the teaching. Notwithstanding, this type was and still remains the most common and accepted type in the schools of our country, for it is easy to design, it concerns a certain aspect of the language and it is easy to score. If we speak about types of tests, we can say that this way of testing refers more to a progress test (examples of this type of test can be seen in Appendix 2).
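
Of the formats just mentioned, the cloze test is traditionally built by deleting every n-th word of a passage. The following minimal Python sketch shows the mechanics of that procedure; the sample passage and the deletion interval are arbitrary choices made for this illustration.

```python
def make_cloze(text, n=7, start=2):
    """Delete every n-th word of a passage, starting from word `start`,
    and return the gapped text together with the answer key."""
    words = text.split()
    answers = []
    for i in range(start, len(words), n):
        answers.append(words[i])
        words[i] = "______"
    return " ".join(words), answers

passage = ("Tests cannot be avoided completely, for they are an inevitable "
           "element of the learning process and are included in the school "
           "curriculum to check what the students are able to do.")

gapped, key = make_cloze(passage, n=6)
print(gapped)
print("Answer key:", key)
```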

Nevertheless, according to Bynom (2001:8) there is a certain drawback to discrete point testing, for it tests only separate parts and does not show us the whole language. This is true if our aim is to incorporate the whole language; though if we are to check the exact material the students were supposed to learn, then why not use it.

Moving on, we come to integrative tests. According to the Longman Dictionary of LTAL, an integrative test intends to check several language skills and language components together, or simultaneously. Hughes (1989:15) stipulates that integrative tests display the learners’ knowledge of grammar, vocabulary and spelling together, not as separate skills or items.

Alderson (1996:219) poses that, by and large, most teachers prefer using integrative testing to discrete point type. He explains the fact that basically the teachers ei