TZA_2014_EQUIPIE-BL_v03_M
Education Quality Improvement Programme Impact Evaluation Baseline Survey 2014-2015
Name | Country code |
---|---|
Tanzania | TZA |
School survey (baseline)
The Education Quality Improvement Programme in Tanzania (EQUIP-T) Impact Evaluation Baseline Survey is the first of three rounds of the EQUIP-T impact evaluation and was conducted in 2014. The first follow-up survey (EQUIP-T Impact Evaluation Midline Survey) will be conducted in 2016 and the second and final follow-up survey (EQUIP-T Impact Evaluation Endline Survey) will be conducted in 2018. The EQUIP-T Impact Evaluation is designed and implemented by Oxford Policy Management Ltd.
The EQUIP-T Impact Evaluation is designed to measure the impact of the EQUIP-T programme over time on pupil learning, and on selected teacher behaviour and school leadership and management outcomes.
Sample survey data [ssd]
Version 2.3: Edited, anonymous dataset for public distribution.
2021-11
Version 2.3 consists of four edited and anonymised datasets (at school, teacher, pupil and lesson level) with the responses to a small number of questions removed (see 'List of Variables Excluded from the EQUIP-T IE Baseline Survey Dataset' provided under Technical Documents); these were removed due to data quality issues or because no or only incomplete records existed. The datasets also contain selected constructed indicators prefixed by n_. These constructed indicators are included to save data users time as they require complex reshaping and extraction of data from multiple sources (but they could be generated by data users if preferred). Note that the second version of the archived dataset did not include the data from the pupil learning assessment (which were kept confidential until the completion of the impact evaluation in 2020). This third version of the public datasets includes the data from the pupil learning assessment conducted at baseline. The archived pupil dataset and associated questionnaire 'EQUIP-T IE Pupil Background and Learning Assessment (PB) Baseline Questionnaire' have therefore been updated in this new version.
The following variables were added:
All variables from p_a1_1 to p_k6_021 (these are the variables related to the learning assessment that we had kept confidential at baseline)
The following variables that were constructed by the OPM analysis team were also added:
· perraschK
· n_p_perfbandK
· perraschM_miss
· n_p_perfbandM
· n_sc_povertyscore
· n_sc_belowpoverty
The weight variables are: 'weight_school' and 'weight_pupil'
Version history
Changes from version 1 to version 2 datasets
I. Dataset 1: bl_v2_2_school
The weight variable is: 'weight_school'
The following variables were modified:
The following variables were added:
The following variables were dropped:
II. Dataset 2: bl_v2_2_teacher
The following variables were modified:
III. Dataset 3: bl_v2_2_pupil
The following variables were modified:
IV. Dataset 4: bl_v2_2_lesson
No variables were added or dropped
The following variables were modified:
The scope of the EQUIP-T IE Baseline Survey includes:
-HEAD TEACHER/HEAD COUNT/SCHOOL RECORDS: Head teacher background information, qualifications, frequency/type of school planning/management in-service training received, availability and contents of whole school development plan, existence and types of teacher performance rewards, frequency of staff meetings, district and ward supervision and support to the school, head teacher motivation, teacher attendance (from school records and by headcount on the day of the survey), teacher punctuality, pupil attendance (from school records and by headcount on the day of the survey), pupil enrolment, availability of different types of school records, school characteristics, infrastructure and funding.
-PUPIL: Pupil background information, Kiswahili Early Grade Reading Assessment (EGRA) and Early Grade Mathematics Assessment (EGMA) based on standards 1 and 2 national curriculum requirements. Note: The same pupils were assessed in both Kiswahili and mathematics.
-PARENT: Information on household characteristics, household assets, mother's literacy and writing.
-TEACHERS WHO TEACH STANDARDS 1-3 KISWAHILI AND/OR MATHEMATICS: Interview including background information, qualifications, frequency/type of in-service training received, frequency/nature of lesson observation and nature of feedback, frequency/nature of performance appraisal and teacher motivation.
-TEACHERS WHO TEACH STANDARDS 1-3 KISWAHILI: Kiswahili subject knowledge assessment (teacher development needs assessment) based on the primary school Kiswahili curriculum standards 1 to 7 but with limited materials from standards 1 and 2.
-TEACHERS WHO TEACH STANDARDS 1-3 MATHEMATICS: Mathematics subject knowledge assessment (teacher development needs assessment) based on the primary school mathematics curriculum standards 1-7 but with limited materials from standards 1 and 2.
-TEACHERS WHO TEACH STANDARDS 4-7 MATHEMATICS: Mathematics subject knowledge assessment (teacher development needs assessment) based on the primary school mathematics curriculum standards 1-7 but with limited materials from standards 1 and 2.
-LESSON OBSERVATION: Standard 2 Kiswahili and mathematics lesson observations of inclusive behaviour of teachers with respect to pupil gender, key teacher behaviours in the classroom, availability of lesson plan and availability of seating, textbooks, exercise books, pens/pencils during the lesson.
Topic | Vocabulary |
---|---|
Education | World Bank |
Primary Education | World Bank |
The survey is representative of the 17 EQUIP-T programme treatment districts. The survey is NOT representative of the eight control districts. For more details see the section on Representativeness, and OPM. 2015. EQUIP-Tanzania Impact Evaluation: Final Baseline Technical Report, Volume I: Results and Discussion; and OPM. 2015. EQUIP-Tanzania Impact Evaluation: Final Baseline Technical Report, Volume II: Methods and Technical Annexes.
Treatment districts (17):
-Dodoma Region: Bahi DC, Chamwino DC, Kongwa DC, Mpwapwa DC
-Kigoma Region: Kakonko DC, Kibondo DC
-Shinyanga Region: Kishapu DC, Shinyanga DC
-Simiyu Region: Bariadi DC, Bariadi TC, Itilima DC, Maswa DC, Meatu DC
-Tabora Region: Igunga DC, Nzega DC, Sikonge DC, Uyui DC
Control districts (8):
-Arusha Region: Ngorongoro DC
-Mwanza Region: Misungwi DC
-Pwani Region: Rufiji DC
-Rukwa Region: Nkasi DC
-Ruvuma Region: Tunduru DC
-Singida Region: Ikungi DC, Singida DC
-Tanga Region: Kilindi DC
Name |
---|
Oxford Policy Management Ltd |
Name |
---|
Department for International Development UK |
Because the EQUIP-T regions and districts were purposively selected (see OPM. 2015. EQUIP-Tanzania Impact Evaluation: Final Baseline Technical Report, Volume I: Results and Discussion.), the IE sampling strategy used propensity score matching (PSM) to: (i) match eligible control districts to the pre-selected and eligible EQUIP-T districts (see below), and (ii) match schools from the control districts to a sample of randomly sampled treatment schools in the treatment districts. The same schools will be surveyed for each round of the IE (panel of schools) and standard 3 pupils will be interviewed at each round of the survey (no pupil panel).
Eligible control and treatment districts were those not participating in any other education programme or project that might confound the measurement of EQUIP-T impact. To generate the list of eligible control and treatment districts, all districts that were contaminated by other education programmes or projects, or that might be affected by programme spill-over, were excluded as follows:
-All districts located in Lindi and Mara regions as these are part of the EQUIP-T programme, but the impact evaluation does not cover these two regions;
-Districts that will receive partial EQUIP-T programme treatment or will be subject to potential EQUIP-T programme spill-overs;
-Districts that are receiving other education programmes/projects that aim to influence the same outcomes as the EQUIP-T programme and would confound measurement of EQUIP-T impact;
-Districts that were part of pre-test 1 (two districts); and
-Districts that were part of pre-test 2 (one district).
To be able to select an appropriate sample of pupils and teachers within schools and districts, the sampling frame consisted of information at three levels:
-District level;
-School level; and
-Within school level.
The sampling frame data at the district and school levels was compiled from the following sources: the 2002 and 2012 Tanzania Population Censuses, Education Management Information System (EMIS) data from the Ministry of Education and Vocational Training (MoEVT) and the Prime Minister's Office for Regional and Local Government (PMO-RALG), and the UWEZO 2011 student learning assessment survey. For within school level sampling, the frames were constructed upon arrival at the selected schools and were used to sample pupils and teachers on the day of the school visit.
Because the treatment districts were known, the first step was to find sufficiently similar control districts that could serve as the counterfactual. PSM was used to match eligible control districts to the pre-selected, eligible treatment districts using the following matching variables: Population density, proportion of male headed households, household size, number of children per household, proportion of households that speak an ethnic language at home, and district level averages for household assets, infrastructure, education spending, parental education, school remoteness, pupil learning levels and pupil drop out.
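The district-level matching step can be sketched in Stata along the following lines. This is a minimal illustration only, not the OPM team's actual implementation: the file name and all variable names (equipt, popdensity, malehead, hhsize, childrenperhh, assetindex) are hypothetical placeholders, and the full specification is documented in the baseline technical report, Volume II.

```stata
* Hypothetical district-level frame with a treatment indicator (equipt)
* and matching covariates; names are placeholders, not the real variables.
use districts_frame, clear

* Estimate the propensity score with a logit model.
logit equipt popdensity malehead hhsize childrenperhh assetindex
predict pscore, pr

* One-to-one nearest-neighbour matching of control to treatment districts
* without replacement (psmatch2 is user-written: ssc install psmatch2).
psmatch2 equipt, pscore(pscore) neighbor(1) noreplacement
```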
In the second stage, schools in the treatment districts were selected using stratified systematic random sampling. The schools were selected using a probability proportional to size approach, where the measure of school size was the standard two enrolment of pupils. This means that schools with more pupils had a higher probability of being selected into the sample. To obtain a representative sample of programme treatment schools, the sample was implicitly stratified along four dimensions:
-Districts;
-PSLE scores for Kiswahili;
-PSLE scores for mathematics; and
-Total number of teachers per school.
As in stage one, a non-random PSM approach was used to match eligible control schools to the sample of treatment schools. The matching variables were similar to the ones used as stratification criteria: Standard two enrolment, PSLE scores for Kiswahili and mathematics, and the total number of teachers per school.
The midline and endline surveys will be conducted for the same schools as the baseline survey (a panel of schools). However, the IE will not have a panel of pupils as a pupil only attends standard three once (unless repeating). Thus, the IE will have a repeated cross-section of pupils in a panel of schools.
In the final stage, pupils and teachers were sampled within schools using systematic random sampling (SRS) based on school registers. Per school, 15 standard 3 pupils were sampled. In addition, teachers were sampled at each school: Up to three teachers who teach standards 1-3 mathematics, up to three teachers who teach standards 1-3 Kiswahili, and up to three teachers who teach mathematics to standards 4-7. Often teachers in standards 1-3 teach both Kiswahili and mathematics. If this was the case, these teachers were sampled for both the Kiswahili and mathematics teacher development needs assessment (TDNA). The within school sampling was assisted by selection tables automatically generated within the computer assisted personal interviewing (CAPI) survey instruments.
If a selected school could not be surveyed it was replaced. In the process of sampling, the impact evaluation team drew a replacement sample of schools, which was used for this purpose (reserve list), and the use of this list was carefully controlled. Five out of the 200 original sample schools were replaced during the fieldwork (see Annex E in OPM. 2015. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes).
The actual sample sizes are as follows:
-200 schools (100 treatment and 100 control) surveyed.
-2,987 standard 3 pupils were assessed in both Kiswahili and mathematics.
-2,893 poverty scorecards were administered to the assessed pupils' parent(s).
-681 teachers who teach standards 1 to 3 Kiswahili and/or mathematics were interviewed.
-510 teachers who teach standards 1 to 3 Kiswahili were administered the Kiswahili teacher development needs assessment (TDNA).
-505 teachers who teach standards 1 to 3 mathematics were administered the mathematics TDNA.
-564 teachers who teach standards 4-7 mathematics were administered the mathematics TDNA.
-397 standard 2 Kiswahili and mathematics lessons were observed.
For intended sample sizes and response rates see section on Response rates.
The results from the treatment schools are representative of government primary schools in the 17 EQUIP-T programme treatment districts. However, the results from the schools in the eight control districts are NOT representative because these districts were not randomly sampled but matched to the 17 treatment districts using propensity score matching.
In general the unit response rates were high: 200 schools (100% of the intended sample, but with five replacements) were surveyed; 2,987 standard 3 pupils were assessed (99.6% of the intended sample); and poverty scorecards were administered for the parents of 2,893 of these pupils (96.4% of the intended sample).
In reality, many schools had three or fewer teachers who teach standards 1-3, and therefore the theoretical power calculations were conducted assuming two and three teachers per school for minimum intended sample sizes of 400 (small sample size) and 600 (large sample size) respectively (for details see OPM. 2015. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes). The actual number of interviews with teachers of standards 1-3 was 681.
For the teacher development needs assessments (TDNA) in Kiswahili and mathematics, 510 teachers who teach standards 1-3 Kiswahili (85% of the intended large sample size), 505 teachers who teach standards 1-3 mathematics (84% of the intended large sample size), and 564 teachers who teach standards 4-7 mathematics (94% of the intended large sample size) completed the TDNA for the appropriate subject. Non-response was primarily due to there being fewer than three teachers at the school (either because it was a very small school or because of teacher absence). Only in a few rare cases did a teacher refuse to complete the assessment.
The observed lessons were not sampled. Out of the intended 400 lesson observations (2 per school), 397 were conducted (99% of the intended sample).
Item response rates were generally high. Exceptions include: actual school open days (11% missing); pupil age, which was self-reported by pupils (16% missing); and capitation grant payments per pupil expected in 2012 and 2013 (32% missing).
For the intended and actual number of observations for the indicators presented in the Results section of the EQUIP-T Impact Evaluation Baseline Report, Volume I: Results and Discussion, see Annex H in OPM. 2015. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes.
The survey is only representative of the EQUIP-T programme treatment area (see section on Representativeness) and therefore survey weights were only constructed for schools, pupils and teachers in the treatment group (not for the control group).
To obtain results that are representative of the EQUIP-T programme treatment areas, treatment estimates should be weighted using the provided survey weights that are normalised values of the inverse probabilities of selection into the sample for each unit of analysis. The relevant probabilities of selection differ depending on whether analysis is carried out at school, pupil or teacher level, and survey weights for each of these units of analysis are included in the datasets.
The probability of being selected of each school depended on the total number of schools being selected and its size relative to the total number of enrolled pupils across all schools in the programme areas. Formally, the probability of a given school being selected into the sample equals the total number of schools sampled multiplied by the ratio of the number of pupils in the given school and the total number of pupils in all schools in the relevant programme areas. The school weights are appropriately normalised inverses of these probabilities.
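Restated in notation (an illustrative sketch of the paragraph above, with symbols introduced here for convenience): let $n$ be the number of schools sampled, $E_s$ the standard 2 enrolment of school $s$, and $E$ the total standard 2 enrolment across all eligible schools in the relevant programme areas. Then

```latex
\pi_s = n \cdot \frac{E_s}{E}, \qquad w_s \propto \frac{1}{\pi_s},
```

where the constant of proportionality in the school weight $w_s$ reflects the normalisation mentioned above.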
Note: See section on Weighting for the strata, weights and finite population correction factor variables included in the dataset.
15 standard 3 pupils were randomly sampled at each school. The probability of selection of a pupil in a given school equals the school selection probability (defined above) multiplied by the ratio of the number of pupils selected per school (15 in all schools except those with fewer than 15 pupils present on that day) and the total number of eligible pupils in the given school. The pupil weights are appropriately normalised inverses of these probabilities.
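In the same notation (again an illustrative sketch, with $M_s$ the number of eligible standard 3 pupils in school $s$):

```latex
\pi_{ps} = \pi_s \cdot \frac{\min(15,\, M_s)}{M_s}
```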
Note: See section on Weighting for the strata, weights and finite population correction factor variables included in the dataset.
The probability of selection of a teacher in a given school equals the school selection probability (defined above) multiplied by the ratio of the number of teachers selected for a given teacher instrument per school and the total number of teachers eligible for the given instrument. The teacher weights are appropriately normalised inverses of these probabilities.
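Analogously, with $k_s$ the number of teachers selected for a given instrument in school $s$ and $K_s$ the number of teachers eligible for it:

```latex
\pi_{ts} = \pi_s \cdot \frac{k_s}{K_s}
```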
NOTE:
-For data from the teacher interviews the teacher interview weights should be used: weight_tchint.
-For data from the teacher development needs assessment (TDNA) the teacher TDNA weights should be used: weight_tdna.
-For data from the teacher roster the teacher roster weights should be used: weight_teacherroster. Since all teachers in each school are included in the roster, this means that the selection probability for each teacher is equal to one in this case.
Note that the teacher interview and teacher TDNA weights, weight_tchint and weight_tdna respectively, are identical. In principle, teacher level weights should vary depending on the type of teacher being considered. For example, for the teacher development needs assessment (TDNA) in Kiswahili, the probability of a teacher being selected into the sample would depend on the total number of Kiswahili teachers in standards 1-3 per school, and for the TDNA in mathematics, the probability of a teacher being selected would depend on the number of mathematics teachers in the appropriate standards per school. However, calculating different teacher level weights for each type of teacher would make the analysis set-up complex and more prone to errors. Moreover, since the total number of eligible teachers relative to the number of teachers sampled is relatively high in most schools, weights would not vary substantially across the different teacher types. Therefore, only one teacher level weight was calculated that takes into account the size of each school in terms of teachers that could be selected into either of the two TDNAs. Two separate variables are nevertheless provided because at midline the teacher interview weights differ from the teacher TDNA weights (no sampling took place for teacher interviews at midline), and keeping two weight variables makes the rounds consistent.
Note: See section on Weighting for the strata, weights and finite population correction factor variables included in the dataset.
The survey weights should be used within a survey set-up that takes into account stratification, clustered sampling and finite population corrections.
Stratification during sampling was used at the primary sampling level, that is, at school level, and not at the lower levels (pupil and teacher). For the estimation set-up, strata for schools are defined by districts and teacher-body size terciles. Although, during sampling, schools were implicitly stratified by primary school leaving examination (PSLE) scores as well, this is a continuous variable that cannot be used to define strata in the estimation set-up.
Clustering is only relevant for pupil and teacher level data, as schools were the primary sampling units within the eligible programme treatment districts. Pupil data is also hierarchical in nature, with pupils clustered within schools. Hence, for pupil and teacher estimates, clustering is set at the school level.
Because large proportions of the total eligible population were sampled in many schools at the teacher and pupil levels, the estimation set-up should also account for the finite population correction (FPC) factor. This FPC factor is the square root of the ratio of (the population from which the sample is drawn minus the sample size) to (the population from which the sample is drawn minus one). In the case of school level data, the FPC factor is constant across all schools, as the sample of schools was drawn from a constant population of all eligible schools in the programme treatment areas. However, for teacher and pupil level data, the FPC factor changes depending on the school, as population sizes and, in the case of teacher level data, sample sizes vary as well.
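In symbols, with $N$ the population from which the sample is drawn and $n$ the sample size:

```latex
\mathrm{FPC} = \sqrt{\frac{N - n}{N - 1}}
```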
In the EQUIP-T IE datasets the stratification, weight, FPC and treatment status variables are as follows (a survey set-up sketch in Stata follows the list):
-The strata variable is: strata
-The school weights variable is: weight_school
-The school finite population correction factor is: fpc_school
-The pupil weight variable is: weight_pupil
-The pupil finite population correction factor is: fpc_pupil
-The teacher interview weight variable is: weight_tchint
-The teacher interview finite population correction factor is: fpc_tchint
-The teacher development needs assessment (TDNA) weight variable is: weight_tdna
-The teacher development needs assessment (TDNA) finite population correction factor is: fpc_tdna
-The teacher roster weight variable is: weight_teacherroster
-The teacher roster finite population correction factor is: fpc_teacherroster
-The treatment status variable is: treatment where 0=control school and 1=treatment school.
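The estimation set-up can be illustrated with Stata's svy machinery. This is a minimal sketch only: school_id is a hypothetical cluster identifier, the example outcome is the constructed Kiswahili score perraschK listed earlier, and the analysis is restricted to treatment schools because the weights exist only for the treatment group. The exact declaration used by the OPM team (including whether fpc_pupil enters at the declared stage or at a second stage) is documented in the technical report.

```stata
* School-level analysis: schools are the primary sampling units.
use bl_v2_2_school, clear
keep if treatment == 1          // weights exist only for treatment schools
svyset school_id [pweight=weight_school], strata(strata) fpc(fpc_school)

* Pupil-level analysis: clustering at school level, pupil weights and FPC.
use bl_v2_2_pupil, clear
keep if treatment == 1
svyset school_id [pweight=weight_pupil], strata(strata) fpc(fpc_pupil)
svy: mean perraschK             // example: weighted mean Kiswahili score
```

For teacher-level analysis the same pattern applies with weight_tchint/fpc_tchint, weight_tdna/fpc_tdna or weight_teacherroster/fpc_teacherroster, as appropriate.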
The EQUIP-T IE administered eight different questionnaires in the sampled primary schools using Computer-Assisted Personal Interviewing (CAPI) except for the teacher development needs assessment (TDNA) questionnaires, which were administered on paper as these take the form of mock pupil tests in Kiswahili and mathematics.
-HEAD TEACHER INTERVIEW (EQUIP-T IE head teacher (HT) questionnaire): Head teacher background information, qualifications, frequency/type of school planning/management in-service training received, availability of different types of school records, pupil enrolment, school start and closing time, number of streams, use of shifting, school timetable, availability and contents of whole school development plan, school committee, parent-teacher association, existence and types of teacher performance rewards, financing, frequency of staff meetings, district and ward supervision and support to the school, head teacher motivation, school infrastructure, teacher sampling and pupil sampling.
-HEAD COUNT (EQUIP-T IE head count (HC) questionnaire): Teacher attendance by headcount on the day of the survey, teacher punctuality and pupil attendance (by headcount on the day of the survey), and school infrastructure.
-SCHOOL RECORDS (EQUIP-T IE school records (SR) questionnaire): Teacher attendance from school records, pupil attendance and enrolment, school open days.
-PUPIL BACKGROUND (EQUIP-T IE pupil background and learning assessment (PB) questionnaire): Pupil background information, Kiswahili Early Grade Reading Assessment (EGRA) and Early Grade Mathematics Assessment (EGMA) based on standards 1 and 2 national curriculum requirements. Note: The same pupils were assessed in both Kiswahili and mathematics.
-POVERTY SCORECARD (EQUIP-T IE poverty scorecard (PS) questionnaire): Information on household characteristics, household assets, mother's literacy and writing. Note: The respondent was a sampled pupil's parent(s), but if no parent was present the instrument was administered to the pupil instead.
-TEACHERS WHO TEACH STANDARD 1-3 KISWAHILI AND/OR MATHEMATICS INTERVIEW (EQUIP-T IE teacher interview (TI) questionnaire): Teacher background information, qualifications, frequency/type of in-service training received, frequency/nature of lesson observation and nature of feedback, frequency/nature of performance appraisal and teacher motivation.
-TEACHERS WHO TEACH STANDARD 1-3 KISWAHILI (EQUIP-T IE teacher development needs assessment (TDNA) questionnaire): Kiswahili subject knowledge assessment (teacher development needs assessment) based on the primary school Kiswahili curriculum standards 1 to 7 but with limited materials from standards 1 and 2.
-TEACHERS WHO TEACH STANDARD 1-3 MATHEMATICS (EQUIP-T IE teacher development needs assessment (TDNA) questionnaire): Mathematics subject knowledge assessment (teacher development needs assessment) based on the primary school mathematics curriculum standards 1-7 but with limited materials from standards 1 and 2.
-TEACHERS WHO TEACH STANDARD 4-7 MATHEMATICS (EQUIP-T IE teacher development needs assessment (TDNA) questionnaire): Mathematics subject knowledge assessment (teacher development needs assessment) based on the primary school mathematics curriculum standards 1-7 but with limited materials from standards 1 and 2.
-LESSON OBSERVATION (EQUIP-T IE lesson observation (LO) questionnaire): Standard 2 Kiswahili and mathematics lesson observations of inclusive behaviour of teachers with respect to pupil gender, key teacher behaviours in the classroom, availability of lesson plan and availability of seating, textbooks, exercise books and pens/pencils during the lesson.
To develop the quantitative survey questionnaires the OPM impact evaluation team collected and reviewed relevant existing survey questionnaires, discussed with national and international experts the different types of questionnaires, visited schools during the November 2013 and January 2014 missions, worked with a national team of experts to develop the pupil assessment and the teacher development needs assessments (TDNA), and conducted three pre-tests of the questionnaires (for more details see section 3.3.2 in OPM. 2015. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes). In regard to the development of the questionnaires it should be noted that the Reference Group for the EQUIP-T IE was provided with penultimate drafts of the survey questionnaires and its comments guided the questionnaire revisions and finalisation.
The IE administered to standard 3 pupils an early grade reading assessment (EGRA) and early grade mathematics assessment (EGMA) style questionnaire developed in Tanzania specifically for the IE. These questionnaires capture core standard 1 and 2 curriculum skills in Kiswahili literacy (reading, reading comprehension, listening comprehension and writing) and in mathematics. The learning assessment questionnaires developed for the IE will be used to test standard 3 pupils in 2016 and 2018 as well.
---Testing the same pupils in both subjects---
The IE initially considered assessing different pupils in the two subjects (Kiswahili and mathematics) to avoid pupil fatigue and to ensure that the maximum assessment administration time per pupil would be 45 minutes given the young age of the pupils. However, based on feedback from the Reference Group for the Impact Evaluation and on the findings of pre-tests 1 and 2, the IE design was revised to test the same standard 3 pupil in both subjects while ensuring a feasible test administration time.
---Item difficulty---
Given the relatively young age of the pupils and the type of learning assessment, the IE administered the pupil learning assessments one-on-one (one enumerator testing one pupil at a time). Following the learning assessment development in late December 2013, the draft (paper) pupil assessment instruments were pre-tested to ensure that each item was testing the appropriate curriculum level (standards 1 and 2) and to assess whether the difficulty range of items allows for discrimination across a range of pupil performance. For the latter aspect, the pre-test schools were purposively selected to include both high and low performing schools. After the first pre-test, analysis was conducted to determine item difficulty, functioning and reliability, and some items were discarded based on this analysis. The revised pupil learning assessments were then pre-tested again (using CAPI) and further revised, and were pre-tested a third time and revised further during the enumerator training and pilot, before finalisation.
To collect household characteristics and data to be used to construct poverty measures, the OPM impact evaluation team considered two approaches. First, an asset approach, where pupils are asked about their home, for example, "Where do you normally get your water from at home?", and responses are used to construct an asset index. Second, a poverty scorecard approach, where parents are asked to come to the school and answer questions about household characteristics such as "How many tables does your household own?" (proposed by the Reference Group for the Impact Evaluation). Both approaches were pre-tested by the OPM team and showed discrepancies between the answers provided by pupils and their parents to the same questions. This result, combined with the fact that a poverty scorecard with a mapping to poverty likelihoods already existed for Tanzania, led the OPM impact evaluation team to select the poverty scorecard approach with parents as the respondents.
The teacher development needs assessment (TDNA) measures teachers' primary school subject knowledge in Kiswahili and mathematics. The investigation of suitable questionnaires was guided partly by the possibility that teachers could perceive an assessment covering the pupil curriculum as threatening, and partly by the fact that assessments measuring teacher subject knowledge based on the pupil curriculum had not been conducted in Tanzania before. Accordingly, the OPM impact evaluation team decided to use a format that is an anonymous, non-threatening mock pupil test that teachers mark. This type of questionnaire has been used successfully in other countries, for instance Nigeria. The IE baseline TDNA questionnaire is based on the format of a TDNA developed by Dr David Johnson at Oxford University for DFID-Nigeria. The OPM impact evaluation team worked with the same national team that developed the pupil learning assessments to develop the TDNA. This work was guided by the Test Specification Framework designed by the OPM impact evaluation team, and the questionnaires were revised based on the pre-test findings.
---Groups of teachers selected for the TDNA---
The TDNA questionnaires developed for the IE are used to assess the subject knowledge of teachers who teach Kiswahili and mathematics to standards 1-3 and of teachers who teach mathematics to standards 4-7. The reason for assessing these groups of teachers is that the EQUIP-T programme will provide: Kiswahili in-service teacher training (INSET) for standard 1-3 curriculum levels to standard 1-3 teachers (programme phase 1); mathematics INSET for standard 1-3 curriculum levels to standard 1-3 teachers (programme phase 1); and mathematics INSET for standard 4-7 curriculum levels to standard 4-7 teachers (programme phase 2).
---Item difficulty---
The TDNA questionnaires were pre-tested to ensure that they cover the whole primary school curriculum (only a minimal number of items for the standards 1 and 2 curriculum levels were included as these items are very basic), and that the difficulty range of items allows for discrimination across a range of teacher performance, to avoid potential ceiling and floor effects. After the first pre-test, analysis was conducted to determine item difficulty, functioning and reliability, and some items were replaced based on this analysis. The revised TDNA questionnaires were pre-tested a second time using CAPI and further revised, pre-tested a third time, and revised further during the enumerator training, before finalisation.
To develop the questionnaire the OPM impact evaluation team reviewed existing lesson observation questionnaires including the work done in Tanzania by UNICEF. There are three choices which have been shown to work well in the Tanzanian context: (i) frequency observation; (ii) timeline analysis; (iii) computerised observation schedule (gives frequency and duration information). Based on observations from the school visits during the November 2013 scoping mission and pre-test 1, the OPM impact evaluation team chose the frequency observation option for the lesson observation questionnaire.
The first pre-test (January 2014) tested the initial set of draft questionnaires, before these were programmed into CAPI, and took place in eight primary schools in two districts. This pre-test was conducted by a national team consisting of Kiswahili and mathematics specialists from the University of Dar es Salaam, primary school teachers and an expert who had previously developed pupil learning questionnaires for the UWEZO learning assessment and the Kiufunza project in Tanzania, accompanied by two OPM impact evaluation team members. The questionnaires were revised based on the pre-test 1 findings. The aims of this pre-test were to:
-Assess whether the difficulty range of the items in the draft pupil learning assessment corresponds to the Tanzania primary 1 and 2 curriculum levels for Kiswahili and mathematics;
-Test whether the draft pupil learning assessment items discriminate adequately among pupils;
-Assess whether the difficulty range of the draft items in the draft TDNAs corresponds to the Tanzania primary school curriculum for Kiswahili and mathematics;
-Test whether the draft TDNA items discriminate adequately among teachers;
-Test appropriate time of administration for the pupil learning assessments and TDNAs; and
-Test accuracy of all questionnaires in terms of precision and clarity of wording, translation, answer categories etc.
The second pre-test (February 2014) was conducted in four schools in one region by two teams consisting of local enumerators, OPM impact evaluation team members and OPM Tanzania survey team members. The aims of this pre-test were to:
-Further test accuracy of the questionnaires in terms of precision and clarity of wording, translation, answer categories etc.;
-Test technical aspects of administering the questionnaires using CAPI;
-Test questionnaires and procedure from an education perspective;
-Test and improve fieldwork protocols and logistics;
-Learn about the administration time of the questionnaires for fieldwork planning purposes;
-Assess adequacy of materials and equipment;
-Learn about potential challenges in the field;
-Gather field experience to improve training delivery;
-Further test appropriate time of administration for the pupil learning assessments; and
-Test whether it is appropriate to administer both assessments (Kiswahili and maths) to the same pupil.
The third and final pre-test was conducted on March 3-5, 2014 by a team of OPM impact evaluation team members, OPM Tanzania survey team members and local enumerators to test the revised CAPI software, gain further insight for the enumerator training and collect additional information to inform the baseline fieldwork planning.
The questionnaires were further revised, in particular translations, during the enumerator training in March, 2014.
The complete questionnaires are provided under External Resources, except for the pupil learning assessment and TDNA instruments, for which the assessment items (questions) have been removed as these were confidential until the completion of the EQUIP-T impact evaluation. However, a summary note describing the learning domains assessed by these two instruments is provided under External Resources.
Start | End | Cycle |
---|---|---|
2014-03-27 | 2014-05-13 | Baseline |
Start | End | Cycle |
---|---|---|
2014-03 | 2014-05 | Baseline |
Name |
---|
Oxford Policy Management Ltd |
Each baseline field team consisted of one team supervisor, two enumerators and one driver. Each team used a four-wheel drive vehicle to travel from school to school.
The role of the field supervisor was to manage the field team, supplies and equipment, maps and listings, and coordination with local authorities and head teachers, and to make arrangements for accommodation and transport. The supervisor also spot checked enumerators' work, kept field control documents, and supervised the sending of the data files to the OPM Tanzania office for further validation and checking.
Several additional supervision and quality assurance (QA) mechanisms were put in place for the duration of the data collection:
-Supervision by members from the OPM Tanzania survey team of all field teams during the initial fieldwork period;
-Supervision of all field teams by an external survey expert (the fieldwork QA supervisor), and assistant survey manager during the second week of fieldwork, and fieldwork supervision by the fieldwork QA supervisor throughout the duration of the data collection;
-Two additional training sessions were held over weekends for field teams that were identified as needing extra support;
-For the duration of the fieldwork one OPM staff member worked full time, travelling between the twelve regions, to supervise the field teams;
-Additional QA visits as needed from the OPM Tanzania survey manager and assistant survey manager to support relatively weaker teams; and
-Continuous support and feedback from the OPM Tanzania survey team via phone and email.
The use of computer assisted personal interviewing (CAPI) techniques for the data collection meant that checks and flags could be built into the CAPI software to prevent enumerators from entering inadmissible data values and to allow in-the-field validation of each questionnaire section during the interviews.
Moreover, the use of CAPI meant that the raw data was received almost immediately from the field by the OPM Tanzania survey team in Dar es Salaam so that identified errors could be corrected while the teams were still in the field. Every two days the field teams sent their data files back to the OPM Tanzania survey team and the assistant data manager validated and checked the quality of the data received. This included:
-Verifying that the correct number of data files had been received;
-Opening each file in the custom-designed CAPI software to validate the data according to the built-in checks and flags; and
-Communicating to team supervisors how to correct any detected data issues.
The EQUIP-T IE baseline survey was conducted by OPM's Tanzania office working closely with the OPM impact evaluation team that designed the IE and conducted the baseline analysis. For full details on the data collection see Annex E in OPM. 2015. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes.
To recruit enumerators and team supervisors with the appropriate background and skills, priority was given to candidates with prior experience conducting education surveys and/or a background in teaching. A good understanding of the English language was an essential requirement. 51 people were trained to ensure selection of enumerators and team supervisors with appropriate skills and to provide a reserve pool to select from if needed during the fieldwork. The final field teams were selected at the end of the enumerator training based on: active participation during the training; ability to follow the fieldwork procedures and administer interviews during the training practice days; ability to use computer assisted personal interviewing techniques (CAPI); and availability for the entire fieldwork duration. 45 enumerators and team supervisors were selected.
Each baseline field team consisted of:
-One team supervisor;
-Two enumerators (of which one was an educator responsible for conducting the lesson observations); and
-One driver.
Training of enumerators for the fieldwork was conducted in Morogoro region on March 13-22, 2014, led by a team comprising OPM impact evaluation team members and the OPM Tanzania office survey team.
The training was split into three parts:
-Training of team supervisors and enumerators on how to conduct the head teacher interview (conducted by supervisors) and the lesson observation (conducted by educators);
-Training of all enumerators in using the CAPI instruments and tablets (including the in-the-field data validation built into CAPI); data management procedures; overall fieldwork procedures and the instruments (excluding the head teacher interview and lesson observation); and
-A two-day pilot in Singida region.
The training had two modalities: classroom-based training sessions interspersed with field practice. During the classroom-based sessions each question in each of the instruments was presented and procedures explained. Feedback from the enumerators was collected daily by the OPM team and addressed to further improve the instrument translations and the enumerator manual, and to clarify procedures as needed. The instruments covered during each classroom-based session were tested in the field on the following day(s) to practice what had been learnt in the classroom. OPM team members supervised the field teams during the field practice and debriefs were held afterwards.
The pilot was conducted immediately after the enumerator training. The objective was to ensure that team supervisors learnt how to organise themselves and their team members in the field and that team supervisors and enumerators experienced a full two days in a school practicing how to collect all the required data within the allocated timeframe. During the first two days of fieldwork OPM survey team members supervised the field teams and held daily debriefs to discuss challenges faced and to ensure supervisors were able to send the collected data back to the OPM Tanzania office for checking.
To provide the field teams with a reference tool and clear directions during the data collection, an enumerator manual for the fieldwork implementation was developed by the OPM impact evaluation team and OPM Tanzania survey team. The enumerator manual covers codes of conduct, the use of CAPI, data validation and transfer procedures, instructions on sampling within schools as well as instrument descriptions and instructions on how to administer each instrument.
The fieldwork covered 25 districts in 12 regions in Tanzania (see section on Geographic coverage) and the sample consisted of 200 government primary schools.
The fieldwork model and execution required detailed planning. All schools had to be visited within a short timeframe from the end of March to mid-May, before the national examinations. The sampled schools had mid-term breaks on different dates depending on the region, as well as mid-term examinations leading up to the mid-term break. The fieldwork plan was kept flexible to allow for revision whenever an unscheduled event occurred, such as road damage, floods, or out-of-school sports events and other activities.
On average, it took one field team (see section on Field teams) two days to complete the data collection for one school and the interviews were conducted in Kiswahili.
For supervision and quality assurance during the data collection see section on Supervision.
All sampled schools, head teachers, teachers and pupils were uniquely identified by ID codes assigned either before the fieldwork (region, district and school IDs), or at the time of the school visit using automated tables in CAPI (teacher, lesson observation and pupil IDs). The first set of data checking activities included (using Stata):
-Checking of all IDs;
-Checking for missing observations;
-Checking for missing item responses where none should be missing; and
-First round of checks for inadmissible/out of range values.
This resulted in four edited datasets (school/head teacher level, pupil level, teacher level and lesson observation level) sent to the OPM impact evaluation team for further checking and analysis.
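The flavour of these first-round checks can be sketched in Stata as follows. This is an illustration only: the checks were run on the raw CAPI files rather than the edited datasets, and school_id, pupil_id and p_age are hypothetical variable names.

```stata
* Illustrative ID and range checks on a pupil-level file.
use bl_v2_2_pupil, clear
isid school_id pupil_id                 // IDs must uniquely identify rows
assert !missing(school_id, pupil_id)    // no missing ID codes

* Flag inadmissible/out-of-range values, e.g. implausible self-reported ages.
list school_id pupil_id p_age if !inrange(p_age, 5, 20) & !missing(p_age)
```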
The four edited datasets received from the OPM Tanzania survey team were subject to a second set of checking and cleaning activities. This included checking for out of range responses and inadmissible values not captured by the filters built into the CAPI software or the initial data checking process. This also involved recoding of non-responses due to the questionnaire design and rules of questionnaire administration for the pupil learning assessment and teacher development needs assessment.
A comprehensive data checking and analysis system was created, including a logical folder structure, a detailed data documentation guide and template syntax files (in Stata), to ensure that data checking and cleaning activities were recorded, and that all analysts used the same file and variable naming conventions, variable definitions and disaggregation variables, and applied weighted estimates appropriately.
Name |
---|
Oxford Policy Management Ltd |
Name | URL | Email |
---|---|---|
Oxford Policy Management Ltd | http://www.opml.co.uk/ | admin@opml.co.uk |
The data files have been anonymised and are available as a Public Use Dataset. They are accessible to all for statistical and research purposes only, under the following terms and conditions:
The original collector of the data, Oxford Policy Management Ltd, and the relevant funding agencies bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
Oxford Policy Management. Education Quality Improvement Programme in Tanzania Impact Evaluation Baseline Survey 2014, Version 2.3 of the public use dataset (December 2021).
The user of the data acknowledges that the original collector of the data, the authorised distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
(c) 2021, Oxford Policy Management Ltd
Name | Email | URL |
---|---|---|
Oxford Policy Management Ltd | admin@opml.co.uk | http://www.opml.co.uk/ |
DDI_TZA_2014_EQUIPIE-BL_v03_M
Name | Affiliation | Role |
---|---|---|
Springham, Stephi | Oxford Policy Management Ltd | Data Analyst |
Pettersson, Gunilla | Birk Consulting | Education Lead |
2021-12-02
Version 2.3 (December 2021). Edited version based on Version 2.1 (March 2017)
The following fields were updated: