NGA_2014_TDPITCIE-BL_v01_M
Teacher Development Programme In-Service Training Component Impact Evaluation 2014
Baseline Survey
Name | Country code |
---|---|
Nigeria | NGA |
Impact Evaluation Survey
The quantitative baseline survey for the phase 1 Impact Evaluation (IE) of the Teacher Development Programme (TDP) In-Service Training component is the first of two rounds, for which data collection took place in October 2014 - January 2015. At the time of writing, the endline follow-up survey is planned to begin in June 2018.
Sample survey data [ssd]
The Primary Sampling Units (PSUs) of the survey are TDP-eligible state primary schools, at which level some analysis is performed (for example, characteristics of schools and head teachers).
However, the main units of analysis are:
· Teachers (identified prior to the sampling of the PSUs, and teaching grades 1-3 in any of the three subjects: English, maths, or science)
· Pupils (in grade 3 at baseline, and taught English, maths or science by at least one of the 'selected' teachers)
· Lessons taught by the selected teachers (not sampled)
· TDNAs administered to the selected teachers (not sampled)
Please refer to the 'Sampling Procedure' section for more details.
Version 2.1: Edited, anonymised dataset for public distribution - first public version
2016-06-17
Version 2.1 is the first version of the dataset to be made publicly available. Since item responses from two of the survey instruments must remain confidential until completion of the impact evaluation, the corresponding variables have been removed from the dataset and the corresponding questionnaires are not included at this time. However, a number of indicators calculated from these variables have been included for the data user.
The survey administered five different instruments covering head teacher and teacher interviews, lesson observations, a Teacher Development Needs Assessment (TDNA) and a Grade 2 Pupil Assessment (administered to grade 3 pupils) of English literacy, numeracy and scientific literacy.
Both the Head Teacher and Teacher interviews covered a common set of themes. The Head Teacher interview included questions on teacher attendance drawn from school records, while the Teacher interview included self-reported absenteeism. The Head Teacher interview also covered additional school-level topics.
The lesson observation instrument covered classroom practice during lessons taught by the selected teachers.
The TDNA instrument (in English, maths, and science & technology) included questions assessing subject knowledge and the ability to measure and analyse pupil academic progress.
The pupil instrument covered English literacy, numeracy and scientific literacy at the grade 2 level.
Item responses for the Grade 2 Pupil Assessment and TDNA have not been included in this version of the public dataset due to a requirement to keep responses confidential until completion of the impact evaluation. The corresponding questionnaires are also not included at this time. However, a number of indicators calculated from these variables have been included for the data user (described further below).
Topic | Vocabulary |
---|---|
Education | World Bank |
Primary Education | World Bank |
Secondary Education | World Bank |
The survey was carried out in the TDP phase 1 states (Jigawa, Katsina and Zamfara) but the results are NOT representative at the state-level, i.e. state-level estimates do not represent the average situation in a given state.
The data are not representative at any geographical level.
The target populations (the groups for which one would like to generalise the study findings) are the schools eligible for the TDP in treatment and control groups in the three states, and the eligible teachers and pupils within these schools.
Please refer to the 'Sampling Procedure' section for more details on the definition of eligibility.
Name |
---|
Oxford Policy Management Ltd |
Name | Role |
---|---|
Department for International Development | Funding |
(1) Aim of sampling design
The aim of the sampling design was to define a valid counterfactual 'control' group against which comparisons could be made with a 'treatment' group that participates in the TDP. The control group would not participate in the TDP in-service training but would have background characteristics which are, on average, similar to those of the participating treatment group.
The sampling design of the IE was based on a quasi-experimental 'constrained randomisation' approach. 'Constrained randomisation' means that certain parameters of the IE were already fixed - for example, the Local Government Areas (LGAs) where the programme operates. In addition, pre-determined groups of schools fulfilling certain criteria (described below) would constitute the sampling frame - this is in contrast to a fully randomised design approach where one might expect the random drawing of groups (or clusters) of schools from a list of all state primary schools in the region under study.
Randomisation was conducted only in allocating groups of schools to 'treatment' or 'control' status.
The sample design was determined to a large extent by practical programme considerations, and also by the available budget.
(2) Construction of sampling frame: Eligible primary schools
The sampling frame was constructed from scratch through the stages described below. The intended size of the frame was 1008 primary schools eligible for the TDP (504 'treatment' schools and 504 'control' schools) and would constitute the target population (or universe) of eligible schools, from which a sample of treatment and control schools would be drawn for the survey.
Stage 1: Selection of LGAs
In each state, 14 LGAs where the programme would operate had already been pre-determined by the TDP as per arrangements with the States.
· Jigawa: 14 out of 27 LGAs
· Katsina: 14 out of 34 LGAs
· Zamfara: 14 out of 14 LGAs
Stage 2: Selection of sets of primary schools
In each of the 14 LGAs in each state, 2 sets of 12 eligible primary schools each were to be selected.
To be eligible for the TDP: (1) each school should have one head teacher and at least three other teachers; (2) each school should have at least 8 grade-3 pupils.
Schools within each set were identified according to geographical proximity in order to facilitate any training and periodic meetings of teachers within each set, and to create a broader peer network within the locality.
It was the intention that the two sets of schools within each LGA would be selected to be broadly similar. State Universal Basic Education Boards (SUBEBs) were responsible for the selection and were provided with guidelines to assist them, such as taking into account the location of the schools (urban/rural), the size of the schools in terms of classrooms and pupils, the presence of a School Based Management Committee (SBMC), and the state of school infrastructure. In the case of Jigawa, nearly all schools would have had exposure to the DFID-funded Education Sector Support Programme in Nigeria (ESSPIN). Therefore, care was taken to balance the level of exposure to ESSPIN across the pairs of sets in each LGA.
Stage 3: Selection of eligible teachers
Before the selection of the schools which would participate in the TDP, the Local Government Education Authority (LGEA) and the head teacher of each school in every set were required to identify three other teachers who would potentially receive TDP support in addition to the head teacher him/herself, based on the following criteria:
· Classroom teaching at early grade-level (grades 1-3); and
· Classroom teaching in any of the three subjects: English, maths, or science.
Stage 4: Random allocation of treatment/control sets
After receiving lists of school sets and teachers from the TDP coordinators, the IE team randomly assigned one set of schools among every pair of sets to TDP 'treatment' status using a random number generator. The other set would therefore be assigned 'control' status.
This would result in 14 x 3 = 42 'treatment' sets of 12 schools each (504 'treatment' schools in total) and correspondingly 42 'control' sets of 12 schools each (504 'control' schools in total). In 'treatment' schools, all head teachers and identified teachers in the previous stage would receive TDP support.
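The pair-wise allocation in this stage can be sketched in Python. This is a minimal illustration under stated assumptions - the LGA and set names are hypothetical, and the IE team's actual random number generator is not documented here:

```python
import random

def allocate_pairs(lga_pairs, seed=2014):
    """For each LGA, randomly assign one set of the pair to 'treatment';
    the other set in the pair becomes 'control'."""
    rng = random.Random(seed)
    allocation = {}
    for lga, (set_a, set_b) in lga_pairs.items():
        # Each set in the pair has a 50% chance of treatment status
        treatment, control = (set_a, set_b) if rng.random() < 0.5 else (set_b, set_a)
        allocation[lga] = {"treatment": treatment, "control": control}
    return allocation

# 14 LGAs per state x 3 states = 42 pairs of sets of 12 schools each
pairs = {f"LGA_{i:02d}": (f"set_{i:02d}A", f"set_{i:02d}B") for i in range(42)}
allocation = allocate_pairs(pairs)
treatment_schools = 12 * len(allocation)   # 504 'treatment' schools in total
```

Note that randomisation operates only at the level of sets, not individual schools, consistent with the constrained design described above.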
(3) Drawing of the samples for the baseline survey
Selection of schools
This was performed in one stage, using implicit stratification by state, LGA and treatment/control status. In other words, each set of 12 schools described above was considered a stratum (42 'treatment' sets and 42 'control' sets).
4 schools were randomly selected from each set.
This yielded an intended sample size of 14 x 4 = 56 treatment schools in each state, and correspondingly 56 control schools in each state.
Thus the total intended sample size across all 3 states was 56 x 3 = 168 treatment schools and correspondingly 168 control schools = grand total of 336 schools.
Selection of teachers
At each sampled school, the head teacher and the teachers identified during the construction of the sampling frame would be interviewed. Each selected teacher, and each head teacher who teaches, would also be observed while teaching a lesson. Following the completion of the school survey, the teachers and head teachers (irrespective of whether they teach) would be administered a teacher development needs assessment (TDNA) at an examination centre.
Thus the intended numbers of interviews, lesson observations, and TDNAs were as follows:
· Head teacher interviews: 336
· Head teacher lesson observations: up to 336
· Teacher interviews: 336 x 3 = 1008
· Teacher lesson observations: 336 x 3 = 1008
· TDNAs: 336 x 4 = 1344
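These intended counts follow directly from the design parameters; as a quick arithmetic check (in Python, purely for illustration):

```python
# Design parameters from the sampling plan
lgas_per_state = 14
schools_per_set = 4      # schools sampled from each set (stratum)
states = 3
arms = 2                 # treatment and control

schools = lgas_per_state * schools_per_set * states * arms   # 336
head_teacher_interviews = schools                            # 336 (one per school)
teacher_interviews = schools * 3                             # 1008 (3 teachers per school)
tdnas = schools * 4                                          # 1344 (3 teachers + head teacher)
```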
Selection of pupils
In order to assess pupil learning levels for this baseline survey, eight pupils would be randomly selected for a combined English, maths and scientific literacy learning assessment, from among all pupils who started grade 3 in September 2014 and who were being taught English, maths or science by at least one 'selected' teacher during that term.
The pupils would be drawn from a sampling frame consisting of all eligible grade 3 pupils present in school on the day of the survey recorded by data collectors, using a random number generator programmed into their Computer-Assisted Personal Interviewing (CAPI) software.
Thus the intended pupil sample size was 336 x 8 = 2688
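The within-school pupil draw can be sketched as follows. The roster is hypothetical and the actual CAPI implementation is not documented here; the sketch also reflects the rule that all pupils are taken when 8 or fewer are present:

```python
import random

def select_pupils(eligible_present, n=8, seed=1):
    """Randomly draw up to n pupils from the eligible grade 3 pupils present
    on the survey day; if n or fewer are present, all are selected."""
    rng = random.Random(seed)
    if len(eligible_present) <= n:
        return list(eligible_present)
    return rng.sample(eligible_present, n)

roster = [f"pupil_{i:02d}" for i in range(1, 31)]            # 30 eligible pupils present
assessed = select_pupils(roster)                             # 8 pupils drawn
small = select_pupils([f"pupil_{i}" for i in range(1, 6)])   # all 5 selected
```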
Panel component
It is planned that the same teachers and head teachers, and pupils who were surveyed at the baseline will be surveyed again at the endline in June 2018. This will allow measurement of the impact of the TDP on teacher effectiveness, and on pupil learning between grade 3 and grade 6. While the pupils who had recently started grade 3 at the time of the baseline survey in October 2014 were administered a grade 2 level learning assessment, they will be administered a grade 6 level assessment at the endline in 2018 (when they will be in grade 6) that will include a limited number of grade 2 level items to maintain direct comparability with the baseline.
(4) Non-response and replacement strategies
Schools
5 schools were found not to be eligible and were thus removed from the sampling frame. Reasons included:
· No eligible grade 3 pupils
· No teachers who teach grades 1-3 in English, maths, or science
· School found to be an IQTE school, or special school for children with disabilities
Replacements were made for these schools from the same set (stratum) from which they were drawn, albeit now with fewer than 12 schools.
In other cases, replacements were made for schools:
· Which were closed for the duration of the survey team's stay in the LGA; or
· For which there were security concerns.
Teachers
Three different scenarios of unavailability of 'selected' eligible teachers arose (scenarios 2 and 3 were due to outdated lists having been used to make the teacher selection):
(1) A 'selected' eligible teacher was not present on the day of the survey due to short-term absence - data collectors attempted to re-visit the school at a later date.
(2) A sampled school was found to be very small and had fewer than 4 (but at least 1) eligible teacher(s) - all were interviewed (if possible).
(3) A 'selected' eligible teacher was on long-term absence, had been transferred elsewhere, had died or was unidentified. After consultation with the programme it was decided that data collectors would ask the head teacher to name a replacement teacher as per the selection criteria.
Lesson observations and TDNAs
Replacements were not possible.
The number of lesson observations/TDNAs would be reduced according to the number of 'selected' eligible teachers on short-term absence who could not be revisited, or where teacher replacements could not be made.
In addition, head teachers/teachers were not always available to have all three instruments (interview, lesson observation and TDNA) administered to them.
The head teachers and all selected teachers from 7 of the sampled schools did not show up at the examination centres for the TDNA.
Pupils
If a school was found to have only 1-8 eligible grade 3 pupils present on the day of the survey, all were selected for assessment on the day.
Reasons for less-than-full response are expected to affect treatment and control clusters equally, and thus are unlikely to compromise the randomisation design. However, the sample as a whole might be subject to selectivity bias if the schools, teachers and pupils who were ultimately included are systematically different from the rest of the population of TDP treatment and control schools, teachers and pupils.
School
Final sample = 330 (Intended sample = 336)
Head teacher (interview)
Final sample = 330 (Intended sample = 336)
Teacher (interview)
Final sample = 908 (Intended sample = 1008)
Lesson observation and TDNA
Lesson observations (teachers and head teachers who teach): Final sample = 1070 (Intended sample = 1344)
TDNAs (teachers and head teachers who teach): Final sample = 1158 (Intended sample = 1344)
Pupil
Final sample = 2575 (Intended sample = 2688)
Since the aim of the survey was to make an inference about eligible schools, the 5 ineligible schools (mentioned previously) were removed from the sampling frame of 1008 schools.
Therefore the weighting procedures were designed to adjust the school weights only for non-response of eligible schools, after correction was made to the total number of eligible schools in the affected sets (strata).
Appropriate weights were assigned to each sampled school, teacher and pupil. The weights were equal to the inverse of the overall sampling probabilities, taking into account each stage of selection. The school, teacher and pupil weights were calculated at the school level.
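As a simplified illustration of the inverse-probability logic, the school weight can be sketched as below. The counts are hypothetical, and the production formulas include the further corrections described in the sections that follow:

```python
def school_weight(n_eligible_in_stratum, n_responding_sampled):
    """School weight = inverse of the selection probability within a set
    (stratum), adjusted for non-response:
    eligible schools in the set / responding sampled schools."""
    return n_eligible_in_stratum / n_responding_sampled

# A full stratum: 12 eligible schools, all 4 sampled schools interviewed
w_full = school_weight(12, 4)      # 3.0
# A stratum with 1 ineligible school and 1 non-responding sampled school
w_adjusted = school_weight(11, 3)
```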
------------------------------------------------------------------------
School and Head teacher (interview)
------------------------------------------------------------------------
· In the case of strata in which fewer than 4 sampled schools were successfully interviewed, the weighting formula automatically adjusts the weight for non-response.
· In the case of strata in which fewer than 12 schools were found to be eligible, the weighting formula also adjusts for the smaller number of eligible schools.
Each sampled school has one head teacher, so the head teacher has the same weight as the school.
Name of weight variable = weight_school
Stata SVY settings:
svyset [pw=weight_school], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school)
------------------------------------------------------------------------
Teacher (interview)
------------------------------------------------------------------------
The teacher weights for the 4 'selected' eligible teachers in each sampled school are generally equal to the school weights.
· In the case of small schools with fewer than 4 (but at least 1) eligible teacher(s), the teacher weight is also equal to the school weight if all these teachers were successfully interviewed.
· In the case of sampled schools where 'selected' eligible teachers could not be interviewed due to short-term absence even after revisits, it was necessary to adjust the weight for non-response.
Name of weight variable = weight_teacher
Stata SVY settings:
svyset [pw=weight_teacher], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school)
------------------------------------------------------------------------
Head Teacher/Teacher (interview)
------------------------------------------------------------------------
Two further data-files have been compiled for analysis combining both head teachers and teachers, with the corresponding weights included:
(1) Teachers and all Head Teachers: Name of weight variable = weight_httall
Stata SVY settings:
svyset [pw=weight_httall], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school)
(2) Teachers and Head Teachers who teach: Name of weight variable = weight_htt
Stata SVY settings:
svyset [pw=weight_htt], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school)
------------------------------------------------------------------------
Lesson observation and TDNA
------------------------------------------------------------------------
These weights were calculated in a similar way to the teacher interview weights, taking into account the number of eligible teachers who had been successfully observed or had taken the TDNA. In addition, 6 lesson observations were removed from the data at the analysis stage since the recorded lesson duration was less than 9 minutes, and the weights were adjusted accordingly.
(1) Lesson observation: Name of weight variable = weight_lo
Stata SVY settings:
svyset [pw=weight_lo], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school)
(2) TDNA: Name of weight variable = weight_tdna
Stata SVY settings:
svyset [pw=weight_tdna], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school)
------------------------------------------------------------------------
Pupil
------------------------------------------------------------------------
The calculation involved multiplying the school weight by the inverse of the within-school probability of selection for the sample of eligible pupils.
In the case of small schools with 1-8 eligible grade 3 pupils, all were selected for the survey and the weight calculated accordingly.
Name of weight variable = weight_pupil
Stata SVY settings:
svyset [pw=weight_pupil], psu(id_school) strata(strata) singleunit(scaled) fpc(fpc_school) || _n fpc(fpc_pupil)
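The two-stage pupil weight can be sketched as follows, using hypothetical counts (the school weight comes from the first-stage calculation described earlier):

```python
def pupil_weight(w_school, n_eligible_present, n_assessed):
    """Pupil weight = school weight x inverse within-school selection
    probability (eligible grade 3 pupils present / pupils assessed)."""
    return w_school * n_eligible_present / n_assessed

w = pupil_weight(3.0, 30, 8)       # 3.0 * 30/8 = 11.25
w_small = pupil_weight(3.0, 5, 5)  # all eligible pupils assessed -> 3.0
```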
The survey administered five different instruments; the themes they covered are described in the 'Scope' section.
All instruments were administered using Computer-Assisted Personal Interviewing (CAPI), except for the TDNA which was administered on paper to mimic real life marking of pupil tests and preparation of worksheets. The TDNA was administered at examination centres after the completion of the quantitative baseline survey.
(1) Development of the questionnaire(s)
OPM had responsibility for managing the process of questionnaire development.
PUPIL ASSESSMENT: The English literacy and numeracy items in the pupil assessment were adapted from the grade 2 learning assessments used for ESSPIN's biennial composite school survey (CS). These assessments were used to collect pupil learning data for ESSPIN's baseline in 2010 and follow-up surveys in 2012 and 2014. The scientific literacy items in the test were jointly developed by EDOREN's education consultants and TDP's instrument developers, based on the grade 2 science and technology curriculum prescribed by the Nigerian Educational Research and Development Council (NERDC) and the Universal Basic Education Commission (UBEC).
TDNA: The approach to measurement of teacher subject knowledge in the quantitative baseline is closely based on the teacher assessment framework, TDNA instruments, benchmark of expected teacher professional working knowledge, and levels-of-achievement framework developed previously by a reference group of national educators and international experts for two DFID-funded education programmes in Nigeria, namely ESSPIN and GEP.
The Head Teacher and Teacher interview instruments were developed by EDOREN consultants.
The lesson observation instrument was developed by EDOREN consultants in collaboration with TDP.
Two pre-tests were conducted (one in Abuja, one in Kaduna). EDOREN consultants were responsible for revising instruments based on feedback from pre-tests.
(2) Languages
The head teacher and teacher interviews were conducted in Hausa. The pupil numeracy and scientific literacy assessments were also administered in Hausa, as were a number of the English literacy items, except those where comprehension of the question in English was essential to the competency being tested. The TDNA was administered in English.
Start | End | Cycle |
---|---|---|
2014-10-27 | 2015-01-16 | Baseline |
Name |
---|
Oxford Policy Management Ltd |
State Universal Basic Education Board - Jigawa |
State Universal Basic Education Board - Katsina |
State Universal Basic Education Board - Zamfara |
Please see section on 'Notes on Data Collection'
(1) Pre-fieldwork requirements
Detailed enumerator manuals were developed for all instruments (particularly detailed for the pupil tests and lesson observations).
Data collectors were staff seconded from the SUBEBs of the three TDP phase-1 states for five weeks of data collection, selected on the basis of a written quiz testing survey skills, experience, and IT skills. Data collectors were specifically trained to collect high-quality data while protecting the identities and interests of vulnerable groups (e.g. children with disabilities).
The fieldwork plan required 48 data collectors (16 per state) but additional data collectors were trained as a contingency/backup. In total around 70 data collectors were trained. The final 48 were selected based on their general level of participation during the training as well as performance on a series of quizzes/tests conducted during the training. The contingency data collectors were not required in the end.
There were two field supervisors ('state coordinators') per state. There was an overall fieldwork manager in charge of all the states (but especially focusing on Zamfara and Katsina) and a deputy fieldwork manager (in charge of Jigawa). There were three roving IT assistants across the three states to troubleshoot CAPI related issues if any. The number of supervisors was adequate in relation to the number of enumerators.
(2) Timetable for field activities
A timetable giving the time frame for each field activity was made available to all team members (except data collectors) prior to commencement of field work. The allocation of time in the timetable was sufficient.
(3) Fieldwork training
To ensure consistency in administering the various interviews, tests and observations, rigorous and uniform training for all enumerators and their supervisors was conducted using the enumerator manuals as reference material. The training also included sessions on duty of care and security, CAPI, and how to upload daily data to the server (supervisors only).
Three field pilots took place in Kaduna state in the month preceding the start of data collection.
(4) Fieldwork implementation (data collection)
Four cars were available per state, carrying 4 data collectors and a supervisor/IT assistant each. The fieldwork manager and deputy had a car each to themselves to conduct field supervision. Adequate transport was arranged, with standard duty-of-care procedures in place to ensure that fuel, oil and vehicle maintenance issues were dealt with effectively.
Field coordinators were trained to download data daily from enumerators' CAPI devices and transmit them via the internet to survey data managers in Abuja. Data managers, in turn, conducted daily checks on the data for errors such as incomplete questionnaires, incorrect school/teacher/pupil IDs and ID duplication. These were reported back to the field coordinators for rectification the next day, while the teams were still in the same LGA and before they moved to the next LGA on their fieldwork plan.
Field coordinators were also trained to complete a survey monitoring form (one per school) to provide more qualitative information, such as what time the team reached the school, whether any issues were experienced in accessing the school, whether and why teachers were replaced, reasons for a smaller than expected number of respondents in the school, the need to revisit the school (say, if a sampled teacher was ill and away), and so on.
Data were transmitted to the lead quantitative researcher on a weekly basis for further checks while the team was still in the field. These included checking for inconsistent values not picked up by CAPI, which the field teams were then requested to clarify with respondents by phone or through revisits during the final/contingency week of fieldwork. In summary, a large part of the data cleaning task was completed, and issues rectified, on a continuous basis while the teams were still in the field. At the end of fieldwork, further data cleaning was conducted by data managers (e.g. removing duplicated sampling units and correcting IDs) and the data passed on to the lead researcher. The quality of the data received was fairly workable, owing partly to the CAPI checks built into the questionnaires and partly to the daily checks by data managers and rectification in the field.
Data cleaning and analysis were conducted from December 2014 through March 2015 by a small team based in the OPM office in Oxford.
All statistical analyses were performed with Stata, using its 'svy' facilities for survey data analysis to account for the sampling design.
Name | Affiliation |
---|---|
Alia Aghajanian | Oxford Policy Management Ltd |
Name | URL | |
---|---|---|
Oxford Policy Management Ltd | http://www.opml.co.uk/ | admin@opml.co.uk |
The data files have been anonymised and are available as a Public Use Dataset. They are accessible to all for statistical and research purposes only, under the following terms and conditions:
Use of the dataset must be acknowledged with a citation, for example:
Oxford Policy Management Ltd. Nigeria Teacher Development Programme In-Service Training Impact Evaluation Baseline Survey (TDPITCIE-BL) 2014, Ref. NGA_2014_TDPITCIE-BL_v01_M. Dataset downloaded from [url] on [date].
The user of the data acknowledges that the original collector of the data, the authorised distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
(c) 2016, Oxford Policy Management Ltd
Name | Affiliation | URL | |
---|---|---|---|
Sourovi De, Project Manager | Oxford Policy Management Ltd | sourovi.de@opml.co.uk | http://www.opml.co.uk/people-partners/consultants/sourovi-de |
DDI_NGA_2014_TDPITCIE-BL_v01_M
Name | Affiliation | Role |
---|---|---|
Stephi Springham | Oxford Policy Management Ltd | Documentation of study and preparation of dataset to be published |
2016-06-17
Version 01 (June 2016)
Version 02 (August 2016). Edited version based on the Version 01 DDI (DDI-OPM-A0302-NGA-R1BL-V01) prepared by Oxford Policy Management Ltd.