The evaluation considers a range of ADA program outcomes, including production and profitability, investment and technology adoption, employment and wages, and access to credit and markets. Although it was originally designed as a rigorous impact evaluation incorporating a randomized design, the evaluation was not able to undertake a rigorous statistical analysis of the program's effects on these outcomes for a number of reasons, including the small overall size of the program, changes during implementation that compromised the original evaluation design, and the timing of the evaluation. Instead, the evaluation uses a mixed-methods approach, combining qualitative data with descriptive quantitative analysis, to assess the impact of the project.
Qualitative data collection included focus group discussions and in-depth interviews that gathered detailed information from a total of 69 respondents. Respondents were recruited from among those who responded to the ADA survey and were grouped by type of grantee (PP, VA/VCI, and FSC as separate groups) and by characteristics of interest based on their ADA survey responses (those that reported an increase in income, those that did not respond to income questions, those that closed their businesses, exporters, and machinery ring grantees). These interviews and focus groups were transcribed and analyzed using the specialized software package NVivo to systematically categorize responses and identify commonalities. Themes of interest to the evaluation were identified and then coded across all of the transcriptions. Summaries of responses by code and respondent type were completed and interesting cases were highlighted, providing concrete examples of project results and feedback that also helped in interpreting the quantitative data.
Kind of data
Sample survey data [ssd]
Anonymized dataset for public distribution
The program was implemented nationally.
Unit of analysis
Small, medium and large agribusinesses (MCG grantees and non-grantees)
Applicants to the ADA program from all application rounds (9 in total) in Georgia.
Producers and sponsors
NORC at the University of Chicago
Millennium Challenge Corporation
Funded evaluation and Round 3 data collection
Millennium Challenge Georgia
Funded Round 1 and Round 2 data collection
The frame for the survey is the list of all applicants. It was supplied by CNFA, the program implementer, along with the scores from the initial evaluation, various statuses assigned by CNFA, and various items of information taken from the applications.
Each of the four applicant types was considered a separate stratum: primary producers (PPs), farm service centers (FSCs), value adders (VAs), and value chain enterprises (VCHs).
For PPs, one comparison case was selected for each new treatment case. A propensity score matching (PSM) methodology was used to select the comparison cases, using binary logistic regression. The dependent variable was the event of being a treatment case. The independent variables, all available from data supplied by CNFA on the frame, were:
* the amount of matching contribution the applicant proposed to make
* the current turnover of the business when it made its application
* the number of employees of the business when it made its application
* whether the business was able to secure credit
* the year in which the business was established
* whether the business was located in a village or larger town
* the type of activity the business was proposing to be engaged in
* the round in which the applicant applied
For each PP treatment case, the comparison case with the closest PSM score was selected for inclusion in the survey sample, as long as it had not been selected for interview previously.
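The selection logic for PP comparison cases can be sketched as follows. This is a minimal illustration, not the evaluators' actual code: the synthetic covariates and the plain gradient-ascent fit are assumptions, and in practice the model would be estimated with standard logistic regression software over the frame variables listed above.

```python
import numpy as np

def propensity_scores(X, treated, lr=0.1, iters=2000):
    """Fit a binary logistic regression of treatment status on the
    covariates (via plain gradient ascent) and return predicted scores."""
    Xb = np.hstack([np.ones((len(X), 1)), X])          # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(treated)  # log-likelihood gradient
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def match_comparisons(scores, treated):
    """1:1 nearest-neighbour matching without replacement: each treatment
    case takes the unused comparison case with the closest score."""
    pool = [i for i in range(len(scores)) if not treated[i]]
    matches = {}
    for t in np.flatnonzero(treated):
        best = min(pool, key=lambda c: abs(scores[c] - scores[t]))
        matches[t] = best
        pool.remove(best)
    return matches
```

Matching without replacement mirrors the rule that a comparison case already selected for interview is skipped.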
For the other applicant types (FSCs, VAs and VCHs), stratified random sampling was used to select comparison cases. Because the populations were relatively small, two comparison cases were selected for each treatment case. Selection of comparison cases was to be made within the same strata in which the treatment cases occurred. The strata were defined in terms of the current turnover of the business when it made its application and the year in which the business was established. Type of activity was also used to define the strata for VAs and VCHs.
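The within-stratum selection for the other applicant types can be sketched as below. The field names (`treated`, `stratum`) are hypothetical stand-ins for the turnover band, establishment year, and activity type used to define the strata; drawing without reuse across treatments is an assumption.

```python
import random
from collections import defaultdict

def select_comparisons(cases, seed=0):
    """cases: list of dicts with 'id', 'treated' (bool), and 'stratum'
    (e.g. a (turnover_band, year_established) tuple). For every treatment
    case, draw up to two comparison cases at random from the same stratum."""
    rng = random.Random(seed)
    pools = defaultdict(list)
    for c in cases:
        if not c["treated"]:
            pools[c["stratum"]].append(c["id"])
    selected = {}
    for c in cases:
        if c["treated"]:
            pool = pools[c["stratum"]]
            picks = rng.sample(pool, min(2, len(pool)))
            for p in picks:
                pool.remove(p)        # do not reuse a comparison case
            selected[c["id"]] = picks
    return selected
```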
The following sampling rules were applied:
1. Include all businesses that had been interviewed in Round 1 from ADA application waves 1 to 7.
a) Interviewees from ADA application waves 8 and 9 were excluded because those interviews had been conducted too recently to expect significant change to have taken place in the meantime.
b) Selections were made in terms of "businesses" rather than "applications" because some businesses had applied several times. Where a selected business had made multiple applications, the most recent application was nominally selected for inclusion in the survey, regardless of whether that application or an earlier one was the basis of interview in ADA application waves 1 to 7. The most recent one was chosen because it would have the most up-to-date contact information.
c) 199 applications were selected on this basis.
2. Include treatment cases from any ADA application wave that had not yet been interviewed in Round 1. Some of these were previous nonrespondents, and some appeared to have wrongly claimed to have been previously interviewed on the basis of another application. 29 applications were selected on this basis.
3. Include applicants that scored 70+ (a passing score) in ADA application waves 1-7 that had not yet been interviewed and were not previous nonrespondents. Most appear to have wrongly claimed to have been previously interviewed on the basis of another application. 8 applications were selected on this basis.
4. PPs and VAs were not fully enumerated in Round 1, and the process used to randomly select applicants with a score less than 70 has not enabled the probability of selection to be derived. Therefore, for Round 2, select a random sample of 100 PPs and 25 VAs applications, where (i) neither they nor any related application was interviewed in ADA application waves 8 or 9, and (ii) neither they nor any related application received a score of 70+. If the selected application has not already been selected under condition 1 above, include in the Round 2 Survey.
a) 78 PP applications were selected on this basis, that is, 22 of the 100 were already selected under condition 1 above.
b) 18 VA applications were initially selected on this basis, that is, 7 of the 25 were already selected under condition 1 above.
However, as there were only 20 eligible VAs to be chosen under this condition, all 20 were included and so the VAs became fully enumerated.
In total there were 334 applications selected for inclusion in the survey.
The frame and summary information about the selections are included in the external resource "Followup frame and selections.xlsx".
The sample frame was created by NORC and included all cases that were part of the sample in Round 1 and all cases that were part of the sample in Round 2. The sample comprised treatment and control groups, with three main types of businesses in each group. Overall, 600 face-to-face interviews were planned for Round 3. This sample frame was then put through a re-listing exercise to update it, since the list of business status and contact information included many incorrect telephone numbers and addresses, there had been turnover in owners/managers of agribusinesses, and some businesses had shut down.
For the relisting exercise, ACT first tried calling the phone numbers, then conducted field visits to the listed addresses. If ACT was still unable to locate a business, its regional coordinators contacted local authorities/representatives. Once a business was contacted, updated information about its status, location, and contact details was collected for use during the main data collection. This updated list was the sample used for data collection.
Deviations from sample design
It should be noted that the model for PPs was re-estimated many times and some comparison cases were selected on the basis of the PSM scores generated in each of those runs. First of all, it had to be re-estimated for each wave of the survey, as new applicants appeared in the frame and new treatment cases were chosen by CNFA. Secondly, many applicants did not have values for all the independent variables, and therefore the model was re-estimated a number of times with varying reduced sets of independent variables.
In practice, the strata were defined with too much detail and comparison cases often could not be found in the same strata as treatment cases. Therefore strata had to be combined. This was done in an ad hoc way, with the result that the probability of selection is not available and corresponding sampling weights cannot be calculated.
By wave 4, it was also found that the pool of comparison cases was so small for FSCs and VCHs that all cases had to be included in the sample, that is, these categories are fully enumerated. This then applies to wave 5 also.
Selection of comparison cases was on a quota basis, that is, there was substitution for non-responding selections and for selections that no longer existed as separate entities.
This occurred because some green-field proposals never commenced operations, because some businesses ceased operations, and because some businesses merged with or had always operated jointly with other applicants that had already been interviewed.
During the course of the survey, two notable changes were made to the frame. First, it was discovered that one applicant had not been included in the CNFA Masterlist. This was a VCI applicant and it was therefore added to the survey. Second, it was discovered during interview (and subsequently confirmed) that applicant #318 should have been classified as a PP and not as a VA.
Detailed information about the response rates for each variable is provided in the external resource "Variable Response Rates_Follow-Up.xlsx".
Case response rates are included in the data file as a separate variable. Each case's rate was calculated by counting all answered items in the case, counting the valid answers among them (excluding refused, coded -1, and don't know, coded -2), and dividing the second number by the first.
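The case-level calculation described above amounts to the following. The -1 and -2 codes follow the convention stated in the text; the function name is illustrative.

```python
REFUSED, DONT_KNOW = -1, -2

def case_response_rate(codes):
    """codes: the answer codes recorded for one case. The rate is the share
    of answered items carrying a valid answer, i.e. anything other than
    refused (-1) or don't know (-2)."""
    if not codes:
        return 0.0
    valid = [c for c in codes if c not in (REFUSED, DONT_KNOW)]
    return len(valid) / len(codes)
```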
In total, 335 applicants were selected for interview. Of these, 217 (65%) were interviewed face to face. A further 51 (15%) had ceased business or were merged with other businesses but were interviewed briefly by telephone. There were 33 noncontacts (including 2 businesses that are known to have been sold to new owners) and 32 refusals (including one ceased or merged business). Details are available in the external resource “Selection and field disposition summary.xlsx”.
The response rate was calculated based on the number of complete cases out of the number of eligible cases in the sample. A case was considered complete using the American Association for Public Opinion Research (AAPOR) standard where 80% of the questions that were supposed to be answered have a valid answer (non-valid answers consisted of “don’t know”, refusals, or interviewer errors that skipped the question), taking into consideration the skip patterns.
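The completeness rule can be sketched as follows, assuming -1/-2 codes for refusals and don't know and `None` for a question skipped in error (the encodings are illustrative, not the actual data file's):

```python
def is_complete(answers, threshold=0.8):
    """answers: codes for the questions a respondent was supposed to answer,
    after applying skip patterns. Valid = anything except refused (-1),
    don't know (-2), or an interviewer skip error (None)."""
    if not answers:
        return False
    invalid = {-1, -2, None}
    valid = sum(1 for a in answers if a not in invalid)
    return valid / len(answers) >= threshold

def response_rate(cases):
    """Share of eligible cases that meet the completeness threshold."""
    return sum(is_complete(c) for c in cases) / len(cases)
```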
The overall response rate was 68.3% (397 complete cases out of 593 eligible cases). The response rates for each group are as follows:
Primary producer treatment: 75% (114 out of 151)
Primary producer comparison: 43% (155 out of 362)
Value Adders/Value Chain initiative treatment: 71% (34 out of 48)
Value Adders/Value Chain initiative comparison: 34% (56 out of 163)
Farm Service Center treatment: 71% (24 out of 34)
Farm Service Center comparison: 21% (14 out of 68)
Round 1: Not calculated.
Round 2: A census was conducted of FSCs, VAs, and VCIs. While there is a random selection of additional PPs in the sample, not all selections were chosen through the random process, and so no weights have been derived for Round 2.
Round 3: Not calculated.
Dates of collection
Round 1, wave 1 (application rounds 1-5)
Round 1, wave 2 (application round 6)
Round 1, wave 3 (application round 7)
Round 1, wave 4 (application round 8)
Round 1, wave 5 (application round 9)
Data collection supervision
Round 1 and 2:
The fieldwork activities were implemented by 33 IPM interviewers, coordinated by 9 regional supervisors. The supervisors' role was to control the process, correct mistakes made at the beginning of the fieldwork, and oversee the remainder of the fieldwork. The supervisors also monitored interviewers in the field to observe how well they understood the specifics of the survey, how well they performed their duties, how appropriately they interacted with respondents in each target group, and whether they conveyed the correct messages to respondents, avoiding misinterpretation of the survey objective and false expectations regarding the project.
Round 3:
Interviews were conducted by individual interviewers and submitted to regional coordinators. ACT used a team of 9 regional coordinators and 35 interviewers. There was one regional coordinator for each region, working with a team of 3-7 people depending on the size of the sample in that region. Completed questionnaires were submitted to the Tbilisi office for further revision. A control group based in the Tbilisi office randomly selected 25% of the completed questionnaires per interviewer for control activities: 15% of the interviews were checked via telephone, while 10% of the respondents were visited and asked a fixed set of questions to check data validity. A revision specialist was responsible for reviewing each questionnaire, checking for missed questions, skip errors, and incorrectly completed fields, and checking for inconsistencies in the data.
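The back-check split described above can be sketched as below. How fractional quotas were rounded is not stated in the text, so the `ceil` rounding here is an assumption.

```python
import math
import random

def select_for_control(completed_by_interviewer, seed=0):
    """Per interviewer, randomly pick 25% of completed questionnaires for
    back-checking: 15% checked by telephone, 10% by a repeat field visit."""
    rng = random.Random(seed)
    phone, visit = [], []
    for interviewer, qs in completed_by_interviewer.items():
        n_phone = math.ceil(len(qs) * 0.15)   # rounding is an assumption
        n_visit = math.ceil(len(qs) * 0.10)
        chosen = rng.sample(qs, min(n_phone + n_visit, len(qs)))
        phone.extend(chosen[:n_phone])
        visit.extend(chosen[n_phone:])
    return phone, visit
```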
The ADA evaluation uses data collected with a farm- and business-level survey, the ADA Survey, designed specifically for the evaluation of the program. The ADA Survey is a longitudinal panel survey comprising three different surveys and three different rounds. The three different surveys are:
- Primary Producer Survey: enterprise descriptives; production inputs, volumes, and values; marketing distribution channels; number of employees and salaries; enterprise assets; transportation accessibility; utilities; business growth potential; sources of finance; impact of natural disasters.
- Farm Service Center Survey: enterprise descriptives; geographic catchment area; value of products and services provided; machinery and equipment; number of employees and salaries; enterprise assets; transportation accessibility; utilities; business growth potential; sources of finance; impact of natural disasters.
- Value Adder and Value Chain Initiative Grantee Survey: enterprise descriptives; production inputs, volumes, and values; marketing distribution channels; number of employees and salaries; enterprise assets; transportation accessibility; utilities; business growth potential; sources of finance; impact of natural disasters.
Questionnaires were designed in English and then translated, tested through cognitive interviews (for Round 3 only), piloted prior to every round, and fielded in Georgian. Questionnaires in English and Georgian are provided in the external resources.
Institute for Polling and Marketing
Millennium Challenge Georgia contractor
Analysis and Consulting Team
Round 1 and Round 2:
Data editing took place at a number of stages throughout the processing, including:
a) Primary logical control of the questionnaire:
At the end of each week the interviewers submitted their completed questionnaires to the regional supervisors. In the presence of the interviewers, the supervisors checked that the questionnaires had been filled in accurately and that logical skip patterns had been followed. Where mistakes were discovered, such as a question without a response or ambiguous information, they called the respondents and verified the data.
b) Secondary logical control of the questionnaire:
From the beginning of the field period, at the end of each week (in some cases after two weeks), the regional supervisors sent the questionnaires they had checked to Tbilisi. Secondary logical control of these questionnaires, as well as coding, was conducted in the IPM office.
Round 3:
Data editing took place at a number of stages throughout the processing, including:
a) Primary revision of the completed questionnaires by regional coordinators
b) Comprehensive checking of the questionnaire by revision specialist
c) Coding of the questionnaire
d) Structural checking of SPSS data files
100% double data entry was completed, with the exception of the first waves of Round 1. Beginning with ADA application wave 8 (Round 1, wave 4 of data collection), and based on DQR recommendations, the decision was made to do 100% double data entry. Detailed information is provided in the external resources.
Monitoring & Evaluation Division of the Millennium Challenge Corporation
Millennium Challenge Corporation
Review of Metadata
NORC at the University of Chicago
Compiled original metadata files, additions to MCC-compiled metadata file
Institute for Polling and Marketing
Produced original metadata for Rounds 1 and 2
Analysis and Consulting Team
Produced original metadata for Round 3
Version 1.1 (August 2014) Added missing fields and edited content.
Version 2.0 (April 2015). Edited version based on Version 01 (DDI-MCC-GEO-AG-2014-v1.1), which was produced by the Millennium Challenge Corporation.