Central Statistical Office (CSO)

Created on
**July 08, 2014**
Last modified
**July 08, 2014**

BLZ_1991_FHS_v01_M

Family Health Survey 1991

Name | Country code
---|---
Belize | BLZ

Other Household Health Survey [hh/hea]

The 1991 Belize Family Health Survey was the first national sample survey designed to provide information on fertility, infant mortality, family planning, and the use of maternal and child health services in Belize.

Belize is one of the countries in Latin America that was not included in the World Fertility Survey, the Contraceptive Prevalence Survey project, or the Demographic and Health Survey program during the 1970s and 1980s. As a result, data on contraceptive prevalence and the use of maternal and child health services in Belize have been limited. The 1991 Family Health Survey was designed to provide health professionals and international donors with data to assess infant and child mortality, fertility, and the use of family planning and health services in Belize.

The objectives of the 1991 Family Health Survey were to:

- obtain national fertility estimates;

- estimate levels of infant and child mortality;

- estimate the percentage of mothers who breastfed their last child and duration of breastfeeding;

- determine levels of knowledge and current use of contraceptives for a variety of social and demographic background variables and to determine the source where users obtain the methods they use;

- determine reasons for nonuse of contraception and estimate the percentage of women who are at risk of an unplanned pregnancy and, thus, in need of family planning services; and

- examine the use of maternal and child health services and immunization levels for children less than 5 years of age and to examine the prevalence and treatment of diarrhea and acute respiratory infections among these children.

Sample survey data [ssd]

National

Name | Affiliation
---|---
Central Statistical Office (CSO) | Ministry of Finance

Name
---
Division of Reproductive Health, Centers for Disease Control

Name | Abbreviation
---|---
United States Agency for International Development | USAID

The 1991 Belize Family Health Survey was an area probability survey with two stages of selection. The sampling frame for the survey was the quick count of all households in the country conducted in 1990 by the Central Statistical Office in preparation for the 1991 census. Two strata, or domains, were sampled independently: urban areas and rural areas. In the first stage of selection for the urban domain, a systematic sample with a random start was used to select enumeration districts in the domain with probability of selection proportional to the number of households in each district. In the second stage of selection, households were chosen systematically using a constant sampling interval (4.2350) across all of the selected enumeration districts. The enumeration districts selected for the rural domain were the same as those that had been selected earlier for the 1990 Belize Household Expenditure Survey. The second stage selection of rural households was conducted the same way it was for the urban domain but used a constant sampling interval of 2.1363. In order to have a self-weighting geographic sample, 3,106 urban households and 1,871 rural households were selected for a total of 4,977 households.
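The second-stage selection described above — systematic sampling with a random start and a constant fractional interval — can be sketched as follows. This is a minimal illustration, not the survey's actual selection program; the interval values 4.2350 (urban) and 2.1363 (rural) come from the text, while all function and variable names here are hypothetical.

```python
import random

def systematic_sample(households, interval, seed=None):
    """Systematic selection: draw a random start in [0, interval),
    then take every `interval`-th position along the ordered frame
    (fractional intervals are handled by flooring the running position)."""
    rng = random.Random(seed)
    start = rng.uniform(0, interval)
    selected = []
    pos = start
    while pos < len(households):
        selected.append(households[int(pos)])
        pos += interval
    return selected

# Urban households were selected with a constant interval of 4.2350
# across all selected enumeration districts; rural used 2.1363.
urban_frame = list(range(1000))  # stand-in household IDs
urban_selected = systematic_sample(urban_frame, 4.2350, seed=1)
print(len(urban_selected))       # roughly 1000 / 4.2350, i.e. about 236
```

Using a constant interval across all selected districts within a domain is what makes the design self-weighting within that domain: every household in the domain has the same overall selection probability.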

Only one woman aged 15-44 per household was selected for interview. Each respondent's probability of selection was inversely proportional to the number of eligible women in the household. Thus, weighting factors were applied to compensate for this unequal probability of selection. In the tables presented in this report, proportions and means are based on the weighted number of cases, but the unweighted numbers are shown.
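Because one woman is selected at random per household, a respondent from a household with k eligible women has selection probability 1/k, so her design weight is proportional to k. A minimal sketch of this weighting and of computing a weighted proportion (all names here are illustrative, not from the survey's actual processing code):

```python
def respondent_weight(n_eligible_women):
    """Design weight compensating for within-household selection:
    selection probability is 1/k, so the weight is k."""
    if n_eligible_women < 1:
        raise ValueError("household has no eligible respondent")
    return float(n_eligible_women)

def weighted_proportion(values, weights):
    """Estimate a proportion from 0/1 indicator values using design weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# A respondent from a 3-woman household represents three times as many
# women as one from a 1-woman household.
weights = [respondent_weight(k) for k in (1, 2, 3)]
print(weights)  # [1.0, 2.0, 3.0]
```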

Of the 4,977 households selected, 4,566 households were visited. Overall, 8 percent of households could not be located, and 7 percent of the households were found to be vacant. Less than 3 percent of the households refused to be interviewed. Fifty-five percent of sample households included at least one woman aged 15-44. Complete interviews were obtained in 94 percent of the households that had an eligible respondent, for a total of 2,656 interviews. Interview completion rates did not vary by residence.

Start | End
---|---
1991-01-15 | 1991-02-19

Face-to-face [f2f]

Because the Central Statistical Office was involved in other projects, namely an on-going Household Expenditure Survey and preparations for the national census scheduled to begin in May 1991, the time allocated for implementing and completing this survey was extremely limited. Thus, pretesting of the questionnaire and training of supervisors and interviewers were completed in just 10 days.

Normally, for a survey of this magnitude and complexity, two weeks are set aside for training and three months for fieldwork. Originally, only three weeks were allocated for fieldwork, but after the second week it was extended to five weeks. The accelerated schedule essentially eliminated the possibility of providing feedback to field supervisors and interviewers on inconsistencies and omissions found in questionnaires at the data entry point. Consequently, extensive editing of the survey data set was required following fieldwork.

Fieldwork was conducted from January 15 to February 19, 1991.

The estimates for a sample survey are affected by two types of errors: (1) sampling error and (2) non-sampling error. Non-sampling error is the result of mistakes made in carrying out data collection and data processing, including the failure to locate and interview the right household, errors in the way questions are asked or understood, and data entry errors. Although quality control efforts were made during the implementation of the Family Health Survey to minimize this type of error, non-sampling errors are impossible to avoid and difficult to evaluate statistically.

Sampling error is defined as the difference between the true value for any variable measured in a survey and the value estimated by the survey. Sampling error is a measure of the variability between all possible samples that could have been selected from the same population using the same sample design and size. For the entire population and for large subgroups, the Family Health Survey is large enough that the sampling error for most estimates is small. However, for small subgroups, sampling errors are larger and may affect the reliability of the estimates. Sampling error is usually measured in terms of the standard error for a particular statistic (mean, proportion, or ratio), which is the square root of the variance. The standard error can be used to calculate confidence intervals for estimated statistics. For example, the 95 percent confidence interval for a statistic is the estimated value plus or minus 1.96 times the standard error for the estimate.

The standard errors of statistics estimated using a multistage cluster sample design, such as that used in the Family Health Survey, are more complex than are standard errors based on simple random samples, and they tend to be somewhat larger than the standard errors produced by a simple random sample. The increase in standard error due to using a multi-stage cluster design is referred to as the design effect, which is defined as the ratio between the variance for the estimate using the sample design that was used and the variance for the estimate that would result if a simple random sample had been used. Based on experience with similar surveys, the design effect generally falls in a range from 1.2 to 2.0 for most variables.

Table E.1 of the Final Report presents examples of what the 95 percent confidence interval on an estimated proportion would be, under a variety of sample sizes, assuming a design effect of 1.6. It presents half-widths of the 95 percent confidence intervals corresponding to sample sizes, ranging from 25 to 3200 cases, and corresponding to estimated proportions ranging from .05/.95 to .50/.50. The formula used for calculating the half-width of the 95 percent confidence interval is:

(half of 95% C.I.) = (1.96) SQRT {(1.6)(p)(1-p) / n},

where p is the estimated proportion, n is the number of cases used in calculating the proportion, and 1.6 is the design effect. It can be seen, for example, that for an estimated proportion of 0.30, and a sample of size of 200, half the width of the confidence interval is 0.08, so that the 95 percent confidence interval for the estimated proportion would be from 0.22 to 0.38. If the sample size had been 3200, instead of 200, the 95 percent confidence interval would be from 0.28 to 0.32.
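The half-width formula and the worked example above can be reproduced directly. This is a straightforward transcription of the formula given in the text, with the design effect of 1.6 as the stated assumption:

```python
import math

def ci_half_width(p, n, deff=1.6, z=1.96):
    """Half-width of the 95% confidence interval for an estimated
    proportion p from n cases, inflated by the assumed design effect."""
    return z * math.sqrt(deff * p * (1 - p) / n)

# Worked example from the text: p = 0.30, n = 200 gives a half-width
# of about 0.08, i.e. a 95% CI of roughly (0.22, 0.38).
print(round(ci_half_width(0.30, 200), 2))   # 0.08
print(round(ci_half_width(0.30, 3200), 2))  # 0.02, i.e. CI (0.28, 0.32)
```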

The actual design effect for individual variables will vary, depending on how values of that variable are distributed among the clusters of the sample. These can be calculated using advanced statistical software for survey analysis.

The user of the data acknowledges that the original collector of the data, the authorized distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.

DDI_BLZ_1991_FHS_v01_M_WBDG

2011-12-21
