817 results for SAMPLE SURVEYS
Abstract:
The ability to evaluate the effects of factors on outcomes is increasingly important for a class of studies that control some, but not all, of the factors. Although important advances have been made in methods of analysis for such partially controlled studies, work on designs for them has been relatively limited. To help understand why, we review the main designs that have been used for partially controlled studies. Based on this review, we give two complementary reasons that explain the limited work on such designs, and we suggest a new direction in this area.
Abstract:
Five case study communities in both metropolitan and regional urban locations in Australia are used as test sites to develop measures of 'community strength' on four domains: Natural Capital; Produced Economic Capital; Human Capital; and Social and Institutional Capital. The paper focuses on the fourth domain. Sample surveys of households in the five case study communities used a survey instrument with scaled items to measure four aspects of social capital - formal norms, informal norms, formal structures and informal structures - that embrace the concepts of trust, reciprocity, bonds, bridges, links and networks in the interaction of individuals with their community, concepts inherent in the notion of social capital. Exploratory principal components analysis is used to identify factors that measure those aspects of social and institutional capital, while a confirmatory analysis based on Cronbach's alpha explores the robustness of the measures. Four primary scales and 15 subscales are identified when defining the domain of social and institutional capital. Further analysis reveals that two measures - anomie, and perceived quality of life and wellbeing - relate to certain primary scales of social capital.
Abstract:
In 2001/02, five case study communities in both metropolitan and regional urban locations in Australia were chosen as test sites to develop measures of community strength on four domains: natural capital; produced economic capital; human capital; and social and institutional capital. Secondary data sources were used to develop measures on the first three domains. For the fourth domain, social and institutional capital, primary data collection was undertaken through sample surveys of households. A structured approach was devised. This involved developing a survey instrument using scaled items relating to four elements - formal norms, informal norms, formal structures and informal structures - which embrace the concepts of trust, reciprocity, bonds, bridges, links and networks in the interaction of individuals with their community, concepts inherent in the notion of social capital. Exploratory principal components analysis was used to identify factors that measure those aspects of social and institutional capital, with confirmatory analysis conducted using Cronbach's alpha. This enabled the construction of four primary scales and 15 sub-scales as a tool for measuring social and institutional capital. Further analysis reveals that two measures - anomie, and perceived quality of life and wellbeing - relate to certain primary scales of social capital.
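The confirmatory step in the two abstracts above rests on Cronbach's alpha, an internal-consistency statistic for a set of scaled items. A minimal sketch of its computation (the score matrix below is invented for illustration, not the survey data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of row totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-respondent, 3-item matrix (invented data):
scores = np.array([[1, 2, 1], [2, 2, 2], [3, 4, 3], [4, 4, 5], [5, 5, 4]])
alpha = cronbach_alpha(scores)  # items that move together yield a high alpha
```

Scales whose items all measure the same construct score close to 1; in the degenerate case of perfectly correlated items, alpha equals exactly 1.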
Abstract:
Research on production systems design has in recent years tended to concentrate on ‘software’ factors such as organisational aspects, work design, and the planning of production operations. In contrast, relatively little attention has been paid to maximising the contributions made by fixed assets, particularly machines and equipment. However, as the cost of unproductive machine time has increased, reliability, particularly of machine tools, has become ever more important. Reliability theory and research have traditionally been based mainly on electrical and electronic equipment, whereas mechanical devices, especially machine tools, have not received sufficiently objective treatment. A recently completed research project considered the reliability of machine tools by taking sample surveys of purchasers, maintainers and manufacturers. Breakdown data were also collected from a number of engineering companies and analysed using both manual and computer techniques. The results provide an indication of the factors most likely to influence reliability, which in turn could lead to improved design and selection of machine tool systems. Statistical analysis of long-term field data has revealed patterns and trends of failure which could help in the design of more meaningful maintenance schemes.
Abstract:
As the hotel industry grows more competitive, quality guest service becomes an increasingly important part of managers' responsibility. Measuring the quality of service delivery is easier when managers know what types of assessment methods are available to them. The authors present and discuss the following measurement techniques and describe the situations where each best meets the needs of hotel managers: management observation, employee feedback programs, comment cards, mailed surveys, personal and telephone interviews, focus groups, and mystery shopping.
Abstract:
The purpose of this study was to examine the effects of the use of technology on students’ mathematics achievement, particularly the Florida Comprehensive Assessment Test (FCAT) mathematics results. Eleven schools within the Miami-Dade County Public School System participated in a pilot program on the use of Geometer's Sketchpad (GSP). Three of these schools were randomly selected for this study. Each school sent a teacher to a summer in-service training program on how to use GSP to teach geometry. In each school, the GSP class and a traditional geometry class taught by the same teacher were the study participants. Students’ mathematics FCAT results were examined to determine if the GSP produced any effects. Students’ scores were compared based on assignment to the control or experimental group as well as by gender and SES. SES was measured by whether students qualified for free lunch. The findings of the study revealed a significant difference in the FCAT mathematics scores of students who were taught geometry using GSP compared to those who were taught using the traditional method. No significant differences existed between the FCAT mathematics scores of the students based on SES. Similarly, no significant differences existed between the FCAT scores based on gender. In conclusion, the use of technology (particularly GSP) is likely to boost students’ FCAT mathematics test scores. The findings also show that the use of GSP may be able to close known gender- and SES-related achievement gaps. The results of this study support policy changes in the way geometry is taught to 10th grade students in Florida’s public schools.
Abstract:
This is an academic study that seeks to determine the feasibility of relaunching the craft magazine “Costureando”, founded in the city of Medellín and aimed at a specific audience: women in Colombia's socioeconomic strata 4, 5 and 6 who love crafts and wholesome entertainment activities. This document presents the product described above and analyzes it with a focus on understanding its commercial viability in the current market. The analysis is structured around specific objectives that guide the collection of information and are answered through final conclusions and recommendations, which draw on research in secondary sources, observations at points of sale, surveys of a population sample and interviews with experts.
Abstract:
Over the period 2008 to 2010, NaFIRRI carried out a number of socio-economic studies on the Kyoga lakes to provide an update of the socio-economic conditions of the fisheries and to address specific fisheries socio-economic issues and development concerns. Data were collected using key informant interviews, questionnaire sample surveys, focus group discussions, secondary data searches and field observations. The objective of this fact sheet is, therefore, to provide key information from these studies for use at national, district and community levels, as well as by other interested stakeholders.
Abstract:
Introduction: The National Oceanic and Atmospheric Administration’s Biogeography Branch has conducted surveys of reef fish in the Caribbean since 1999. Surveys were initially undertaken to identify essential fish habitat, but later were used to characterize and monitor reef fish populations and benthic communities over time. The Branch’s goals are to develop knowledge and products on the distribution and ecology of living marine resources and provide resource managers, scientists and the public with an improved ecosystem basis for making decisions. The Biogeography Branch monitors reef fishes and benthic communities in three study areas: (1) St. John, USVI, (2) Buck Island, St. Croix, USVI, and (3) La Parguera, Puerto Rico. In addition, the Branch has characterized the reef fish and benthic communities in the Flower Garden Banks National Marine Sanctuary, Gray’s Reef National Marine Sanctuary and around the island of Vieques, Puerto Rico. Reef fish data are collected using a stratified random sampling design and stringent measurement protocols. Over time, the sampling design has changed in order to meet different management objectives (i.e. identification of essential fish habitat vs. monitoring), but the designs have always remained:
• Probabilistic – to allow inferences to a larger targeted population,
• Objective – to satisfy management objectives, and
• Stratified – to reduce sampling costs and obtain population estimates for strata.
There are two aspects of the sampling design which are now under consideration and are the focus of this report: first, the application of a sample frame, identified as a set of points or grid elements from which a sample is selected; and second, the application of subsampling in a two-stage sampling design. To evaluate these considerations, the pros and cons of implementing a sampling frame and subsampling are discussed.
Particular attention is paid to the impacts of each design on accuracy (bias), feasibility and sampling cost (precision). Further, this report presents an analysis of data to determine the optimal number of subsamples to collect if subsampling were used. (PDF contains 19 pages)
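The "optimal number of subsamples" question in a two-stage design has a classical textbook (Cochran-style) answer that balances the cost of reaching a primary unit against the cost of each subsample, and within-unit against between-unit variance. A sketch of that rule, with hypothetical cost and variance figures rather than the report's values:

```python
import math

def optimal_subsamples(cost_primary: float, cost_secondary: float,
                       var_within: float, var_between: float) -> float:
    """Classical two-stage optimum: m = sqrt((c1/c2) * (s_w^2 / s_b^2)),
    where c1 is the cost of visiting one primary unit, c2 the cost of one
    subsample within it, s_w^2 the within-unit variance component and
    s_b^2 the between-unit variance component."""
    return math.sqrt((cost_primary / cost_secondary) * (var_within / var_between))

# Hypothetical figures: reaching a site costs 100x taking one subsample,
# and within-site variance is 4x the between-site variance.
m = optimal_subsamples(100.0, 1.0, 4.0, 1.0)  # -> 20.0 subsamples per site
```

The intuition matches the report's framing: when travel to a site dominates cost, or sites are internally heterogeneous, more subsamples per site pay off; when sites differ more than their interiors do, effort is better spent visiting more sites.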
Abstract:
Skin cancers pose a significant public health problem in high-risk populations. We have prospectively monitored basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) incidence in a Queensland community over a 10-y period by recording newly treated lesions, supplemented by skin examination surveys. Age-standardized incidence rates of people with new histologically confirmed BCC were 2787 per 100,000 person-years at risk (pyar) among men and 1567 per 100,000 pyar among women; corresponding tumor rates were 5821 per 100,000 pyar and 2733 per 100,000 pyar, respectively. Incidence rates for men with new SCC were 944 per 100,000 pyar and for women 675 per 100,000 pyar; tumor rates were 1754 per 100,000 pyar and 846 per 100,000 pyar, respectively. Incidence rates of BCC tumors, but not SCC tumors, varied noticeably according to the method of surveillance, with BCC incidence rates based on skin examination surveys around three times higher than background treatment rates. This was mostly due to an increase in the diagnosis of new BCC on sites other than the head and neck, arms, and hands during skin examination surveys, and had little to do with advancing the time of diagnosis of BCC on these sites, as shown by a return to background rates following the examination surveys. We conclude that BCC that might otherwise go unreported are detected during skin examination surveys, and thus that such skin cancer screening can influence the apparent burden of skin cancer.
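The age-standardized rates quoted above come from direct standardization: each age group's observed rate is weighted by that group's share of a chosen standard population, so rates are comparable across populations with different age structures. A minimal sketch with invented figures, not the Queensland data:

```python
def age_standardized_rate(cases, person_years, std_pop):
    """Directly standardized rate per 100,000 person-years: each age group's
    crude rate (cases / person-years) is weighted by the group's share of a
    standard population, then the weighted rates are summed."""
    total_std = sum(std_pop)
    rate = sum((c / py) * (w / total_std)
               for c, py, w in zip(cases, person_years, std_pop))
    return rate * 100_000

# Two hypothetical age groups with equal standard-population weights:
r = age_standardized_rate(cases=[10, 20], person_years=[1000, 1000],
                          std_pop=[1, 1])  # -> 1500.0 per 100,000 pyar
```

With equal weights this is just the average of the two crude rates (1000 and 2000 per 100,000 pyar); unequal weights shift the standardized rate toward the more heavily weighted age groups.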
Abstract:
Introduction: This study reports on the application of the Manchester Driver Behaviour Questionnaire (DBQ) to examine self-reported driving behaviours (e.g., speeding, errors and aggressive manoeuvres) and predict crash involvement among a sample of general Queensland motorists. Materials and Methods: Surveys were completed by 249 general motorists online or via a pen-and-paper format. Results: A factor analysis revealed a three-factor solution for the DBQ, consistent with previous Australian-based research. It accounted for 40.5% of the total variance, although some cross-loadings were observed on nine of the twenty items. The internal reliability of the DBQ was satisfactory. However, multivariate analysis revealed little ability of the tool to predict crash involvement or demerit point loss (e.g., violation notices). Rather, exposure to the road was found to be predictive of crashes, although speeding did make a small contribution for those who had recently received a violation notice. Conclusions: Taken together, the findings contribute to a growing body of research that raises questions about the predictive ability of the most widely used driving assessment tool globally. Ongoing research (which also includes official crash and offence outcomes) is required to better understand the actual contribution that the DBQ can make to understanding and improving road safety. Future research should also aim to confirm whether this lack of predictive efficacy originates from broader issues inherent in self-report data (e.g., memory recall problems) or from issues underpinning the conceptualisation of the scale.
Abstract:
Environmental acoustic recordings can be used to perform avian species richness surveys, whereby a trained ornithologist can observe the species present by listening to the recording. This could be made more efficient by using computational methods for iteratively selecting the richest parts of a long recording for the human observer to listen to, a process known as “smart sampling”. This allows scaling up to much larger ecological datasets. In this paper we explore computational approaches based on information and diversity of selected samples. We propose to use an event detection algorithm to estimate the amount of information present in each sample. We further propose to cluster the detected events for a better estimate of this amount of information. Additionally, we present a time dispersal approach to estimating diversity between iteratively selected samples. Combinations of approaches were evaluated on seven 24-hour recordings that have been manually labeled by bird watchers. The results show that on average all the methods we have explored would allow annotators to observe more new species in fewer minutes compared to a baseline of random sampling at dawn.
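The iterative "smart sampling" idea above can be illustrated with a simple max-coverage heuristic: repeatedly hand the observer the recording segment that would add the most not-yet-seen event types. This is a sketch of the general greedy strategy only, not the paper's event-detection, clustering, or time-dispersal methods, and the segment names and labels are invented:

```python
def greedy_sample_order(samples: dict) -> list:
    """Order samples so that each pick adds the most unseen event labels.
    `samples` maps a sample name to the set of event types detected in it."""
    remaining = dict(samples)
    seen, order = set(), []
    while remaining:
        # Pick the sample contributing the most labels not yet observed.
        best = max(remaining, key=lambda name: len(remaining[name] - seen))
        order.append(best)
        seen |= remaining.pop(best)
    return order

# Hypothetical one-minute segments and the event types detected in each:
segments = {"dawn": {"a", "b", "c"}, "noon": {"a"}, "dusk": {"c", "d"}}
order = greedy_sample_order(segments)  # -> ["dawn", "dusk", "noon"]
```

The greedy ordering front-loads novelty, which mirrors the paper's evaluation criterion: more new species observed in fewer minutes of listening than random sampling at dawn.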
Abstract:
Rockfish species are notoriously difficult to sample with multispecies bottom trawl survey methods. Typically, biomass estimates have high coefficients of variation and can fluctuate outside the bounds of biological reality from year to year. This variation may be due in part to their patchy distribution related to very specific habitat preferences. We successfully modeled the distribution of five commercially important and abundant rockfish species. A two-stage modeling method (modeling both presence-absence and abundance) and a collection of important habitat variables were used to predict bottom trawl survey catch per unit of effort. The resulting models explained between 22% and 66% of the variation in rockfish distribution. The models were largely driven by depth, local slope, bottom temperature, abundance of coral and sponge, and measures of water column productivity (i.e., phytoplankton and zooplankton). A year-effect in the models was back-transformed and used as an index of the time series of abundance. The abundance index trajectories of three of the five species were similar to the existing estimates of their biomass. In the majority of cases the habitat-based indices exhibited less interannual variability and similar precision when compared with stratified survey-based biomass estimates. These indices may provide stock assessment models with a more stable alternative to the current biomass estimates produced by the multispecies bottom trawl survey in the Gulf of Alaska.
Abstract:
Population assessments seldom incorporate habitat information or use previously observed distributions of fish density. Because habitat affects the spatial distribution of fish density and overall abundance, the use of habitat information and previous estimates of fish density can produce more precise and less biased population estimates. In this study, we describe how poststratification can be applied as an unbiased estimator to data sets that were collected under a probability sampling design, typical of many multispecies trawl surveys. With data from a multispecies survey of juvenile flatfish, we show how poststratification can be applied to a data set that was not collected under a probability sampling design, where both the precision and the bias are unknown. For each of four species, three estimates of total abundance were compared: 1) unstratified; 2) poststratified by habitat; and 3) poststratified by habitat and fish density (high fish density and low fish density) in nearby years. Poststratification by habitat gave more precise and (or) less design-biased estimates than an unstratified estimator for all species in all years. Poststratification by habitat and fish density produced the most precise and representative estimates when the sample sizes in the high fish-density and low fish-density strata were sufficient (in this study, n≥20 in the high fish-density stratum, n≥9 in the low fish-density stratum). Because of the complexities of statistically testing the annual stratified data, we compared three indices of abundance for determining statistically significant changes in annual abundance. Each of the indices closely approximated the annual differences of the poststratified estimates. Selection of the most appropriate index was dependent upon the species’ density distribution within habitat and the sample size in the different habitat areas.
The methods used in this study are particularly useful for estimating individual species abundance from multispecies surveys and for retrospective st
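The poststratification estimator at the heart of this abstract reweights stratum sample means by known stratum sizes after the data are collected. A minimal sketch of the total estimator (the catch values, stratum names, and stratum sizes below are invented; this sketch also assumes every stratum with a known size received at least one sample):

```python
from collections import defaultdict

def poststratified_total(values, strata, stratum_sizes):
    """Poststratified estimate of a population total: sum over strata h of
    N_h * (mean of sampled values falling in h), where N_h is the known
    size (e.g., habitat area in sampling units) of stratum h."""
    sums, counts = defaultdict(float), defaultdict(int)
    for y, h in zip(values, strata):
        sums[h] += y
        counts[h] += 1
    return sum(n_h * sums[h] / counts[h] for h, n_h in stratum_sizes.items())

# Invented example: two habitat strata with known sizes 10 and 5 units.
total = poststratified_total(values=[2.0, 4.0, 10.0],
                             strata=["sand", "sand", "gravel"],
                             stratum_sizes={"sand": 10, "gravel": 5})  # -> 80.0
```

Compared with an unstratified expansion, this weighting corrects for samples landing disproportionately in one habitat, which is why the study finds it more precise and less design-biased.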
Abstract:
Thirteen bottom trawl surveys conducted in Alaska waters for red king crab, Paralithodes camtschaticus, during 1940–61 are largely forgotten today even though they helped define our current knowledge of this resource. Government publications on six exploratory surveys (1940–49, 1957) included sample locations and some catch composition data, but these documents are rarely referenced. Only brief summaries of the other seven annual (1955–61) grid-patterned trawl surveys from the eastern Bering Sea were published. Although there have been interruptions in sampling and some changes in the trawl survey methods, a version of this grid-patterned survey continues through the present day, making it one of the oldest bottom-trawl surveys in U.S. waters. Unfortunately, many of the specific findings made during these early efforts have been lost to the research community. Here, we report on the methods, results, and significance of these early surveys, which were collated from published reports and the unpublished original data sheets so that researchers might begin incorporating this information into stock assessments, ecosystem trend analyses, and perhaps even revise the baseline population distribution and abundance estimates.