781 results for design-based survey sampling
Abstract:
This multicentric population-based study in Brazil is the first national effort to estimate the prevalence of hepatitis B virus (HBV) infection and risk factors in the capital cities of the Northeast and Central-West regions and in the Federal District (2004-2005). Random multistage cluster sampling was used to select persons 13-69 years of age. Markers for HBV were tested by enzyme-linked immunosorbent assay. HBV genotypes were determined by sequencing hepatitis B surface antigen (HBsAg). Multivariate analyses and a simple catalytic model were performed. Overall, 7,881 persons were included; approximately 70% were not vaccinated. Positivity for HBsAg was less than 1% among non-vaccinated persons, and genotypes A, D, and F co-circulated. The incidence of infection increased with age, with a similar force of infection in all regions. Male sex and having initiated sexual activity were associated with HBV infection in the two settings; healthcare jobs and prior hospitalization were risk factors in the Federal District. Our survey classified these regions as areas of low HBV endemicity and highlighted the differences in risk factors among the settings.
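For context, the "simple catalytic model" invoked above is, in its standard textbook form, a one-parameter model of cumulative infection; the notation below is ours, not the abstract's:

```latex
% Simple catalytic model with a constant force of infection \lambda:
% the expected proportion of individuals of age a ever infected is
P(a) = 1 - e^{-\lambda a}
% Fitting P(a) to age-stratified seroprevalence yields \hat{\lambda};
% a common \lambda across regions corresponds to the "similar force
% of infection in all regions" reported above.
```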
Abstract:
Risk management, carried out effectively, leads software development to success and may influence the whole organization. Knowledge is part of this process as a means of supporting decision making. This research aimed to analyze the use of Knowledge Management techniques in Risk Management for software development projects and their possible influence on enterprise revenue. Its main subjects of study were Brazilian incubated and graduated software development enterprises. The chosen research method was a survey. Multivariate statistical methods were used for the treatment and analysis of the results, identifying the most significant factors, that is, those constraining enterprise achievement and those driving achieved outcomes. Among the latter we highlight the knowledge methodology, the age of the enterprise, the number of employees, and knowledge externalization. The results encourage actions that contribute to increasing financial revenue. © 2013 Springer-Verlag.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance.
2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use.
3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated.
4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance.
5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap.
6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat modeling, and information about accessing the analysis engines directly from other software.
7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described, making the methods accessible to practicing ecologists.
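As an illustration of the "conventional distance sampling" engine described in point 4, the following is a minimal sketch in Python (not the Distance software itself) of fitting a half-normal detection function to perpendicular distances and deriving a density estimate; the distances, truncation distance, and survey effort are invented for the example:

```python
# Minimal sketch of conventional distance sampling (CDS) with a half-normal
# detection function g(x) = exp(-x^2 / (2 sigma^2)); illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

distances = np.array([1.2, 3.5, 0.4, 7.8, 2.2, 5.1, 0.9, 4.4, 6.0, 1.7])  # m
w = 10.0    # truncation distance (strip half-width), m
L = 5000.0  # total transect length, m

def neg_log_lik(sigma):
    # Likelihood of perpendicular distances under a half-normal detection
    # function truncated at w: f(x) = g(x) / integral_0^w g(u) du.
    g = np.exp(-distances**2 / (2.0 * sigma**2))
    mu = sigma * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma) - 0.5)
    return -np.sum(np.log(g / mu))

sigma_hat = minimize_scalar(neg_log_lik, bounds=(0.1, 100.0),
                            method="bounded").x
mu_hat = sigma_hat * np.sqrt(2.0 * np.pi) * (norm.cdf(w / sigma_hat) - 0.5)
p_hat = mu_hat / w                              # mean detection probability
D_hat = len(distances) / (2.0 * w * L * p_hat)  # objects per square metre
print(f"sigma={sigma_hat:.2f} m, p={p_hat:.2f}, D={D_hat:.6f} objects/m^2")
```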
Abstract:
Purpose: Trachoma, a blinding conjunctivitis, is the result of repeated infection with Chlamydia trachomatis. There are no recent data for the state of Roraima, Brazil, where it was thought that trachoma no longer existed. These data are derived from school children sampled in this state, with additional data collected from the contacts of children with trachoma. Design: A population-based cross-sectional study with random sampling of students in grades 1 through 4 of all public schools within municipalities where the human development index was less than the national average in 2003. The sample was stratified according to population size. Participants: A sample size of 7200 was determined, and a total of 6986 (93%) students were examined, along with an additional 2152 contacts. Methods: All students were examined for trachoma according to World Health Organization criteria. Demographic data and contact information were also collected. The family and school contacts of students with trachoma were then located and examined. Main Outcome Measures: Prevalence and grade of trachoma, age, gender, race, and municipality location. Results: The overall prevalence of trachoma was 4.5% (95% confidence interval [CI], 3.7%-5.3%), but there were municipalities within the state where the prevalence of inflammatory trachoma was more than 10%. The prevalence was greater in rural areas (4.9%; 95% CI, 3.7%-6.0%) than in urban areas (3.9%; 95% CI, 2.9%-4.9%). Living in indigenous communities was associated with trachoma (odds ratio, 1.6; 95% CI, 0.9-2.6). Among the additional 2152 contacts examined, the overall trachoma prevalence was 9.3% (95% CI, 8.1%-10.5%). Conclusions: Trachoma continues to exist in Roraima, Brazil, where there are municipalities with a significant prevalence of disease. The indigenous population is highly mobile, crossing state and international borders, raising the possibility of trachoma in neighboring countries. Trachoma prevalence among the contacts of students with trachoma was higher than in the school population, highlighting the importance of contact tracing.
Abstract:
In this thesis, we propose a novel approach to modelling the diffusion of residential PV systems. For this purpose, we use an agent-based model whose agents are the families living in the area of interest. The case study is the Emilia-Romagna Regional Energy Plan, which aims to increase the production of electricity from renewable sources. We therefore study the microdata from the Survey on Household Income and Wealth (SHIW) provided by the Bank of Italy in order to obtain the characteristics of families living in Emilia-Romagna. These data have allowed us to artificially generate families and reproduce the socio-economic aspects of the region. The generated families are placed in the virtual world by associating them with buildings, which are obtained by analysing the vector data of regional buildings made available by the region. Each year, the model determines the level of diffusion by simulating the installed capacity. The adoption behaviour is influenced by social interactions, the household's economic situation, the environmental benefits arising from adoption, and the payback period of the investment.
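As a rough illustration of the payback-period signal such agents might evaluate, here is a hypothetical sketch in Python; every parameter value (system cost, tariff, self-consumption share, yield) is an assumption for the example, not a figure from the thesis:

```python
# Hypothetical sketch of the payback-period signal an adopting agent might
# evaluate; all parameter values are assumptions for illustration.

def payback_years(capacity_kw,
                  cost_per_kw=1500.0,        # installed cost, EUR/kW (assumed)
                  yield_kwh_per_kw=1100.0,   # annual yield, kWh/kW (assumed)
                  self_consumption=0.35,     # share of output used on site
                  tariff_eur_kwh=0.22,       # retail price of avoided purchases
                  feed_in_eur_kwh=0.05):     # remuneration for exported energy
    """Simple (undiscounted) payback period of a residential PV system."""
    investment = capacity_kw * cost_per_kw
    annual_kwh = capacity_kw * yield_kwh_per_kw
    annual_saving = annual_kwh * (self_consumption * tariff_eur_kwh
                                  + (1 - self_consumption) * feed_in_eur_kwh)
    return investment / annual_saving

print(f"3 kW system pays back in {payback_years(3.0):.1f} years")
```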
Abstract:
Image-based Relighting (IBRL) has recently attracted a lot of research interest for its ability to relight real objects or scenes under novel illuminations captured in natural or synthetic environments. Complex lighting effects such as subsurface scattering, interreflection, shadowing, mesostructural self-occlusion, refraction and other relevant phenomena can be generated using IBRL. The main advantage of image-based graphics is that the rendering time is independent of scene complexity, as rendering is actually a process of manipulating image pixels instead of simulating light transport. The goal of this paper is to provide a complete and systematic overview of the research in Image-based Relighting. We observe that essentially all IBRL techniques can be broadly classified into three categories (Fig. 9), based on how the scene/illumination information is captured: Reflectance function-based, Basis function-based and Plenoptic function-based. We discuss the characteristics of each of these categories and their representative methods. We also discuss the sampling density and types of light source(s), and other relevant issues of IBRL.
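The claim that rendering time is independent of scene complexity follows because relighting reduces to per-pixel arithmetic on captured images. A minimal sketch of the reflectance function-based idea, with all array shapes and data invented for illustration:

```python
# Minimal sketch of reflectance function-based relighting: each pixel's
# response to a set of basis lights is captured once; relighting under a
# novel illumination is then a weighted sum of the captured basis images.
import numpy as np

h, w, n_lights = 64, 64, 16
# Captured basis images stacked as a per-pixel reflectance field:
# R[p, i] = response of pixel p to basis light i.
R = np.random.rand(h * w, n_lights)
# Novel illumination expressed as weights over the same light basis.
novel_light = np.random.rand(n_lights)
# Relighting is pure pixel arithmetic: cost depends on image size and the
# number of basis lights, not on the geometric complexity of the scene.
relit = (R @ novel_light).reshape(h, w)
```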
Abstract:
I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample χ²-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440–464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221–230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov–Smirnov test for discrete data.
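For reference, the power-divergence family cited here is, in the standard notation of Cressie and Read (1984), with observed counts $O_i$ and expected counts $E_i$:

```latex
2nI^{\lambda} = \frac{2}{\lambda(\lambda+1)} \sum_{i} O_i
  \left[ \left( \frac{O_i}{E_i} \right)^{\lambda} - 1 \right]
```

Setting $\lambda = 1$ recovers Pearson's $X^2$, and the limit $\lambda \to 0$ gives the likelihood-ratio statistic.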
Abstract:
We consider the problem of developing efficient sampling schemes for multiband sparse signals. Previous results on multicoset sampling implementations that lead to universal sampling patterns (which guarantee perfect reconstruction) are based on a set of appropriately interleaved analog-to-digital converters, all of them operating at the same sampling frequency. In this paper we propose an alternative multirate synchronous implementation of multicoset codes; that is, all the analog-to-digital converters in the sampling scheme operate at different sampling frequencies, without the need to introduce any delay. The interleaving is achieved through the use of different rates, whose sum is significantly lower than the Nyquist rate of the multiband signal. To obtain universal patterns, the sampling matrix is formulated and analyzed. Appropriate choices of the parameters, that is, the block length and the sampling rates, are also proposed.
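For context, the classic single-rate multicoset scheme that the paper modifies can be written as follows (standard notation from the multicoset literature, not taken from this paper):

```latex
% With Nyquist period T, block length L, and p < L coset offsets
% 0 \le c_1 < \dots < c_p \le L - 1, channel i retains the samples
y_i[n] = x\big((nL + c_i)\,T\big), \qquad i = 1, \dots, p,
% so each channel runs at rate 1/(LT) and the aggregate rate p/(LT)
% remains below the Nyquist rate 1/T.
```

The proposal described above replaces these equal-rate, delayed channels with synchronous channels operating at different rates.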
Abstract:
A new Stata command called -mgof- is introduced. The command is used to compute distributional tests for discrete (categorical, multinomial) variables. Apart from classic large sample $\chi^2$-approximation tests based on Pearson's $X^2$, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read 1984), large sample tests for complex survey designs and exact tests for small samples are supported. The complex survey correction is based on the approach by Rao and Scott (1981) and parallels the survey design correction used for independence tests in -svy:tabulate-. The exact tests are computed using Monte Carlo methods or exhaustive enumeration. An exact Kolmogorov-Smirnov test for discrete data is also provided.
Abstract:
"NCES 96-089."
Abstract:
Context: Population-based screening has been advocated for subclinical thyroid dysfunction in the elderly because the disorder is perceived to be common, and health benefits may be accrued by detection and treatment. Objective: The objective of the study was to determine the prevalence of subclinical thyroid dysfunction and unidentified overt thyroid dysfunction in an elderly population. Design, Setting, and Participants: A cross-sectional survey of a community sample of participants aged 65 yr and older registered with 20 family practices in the United Kingdom. Exclusions: Current therapy for thyroid disease, thyroid surgery, or treatment within 12 months. Outcome Measure: Tests of thyroid function (TSH concentration and free T4 concentration in all, with measurement of free T3 in those with low TSH) were conducted. Explanatory Variables: These included all current medical diagnoses and drug therapies, age, gender, and socioeconomic deprivation (Index of Multiple Deprivation, 2004). Analysis: Standardized prevalence rates were analyzed. Logistic regression modeling was used to determine factors associated with the presence of subclinical thyroid dysfunction. Results: A total of 5,960 participants attended for screening. Using biochemical definitions, 94.2% [95% confidence interval (CI) 93.8-94.6%] were euthyroid. Unidentified overt hyper- and hypothyroidism were uncommon (0.3% and 0.4%, respectively). Subclinical hyperthyroidism and hypothyroidism were identified with similar frequency (2.1%, 95% CI 1.8-2.3%, and 2.9%, 95% CI 2.6-3.1%, respectively). Subclinical thyroid dysfunction was more common in females (P < 0.001) and with increasing age (P < 0.001). After allowing for comorbidities, concurrent drug therapies, age, and gender, an association between subclinical hyperthyroidism and a composite measure of socioeconomic deprivation remained. Conclusions: Undiagnosed overt thyroid dysfunction is uncommon. The prevalence of subclinical thyroid dysfunction is 5%. We have, for the first time, identified an independent association between the prevalence of subclinical thyroid dysfunction and deprivation that cannot be explained solely by the greater burden of chronic disease and/or consequent drug therapies in the deprived population. Copyright © 2006 by The Endocrine Society.
Abstract:
An investigation was carried out of the different approaches used by Expert Systems researchers to solve problems in the domain of Mechanical Design and Expert Systems. The techniques used for conventional formal logic programming were compared with those used when applying Expert Systems concepts. A literature survey of design processes was also conducted with a view to adopting a suitable model of the design process. A model, comprising a variation on two established ones, was developed and applied to a problem within what are described as class 3 design tasks. The research explored the application of these concepts to Mechanical Engineering Design problems and their implementation on a microcomputer using an Expert System building tool. It was necessary to explore the use of Expert Systems in this manner so as to bridge the gap between their use as a control structure and their use for detailed analytical design. The former application is well researched, and this thesis discusses the latter. Some Expert System building tools available to the author at the beginning of his work were evaluated specifically for their suitability for Mechanical Engineering design problems. Microsynics was found to be the most suitable on which to implement a design problem because of its simple but powerful Semantic Net Knowledge Representation structure and its ability to use other types of representation schemes. Two major implementations were carried out: the first involved a design program for a helical compression spring, and the second a gear-pair system design. Two concepts were proposed in the thesis for the modelling and implementation of design systems involving many equations. The method proposed enables equation manipulation and analysis using a combination of frames, semantic nets and production rules. The use of semantic nets for purposes other than psychology and natural language interpretation is quite new and represents one of the major contributions to knowledge by the author. The development of a purpose-built shell program for this type of design problem was recommended as an extension of the research. Microsynics may usefully be used as a platform for this development.
Abstract:
Purpose: In today's competitive scenario, effective supply chain management is increasingly dependent on third-party logistics (3PL) companies' capabilities and performance. The dissemination of information technology (IT) has contributed to change the supply chain role of 3PL companies and IT is considered an important element influencing the performance of modern logistics companies. Therefore, the purpose of this paper is to explore the relationship between IT and 3PLs' performance, assuming that logistics capabilities play a mediating role in this relationship. Design/methodology/approach: Empirical evidence based on a questionnaire survey conducted on a sample of logistics service companies operating in the Italian market was used to test a conceptual resource-based view (RBV) framework linking IT adoption, logistics capabilities and firm performance. Factor analysis and ordinary least square (OLS) regression analysis have been used to test hypotheses. The focus of the paper is multidisciplinary in nature; management of information systems, strategy, logistics and supply chain management approaches have been combined in the analysis. Findings: The results indicate strong relationships among data gathering technologies, transactional capabilities and firm performance, in terms of both efficiency and effectiveness. Moreover, a positive correlation between enterprise information technologies and 3PL financial performance has been found. Originality/value: The paper successfully uses the concept of logistics capabilities as mediating factor between IT adoption and firm performance. Objective measures have been proposed for IT adoption and logistics capabilities. Direct and indirect relationships among variables have been successfully tested. © Emerald Group Publishing Limited.
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors.

Although, like all survey errors, coverage error ("the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey" [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. Indicating that familiarity with information technologies is increasing, these trends suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing, positively reinforce the advantages of online-questionnaire delivery.

The second error type, the non-response error, occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today's societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be addressed relatively easily. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation (and indeed, to provide respondents with context-sensitive assistance during the response process) and thereby reduce abandonment while eliciting feelings of accomplishment [6].

For online questionnaires, sampling error ("the result of attempting to survey only some, and not all, of the units in the survey population" [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors ("the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained" [2, pg. 11]) will lead to respondents becoming confused and frustrated.

Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online-questionnaire delivery will not be fully realized. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.