893 results for variable sample size


Relevance: 80.00%

Abstract:

ABSTRACT: Workplace absenteeism has a major economic impact on companies and on society in general. It is a difficult problem to manage because it is multifactorial: although most absences are caused by general illness, closer analysis can reveal other factors that lead to a worker's absence and thereby disrupt the normal operation of the company, which makes it essential to study this issue. Objective: To characterize the main causes of absenteeism among the general practitioners of an IPS (health services institution) providing outpatient general medicine services nationwide during 2014. Materials and methods: This is a cross-sectional study of secondary data from the sick-leave records reported by the IPS during 2014. The inclusion criterion was the general practitioners employed during 2014 by the IPS, which provides health services nationwide; maternity and paternity leave were excluded. The final sample size was 202 physicians, and 313 episodes of sick leave were recorded during 2014. Frequency distributions, percentages and the prevalence of sick leave were analysed. Results: During 2014 there were 313 episodes of sick leave in a population of 202 general practitioners, with a higher prevalence among women. The most frequent diagnostic category was "other", which includes migraine, vertigo and breast disorders, with 59 episodes, followed by gastrointestinal diseases with 25 episodes. Conclusions and recommendations: Sick leave was more frequent among women than among men. The most frequent diagnosis was "generic illness or absence of diagnosis". The most frequent duration was one day, with 46 records. The physician with the largest number of episodes had 18 in 2014. The company is advised to monitor repeated sick leave, since it could be related to an occupational disease that has not yet been formally recognized. It is also recommended that the database be supplemented with information such as history of chronic disease and sedentary lifestyle, which would allow further studies of the cardiovascular risk of this population.

Relevance: 80.00%

Abstract:

ABSTRACT: Introduction: The role of newer echocardiographic techniques in the diagnosis of acute myocardial infarction is still being established, and assessment of left ventricular mechanics could suggest the presence of haemodynamically significant coronary artery disease. Objectives: To determine whether, in patients with acute myocardial infarction, measurement of global and regional longitudinal strain predicts the presence of significant coronary artery disease. Methods: This is a diagnostic test study in which the operating characteristics of left ventricular mechanics for detecting significant coronary artery disease were evaluated against cardiac catheterization, taken as the gold standard. Fifty-four patients with acute myocardial infarction who underwent cardiac catheterization were analysed; each had a transthoracic echocardiogram with measurement of global and regional longitudinal strain. Results: Of the 54 patients analysed, 83% had significant coronary artery disease. A global longitudinal strain < -17.5 had a sensitivity of 85% and a specificity of 78% for predicting the presence of coronary artery disease; for the left anterior descending artery, a regional longitudinal strain < -17.4 had a sensitivity of 82% and a specificity of 44%; for the circumflex artery, a sensitivity of 87% and a specificity of 37%; and for the right coronary artery, a sensitivity of 73% and a specificity of 32%. Conclusions: Echocardiography with assessment of ventricular mechanics in patients with acute myocardial infarction is useful for predicting the presence of haemodynamically significant coronary artery disease.
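
The operating characteristics quoted above come from dichotomizing a continuous echocardiographic index against the catheterization result. A minimal sketch of that calculation on hypothetical strain values and angiographic findings (the data, variable names and positivity rule are illustrative, not the study's records):

```python
import numpy as np

def sens_spec(index_test, disease, cutoff):
    """Sensitivity and specificity of a binary rule derived from a continuous index test."""
    positive = index_test > cutoff          # illustrative rule: strain less negative than the cutoff
    tp = np.sum(positive & (disease == 1))
    fn = np.sum(~positive & (disease == 1))
    tn = np.sum(~positive & (disease == 0))
    fp = np.sum(positive & (disease == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: global longitudinal strain (%) and angiography (1 = significant disease).
gls = np.array([-12.0, -14.5, -16.0, -19.0, -21.0, -13.2, -17.0, -18.5, -15.1, -20.2])
cad = np.array([1, 1, 1, 0, 0, 1, 1, 1, 0, 0])
sens, spec = sens_spec(gls, cad, cutoff=-17.5)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```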

Relevance: 80.00%

Abstract:

Interpretation of 1000 Hz tympanometry is not standardized. Several compensated and uncompensated measures were analyzed and compared with otologic findings. Results of auditory brainstem testing and otoacoustic emissions were also considered to better characterize middle ear status. Findings were inconclusive due to the small sample size.

Relevance: 80.00%

Abstract:

The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation and hence the diabatic heating field over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind shear based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic-to-continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation, the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
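
The dominant modes and the PDF of their principal components follow from a standard EOF/principal-component analysis. A small sketch on synthetic anomalies (not the ERA or NCEP-NCAR fields) showing how a leading mode, its explained variance and the skewness of its principal component are obtained:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Synthetic daily anomaly field (time x grid points), a stand-in for reanalysis winds or rainfall.
nt, nx = 1200, 400
signal = -(rng.gamma(2.0, 1.0, nt) - 2.0)            # negatively skewed "active/break" signal
field = rng.standard_normal((nt, nx))
field[:, :200] += signal[:, None]                    # the signal projects onto half the domain

anom = field - field.mean(axis=0)                    # anomalies about the time mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)  # EOFs from the SVD of the anomaly matrix
explained = s**2 / np.sum(s**2)                      # fraction of variance per mode
pc1 = u[:, 0] * s[0]                                 # principal component of the leading mode

# The sign of an EOF/PC pair is arbitrary; in practice it is fixed by a physical convention
# (e.g. positive PC = active monsoon phase) before interpreting the skewness of its PDF.
print(f"variance explained by the leading mode: {explained[0]:.1%}")
print(f"skewness of PC1: {skew(pc1):.2f}")
```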

Relevance: 80.00%

Abstract:

The Representative Soil Sampling Scheme (RSSS) has monitored the soil of agricultural land in England and Wales since 1969. Here we describe the first spatial analysis of the data from these surveys using geostatistics. Four years of data (1971, 1981, 1991 and 2001) were chosen to examine the nutrient (available K, Mg and P) and pH status of the soil. At each farm, four fields were sampled; however, for the earlier years, coordinates were available for the farm only and not for each field. The averaged data for each farm were used for spatial analysis and the variograms showed spatial structure even with the smaller sample size. These variograms provide a reasonable summary of the larger scale of variation identified from the data of the more intensively sampled National Soil Inventory. Maps of kriged predictions of K generally show larger values in the central and southeastern areas (above 200 mg L⁻¹) and an increase in values in the west over time, whereas Mg is fairly stable over time. The kriged predictions of P show a decline over time, particularly in the east, and those of pH show an increase in the east over time. Disjunctive kriging was used to examine temporal changes in available P using probabilities less than given thresholds of this element. The RSSS was not designed for spatial analysis, but the results show that the data from these surveys are suitable for this purpose. The results of the spatial analysis, together with those of the statistical analyses, provide a comprehensive view of the RSSS database as a basis for monitoring the soil. These data should be taken into account when future national soil monitoring schemes are designed.
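
Kriging of the farm-averaged data starts from an experimental variogram. A minimal sketch of the method-of-moments (Matheron) estimator on synthetic farm coordinates and nutrient values (the RSSS data themselves are not used; coordinates, values and lag bins are illustrative):

```python
import numpy as np

def mom_variogram(coords, values, lag_edges):
    """Matheron's method-of-moments estimate of the semivariance in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)           # each pair of sites counted once
    d, sq = d[iu], sq[iu]
    gamma, npairs = [], []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
        npairs.append(int(mask.sum()))
    return np.array(gamma), np.array(npairs)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(150, 2))                       # synthetic farm locations, km
values = 150 + 5 * np.sqrt(coords[:, 0]) + rng.normal(0, 20, 150) # synthetic available-K-like values
gamma, n = mom_variogram(coords, values, lag_edges=np.arange(0, 60, 10))
print(np.round(gamma, 1), n)
```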

Relevance: 80.00%

Abstract:

It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter trend out and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
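
The likelihood-based comparison can be imitated only loosely in standard tools. The sketch below fits a spatial Gaussian process by maximizing the ordinary marginal likelihood (ML rather than REML, and with no trend filtering, so it is a stand-in rather than the paper's method) and scores it by leave-one-out cross-validation on roughly 50 synthetic sites, mirroring the cross-validation used in the study. The kernel, coordinates and clay values are all illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
coords = rng.uniform(0, 500, size=(50, 2))                  # ~50 sampling sites, m
clay = 25 + 0.01 * coords[:, 0] + rng.normal(0, 3, 50)      # synthetic clay content, %

# Likelihood-based fit of the spatial covariance (nugget + spatially correlated component).
kernel = ConstantKernel(10.0) * RBF(length_scale=100.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# Leave-one-out cross-validation: predict each site from the remaining 49.
pred = cross_val_predict(gp, coords, clay, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - clay) ** 2))
print(f"LOO cross-validation RMSE with ~50 sites: {rmse:.2f} % clay")
```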

Relevance: 80.00%

Abstract:

Cost-sharing, which involves government-farmer partnership in the funding of agricultural extension services, is one of the reforms aimed at achieving sustainable funding for extension systems. This study examined the perceptions of farmers and extension professionals on this reform agenda in Nigeria. The study was carried out in the six geopolitical zones of Nigeria. A multi-stage random sampling technique was applied in the selection of respondents. A sample of 268 farmers and 272 Agricultural Development Programme (ADP) extension professionals participated in the study. Both descriptive and inferential statistics were used in analysing the data generated from this research. The results show that the majority of farmers (80.6%) and extension professionals (85.7%) had favourable perceptions of cost-sharing. Furthermore, the overall difference in their perceptions was not significant (t = 0.03). The study concludes that the strong favourable perception held by the respondents is a pointer towards acceptance of the reform. It therefore recommends that government, extension administrators and policymakers design and formulate effective strategies and regulations for the introduction and use of cost-sharing as an alternative approach to financing agricultural technology transfer in Nigeria.
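
The significance test quoted above (t = 0.03) is an independent-samples comparison of mean perception scores. A minimal sketch with the abstract's group sizes but simulated scores (the score scale and distributions are assumptions, not the survey data):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
# Hypothetical perception scores (e.g. averaged Likert items) for the two respondent groups.
farmers = rng.normal(loc=3.9, scale=0.7, size=268)
professionals = rng.normal(loc=3.9, scale=0.6, size=272)

t, p = ttest_ind(farmers, professionals, equal_var=False)   # Welch's two-sample t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```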

Relevance: 80.00%

Abstract:

This paper introduces a simple futility design that allows a comparative clinical trial to be stopped due to lack of effect at any of a series of planned interim analyses. Stopping due to apparent benefit is not permitted. The design is for use when any positive claim should be based on the maximum sample size, for example to allow subgroup analyses or the evaluation of safety or secondary efficacy responses. A final frequentist analysis can be performed that is valid for the type of design employed. Here the design is described and its properties are presented. Its advantages and disadvantages relative to the use of stochastic curtailment are discussed. Copyright (C) 2003 John Wiley & Sons, Ltd.
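
A simulation sketch of a futility-only monitoring rule of the general kind described (the looks, futility bound, effect size and normal-outcome model are assumptions, not the paper's design): the trial may stop early only for lack of effect, and any positive claim is made on the full sample.

```python
import numpy as np
from scipy.stats import norm

def futility_trial(delta, n_per_arm=200, looks=(0.25, 0.5, 0.75), futility_z=0.0,
                   alpha=0.025, rng=None):
    """One two-arm trial with interim futility looks; returns True if it ends with a positive claim."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(delta, 1.0, n_per_arm)        # experimental arm (unit-variance outcome)
    y = rng.normal(0.0, 1.0, n_per_arm)          # control arm
    for frac in looks:
        n = int(frac * n_per_arm)
        z = (x[:n].mean() - y[:n].mean()) / np.sqrt(2.0 / n)
        if z < futility_z:                       # apparent lack of effect: stop, no claim possible
            return False
    # Stopping for benefit is never allowed, so the claim is based on the maximum sample size;
    # a futility rule can only reduce rejections, so the nominal final test remains valid (conservative).
    z_final = (x.mean() - y.mean()) / np.sqrt(2.0 / n_per_arm)
    return z_final > norm.ppf(1 - alpha)

rng = np.random.default_rng(4)
sims = 2000
power = np.mean([futility_trial(0.3, rng=rng) for _ in range(sims)])
type1 = np.mean([futility_trial(0.0, rng=rng) for _ in range(sims)])
print(f"empirical power ≈ {power:.2f}, empirical type I error ≈ {type1:.3f}")
```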

Relevance: 80.00%

Abstract:

Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
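
The paper's construction uses the efficient score and pre-computed stopping boundaries; the sketch below illustrates only the core select-then-continue idea, calibrating the final critical value by brute-force simulation of the null. The arm effects, stage sizes and normal-outcome model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def trial(deltas, n1=50, n2=100):
    """Stage 1: n1 patients per arm on control and each experimental arm; keep the best arm.
    Stage 2: n2 further patients on control and the selected arm only.
    Returns the final z-statistic for the selected arm versus control."""
    ctrl1 = rng.normal(0.0, 1.0, n1)
    exp1 = [rng.normal(d, 1.0, n1) for d in deltas]
    best = int(np.argmax([e.mean() for e in exp1]))          # treatment selection at the first interim
    ctrl = np.concatenate([ctrl1, rng.normal(0.0, 1.0, n2)])
    exp = np.concatenate([exp1[best], rng.normal(deltas[best], 1.0, n2)])
    n = n1 + n2
    return (exp.mean() - ctrl.mean()) / np.sqrt(2.0 / n)

# Selecting the best of three arms inflates the null distribution of the final statistic,
# so the critical value is calibrated by simulating the global null hypothesis.
null_z = np.array([trial([0.0, 0.0, 0.0]) for _ in range(5000)])
crit = np.quantile(null_z, 0.975)
alt_z = np.array([trial([0.1, 0.2, 0.4]) for _ in range(2000)])
print(f"selection-adjusted critical value ≈ {crit:.2f}")
print(f"power against effects (0.1, 0.2, 0.4) ≈ {np.mean(alt_z > crit):.2f}")
```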

Relevance: 80.00%

Abstract:

Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated because this depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially for the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, and gave appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysing and performing sample size calculations based on a generalized linear model (GLM) approach. The power of the Elston et al. and GLM approaches are compared for the binary and normal case using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful. Copyright © 2005 John Wiley & Sons, Ltd.
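
Power for a gene-by-treatment interaction can be estimated by simulation under a linear-model analysis. The sketch below does this for a normally distributed response with a two-level genetic group; the effect sizes, carrier frequency and sample size are illustrative, and this is a generic linear-model check rather than either the Elston et al. or the paper's GLM calculation.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(6)

def interaction_power(n=400, p_carrier=0.3, beta_int=0.4, sigma=1.0, alpha=0.05, sims=2000):
    """Power to detect the interaction in y = b0 + b1*treat + b2*gene + b3*treat*gene + error."""
    rejections = 0
    for _ in range(sims):
        treat = rng.integers(0, 2, n)                         # 1:1 randomisation
        gene = (rng.random(n) < p_carrier).astype(float)      # carrier vs non-carrier
        y = 0.2 * treat + 0.1 * gene + beta_int * treat * gene + rng.normal(0, sigma, n)
        X = np.column_stack([np.ones(n), treat, gene, treat * gene])
        beta, res_ss, *_ = np.linalg.lstsq(X, y, rcond=None)
        df = n - X.shape[1]
        se = np.sqrt((res_ss[0] / df) * np.linalg.inv(X.T @ X)[3, 3])
        t_stat = beta[3] / se                                 # Wald test of the interaction term
        rejections += (2 * t_dist.sf(abs(t_stat), df)) < alpha
    return rejections / sims

print(f"estimated interaction power ≈ {interaction_power():.2f}")
```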

Relevance: 80.00%

Abstract:

Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment, must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
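
The point about variance-based sample-size modification can be illustrated by simulation. The sketch below re-estimates the per-arm sample size at an interim look from the observed variance only (not from the observed treatment effect) and then applies a conventional final z-test, so the empirical type I error can be checked. It is a simplified stand-in for the designs compared in the paper, with illustrative planning values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def reestimated_trial(delta_true, delta_plan=0.5, sigma_plan=1.0, sigma_true=1.5,
                      alpha=0.025, power=0.9):
    """Two-arm trial whose per-arm size is re-estimated at an interim from the observed variance."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    n_plan = int(np.ceil(2 * (z_a + z_b) ** 2 * sigma_plan**2 / delta_plan**2))
    n1 = n_plan // 2                                          # interim after half the planned patients
    x1 = rng.normal(delta_true, sigma_true, n1)
    y1 = rng.normal(0.0, sigma_true, n1)
    s2_interim = (x1.var(ddof=1) + y1.var(ddof=1)) / 2        # variance estimate drives the new size
    n_new = max(n_plan, int(np.ceil(2 * (z_a + z_b) ** 2 * s2_interim / delta_plan**2)))
    x = np.concatenate([x1, rng.normal(delta_true, sigma_true, n_new - n1)])
    y = np.concatenate([y1, rng.normal(0.0, sigma_true, n_new - n1)])
    s2_final = (x.var(ddof=1) + y.var(ddof=1)) / 2
    z = (x.mean() - y.mean()) / np.sqrt(2 * s2_final / n_new)
    return z > z_a

sims = 4000
print(f"empirical type I error ≈ {np.mean([reestimated_trial(0.0) for _ in range(sims)]):.3f}")
print(f"empirical power        ≈ {np.mean([reestimated_trial(0.5) for _ in range(sims)]):.2f}")
```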

Relevance: 80.00%

Abstract:

The proportional odds model provides a powerful tool for analysing ordered categorical data and setting sample size, although for many clinical trials its validity is questionable. The purpose of this paper is to present a new class of constrained odds models which includes the proportional odds model. The efficient score and Fisher's information are derived from the profile likelihood for the constrained odds model. These results are new even for the special case of proportional odds where the resulting statistics define the Mann-Whitney test. A strategy is described involving selecting one of these models in advance, requiring assumptions as strong as those underlying proportional odds, but allowing a choice of such models. The accuracy of the new procedure and its power are evaluated.
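
As noted above, in the proportional odds special case the efficient score defines the Mann-Whitney test. A minimal sketch applying that test to two arms of ordered categorical responses (the category counts are invented for illustration):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Ordered categorical outcome, e.g. 1 = worst ... 5 = best (illustrative counts per category).
control = np.repeat([1, 2, 3, 4, 5], [20, 30, 25, 15, 10])
treated = np.repeat([1, 2, 3, 4, 5], [12, 22, 26, 22, 18])

# The Mann-Whitney (Wilcoxon rank-sum) statistic, with the normal approximation handling
# the heavy ties that ordered categorical data produce.
u, p = mannwhitneyu(treated, control, alternative="two-sided", method="asymptotic")
print(f"U = {u:.0f}, p = {p:.4f}")
```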

Relevance: 80.00%

Abstract:

A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
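
A small simulation makes the intention-to-treat point concrete: with all-or-nothing compliance, non-compliers in the experimental arm contribute control-level risk, so the ITT comparison is diluted. The sketch below estimates ITT power for a binary outcome under assumed event rates and a compliance fraction; it is not the paper's score test or sample size formula.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

def itt_trial(n=300, p_control=0.30, p_treated=0.45, compliance=0.8, alpha=0.05):
    """Binary-outcome trial with all-or-nothing compliance, analysed by intention to treat."""
    assigned_treat = rng.random(n) < 0.5
    complies = rng.random(n) < compliance              # accepts all of the assigned treatment
    receives = assigned_treat & complies               # non-compliers keep the control event rate
    outcome = rng.random(n) < np.where(receives, p_treated, p_control)
    # ITT: compare by assigned arm, regardless of the treatment actually received.
    p1, p0 = outcome[assigned_treat].mean(), outcome[~assigned_treat].mean()
    n1, n0 = assigned_treat.sum(), (~assigned_treat).sum()
    pooled = outcome.mean()
    z = (p1 - p0) / np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return abs(z) > norm.ppf(1 - alpha / 2)

power = np.mean([itt_trial() for _ in range(3000)])
print(f"ITT power with 80% compliance ≈ {power:.2f}")
```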

Relevance: 80.00%

Abstract:

While planning the GAIN International Study of gavestinel in acute stroke, a sequential triangular test was proposed but not implemented. Before the trial commenced it was agreed to reconstruct the sequential design retrospectively in order to evaluate the differences in the resulting analyses, trial durations and sample sizes, and so assess the potential of sequential procedures for future stroke trials. This paper presents four sequential reconstructions of the GAIN study made under various scenarios. For the data as observed, the sequential design would have reduced the trial sample size by 234 patients and shortened its duration by 3 or 4 months. Had the study not achieved a recruitment rate that far exceeded expectation, the advantages of the sequential design would have been much greater. Sequential designs appear to be an attractive option for trials in stroke. Copyright 2004 S. Karger AG, Basel

Relevance: 80.00%

Abstract:

Proportion estimators are quite frequently used in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfying framework. Specifically, we have in mind applications such as the weighted risk difference in multicenter trials or stratified risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate p using the parametric family p̂_c = (X + c)/(n + 2c), and p(1 - p) using p̂_c(1 - p̂_c), where X denotes the number of events observed in n trials. We investigate the problem of choosing c ≥ 0 from various perspectives, including minimizing the average mean squared error of p̂_c and the average bias and average mean squared error of p̂_c(1 - p̂_c). The optimal value of c for minimizing the average mean squared error of p̂_c is found to be independent of n and equal to c = 1. The optimal value of c for minimizing the average mean squared error of p̂_c(1 - p̂_c) is found to depend on n, with limiting value c = 0.833. This may justify using the near-optimal value c = 1 in practice, which also turns out to be beneficial when constructing confidence intervals based on p̂_c and its estimated variance.
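
Assuming the family p̂_c = (X + c)/(n + 2c) as above, the averaging argument can be checked numerically. A short sketch that computes the average (over p uniform on [0, 1]) mean squared error of p̂_c for several values of c, with the minimum appearing at c = 1:

```python
import numpy as np
from scipy.stats import binom

def avg_mse(c, n, p_grid=np.linspace(0.001, 0.999, 999)):
    """Average (over p ~ Uniform(0,1)) mean squared error of p_hat = (X + c)/(n + 2c)."""
    x = np.arange(n + 1)
    p_hat = (x + c) / (n + 2 * c)
    mse = np.array([np.sum(binom.pmf(x, n, p) * (p_hat - p) ** 2) for p in p_grid])
    return mse.mean()

n = 10
for c in [0.0, 0.5, 0.833, 1.0, 1.5, 2.0]:
    print(f"c = {c:5.3f}   average MSE of p_hat = {avg_mse(c, n):.5f}")
```

Replacing the squared error of p̂_c with that of p̂_c(1 - p̂_c) as an estimator of p(1 - p) in the same loop gives the n-dependent optimum discussed in the text.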