897 results for sample size calculation


Relevance:

90.00%

Abstract:

Sample size calculations are advocated by the CONSORT group to justify sample sizes in randomized controlled trials (RCTs). The aim of this study was primarily to evaluate the reporting of sample size calculations, to establish the accuracy of these calculations in dental RCTs and to explore potential predictors associated with adequate reporting. Electronic searching was undertaken in eight leading specialty and general dental journals. Replication of sample size calculations was undertaken where possible. Assumed variances or odds for control and intervention groups were also compared against those observed. The relationship between parameters including journal type, number of authors, trial design, involvement of a methodologist, single-/multi-center status, region and year of publication, and the accuracy of sample size reporting was assessed using univariable and multivariable logistic regression. Of 413 RCTs identified, only 121 studies (29.3%) provided sufficient information to allow replication of the sample size calculation. Recalculations demonstrated an overall median overestimation of sample size of 15.2% after provision for losses to follow-up. There was evidence that journal, methodologist involvement (OR = 1.97, CI: 1.10, 3.53), multi-center setting (OR = 1.86, CI: 1.01, 3.43) and time since publication (OR = 1.24, CI: 1.12, 1.38) were significant predictors of adequate description of sample size assumptions. Among the journals, JCP had the highest odds of reporting sufficient data to permit sample size recalculation, followed by AJODO and JDR, with 61% (OR = 0.39, CI: 0.19, 0.80) and 66% (OR = 0.34, CI: 0.15, 0.75) lower odds, respectively. Both assumed variances and odds were found to underestimate the observed values. Presentation of sample size calculations in the dental literature is suboptimal; incorrect assumptions may have a bearing on the power of RCTs.
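A replication of the kind performed in this review can be sketched with the standard normal-approximation sample size formula for comparing two proportions. The sketch below is illustrative only: the 30%/15% event rates and the 10% provision for losses to follow-up are invented, not taken from any of the reviewed trials.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def n_per_group(p1, p2, alpha=0.05, power=0.80, dropout=0.0):
    """Per-arm sample size to detect p1 vs p2 with a two-sided z-test
    (normal approximation), inflated for anticipated loss to follow-up."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n / (1 - dropout))         # provision for dropouts

# Hypothetical trial assumptions: 30% vs 15% event rates, 10% dropout.
print(n_per_group(0.30, 0.15, dropout=0.10))    # 131 per arm
```

Checking a published calculation then amounts to re-running the formula with the paper's stated assumptions and comparing against the sample size it reports.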

Relevance:

90.00%

Abstract:

OBJECTIVE This retrospective observational pilot study examined differences in peri-implant bone level changes (ΔIBL) between two similar implant types differing only in the surface texture of the neck. The hypothesis tested was that ΔIBL would be greater with machined-neck implants than with grooved-neck implants. METHOD AND MATERIALS 40 patients were enrolled: n = 20 implants with a machined neck (group 1) and n = 20 implants with a rough, grooved neck (group 2), all placed in the posterior mandible. Radiographs were obtained after loading (at 3 to 9 months) and at 12 to 18 months after implant insertion. A case number (sample size) calculation with respect to ΔIBL was conducted. Groups were compared using a Brunner-Langer model, the Mann-Whitney test, the Wilcoxon signed rank test, and linear model analysis. RESULTS After the 12- to 18-month observation period, mean ΔIBL was -1.11 ± 0.92 mm in group 1 and -1.25 ± 1.23 mm in group 2. ΔIBL depended significantly on time (P < .001), but not on group. In both groups, mean marginal ΔIBL remained significantly above -1.5 mm. Only insertion depth had a significant influence on the amount of peri-implant bone loss (P = .013). A case number estimate for testing a difference between groups 1 and 2 with a power of 90% yielded a sample size of 1,032 subjects per group. CONCLUSION ΔIBL values indicated that both implant designs fulfilled implant success criteria, and the modification of implant neck texture had no significant influence on ΔIBL.
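For a continuous endpoint such as ΔIBL, the usual two-sample formula reproduces the order of magnitude of the case number estimate quoted above. Pooling the two reported SDs is our own assumption, so this is an illustration rather than a reconstruction of the authors' calculation.

```python
import math
from statistics import NormalDist

def n_per_group_means(delta, sd, alpha=0.05, power=0.90):
    """Per-group sample size to detect a mean difference `delta` with
    common standard deviation `sd` (two-sided two-sample z-test)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2)

# Reported means: -1.11 vs -1.25 mm (difference 0.14 mm); SDs 0.92 and 1.23 mm.
pooled_sd = math.sqrt((0.92 ** 2 + 1.23 ** 2) / 2)
print(n_per_group_means(0.14, pooled_sd))  # ~1,265 per group, the same order
# as the reported 1,032; the exact figure depends on the variance model used.
```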

Relevance:

90.00%

Abstract:

This paper examines how the geospatial accuracy of samples and sample size influence conclusions from geospatial analyses, using the example of a study investigating the global phenomenon of large-scale land acquisitions and the socio-ecological characteristics of the areas they target. First, we analysed land deal datasets of varying geospatial accuracy and varying sizes and compared the results in terms of land cover, population density, and two indicators of agricultural potential: yield gap and availability of uncultivated land suitable for rainfed agriculture. We found that an increase in geospatial accuracy changed conclusions about the targeted land cover types substantially more than an increase in sample size did, suggesting that a sample of higher geospatial accuracy does more to improve results than a larger sample. The same finding emerged for population density, yield gap, and the availability of uncultivated land suitable for rainfed agriculture. Furthermore, the median proved more consistent than the mean when comparing descriptive statistics across datasets of different geospatial accuracy. Second, we analysed the effects of geospatial accuracy on estimates of the potential for advancing agricultural development in target contexts. Our results show that for the majority of land deals in our sample whose geolocation is known with high accuracy, the target contexts contain smaller amounts of suitable yet uncultivated land than regional- and national-scale averages suggest. Consequently, the more target contexts vary within a country, the more detailed the spatial scale of analysis has to be in order to draw meaningful conclusions about the phenomena under investigation. We therefore advise against using national-scale statistics to approximate or characterize phenomena that have a local-scale impact, particularly if key indicators vary widely within a country.

Relevance:

90.00%

Abstract:

The purpose of this research is to develop a new statistical method to determine the minimum set of rows (R) in an R × C contingency table of discrete data that explains the dependence of observations. The statistical power of the method will be determined empirically by computer simulation to judge its efficiency relative to existing methods. The method will be applied to data on DNA fragment length variation at six VNTR loci in over 72 populations from five major human racial groups (a total sample size of over 15,000 individuals, with each sample comprising at least 50 individuals). DNA fragment lengths grouped in bins will form the basis for studying inter-population DNA variation within the racial groups; where such variation is significant, the method will provide a rigorous re-binning procedure for forensic computation of DNA profile frequencies that takes intra-racial DNA variation among populations into account.
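The abstract does not describe the algorithm itself. As a rough, hypothetical stand-in for the problem being posed (not the method proposed in this research), a greedy heuristic can repeatedly remove the row whose exclusion most weakens the chi-square evidence of dependence; the removed rows then form a small set that 'explains' the dependence.

```python
import numpy as np
from scipy.stats import chi2_contingency

def explaining_rows(table, alpha=0.05):
    """Greedy sketch: peel off rows until the chi-square test no longer
    rejects independence in the remaining R x C table; the removed rows
    are a candidate set 'explaining' the observed dependence."""
    rows = list(range(table.shape[0]))
    removed = []
    while len(rows) > 2:
        _, p, _, _ = chi2_contingency(table[rows])
        if p >= alpha:          # independence no longer rejected
            break
        # remove the row whose exclusion most increases the p-value
        best = max(rows, key=lambda r: chi2_contingency(
            table[[q for q in rows if q != r]])[1])
        rows.remove(best)
        removed.append(best)
    return removed

# Toy 4 x 3 table in which the last row deviates from the common pattern.
toy = np.array([[50, 30, 20], [48, 32, 20], [52, 29, 19], [20, 30, 50]])
print(explaining_rows(toy))     # [3]
```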

Relevance:

90.00%

Abstract:

Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures, and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents representative of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes in these simulations and define regulatory acceptance criteria (RAC), i.e., restrictions that the values of those magnitudes must fulfill. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions and are supplemented with an uncertainty analysis of their main results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and they typically represent uncertainty probabilistically.

For conservative methodologies, the RAC simply restrict the calculated values of the safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. This Thesis develops the way uncertainty is introduced into the RAC. Basically, confinement to the same regulator-defined acceptance region is retained, but strict compliance is replaced by compliance with a high level of certainty, understood here as a high probability of fulfillment, corresponding to the calculation uncertainty of the safety magnitudes. That uncertainty is regarded as originating in the inputs to the computational model and propagated through the model; the uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level).

Calculation uncertainty is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). This ignorance about the action of the model is called epistemic uncertainty; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly: it is itself an uncertain quantity. This justifies another term used here for the epistemic uncertainty: metauncertainty.

The RAC must therefore incorporate two types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (epistemic, or metauncertainty). The two uncertainties can be introduced either separately or combined; in both cases the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the second-order case, the regulator must impose a second level of compliance, referring to the epistemic uncertainty, termed the regulatory confidence level, a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The Thesis argues that the best way to construct the BEPU RAC is to separate the uncertainties, for two reasons: experts advocate the separate treatment of aleatory and epistemic uncertainty, and the separated RAC is (except in exceptional cases) more conservative than the combined one.

The BEPU RAC is nothing but a hypothesis about a probability distribution, and it is checked statistically. The Thesis classifies the statistical methods for checking the BEPU RAC into three categories: those based on the construction of tolerance regions, on quantile estimation, and on probability estimation (whether of compliance or of exceedance of regulatory limits). Following recently proposed terminology, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to inventory the numerous and varied methods in each category, but to relate the categories and to cite the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used to date, Wilks' nonparametric method, together with Wald's extension of it to the multidimensional case; its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU realm, is also described.

In this context, the computational cost of uncertainty analysis is addressed. The Wilks, Wald, and Clopper-Pearson methods require a minimum size for the random sample used, which grows with the required tolerance level. The sample size is an indicator of computational cost, because each sample element is a value of the safety magnitude that requires a run of the predictive model (code). Particular emphasis is placed on the computational cost when the safety magnitude is multidimensional, i.e., when the RAC is a multiple criterion. It is shown that when the components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be attacked through Wald's extension, whose computational cost grows with the dimension of the problem. In the case (which sometimes occurs) where each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided.

The first BEPU methodologies propagated uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The goal of the metamodel is not predictive capability, which is clearly inferior to that of the original model, but to replace the original model exclusively in the propagation of uncertainties. To this end, the metamodel must be built from the input parameters that contribute most to the uncertainty of the result, which requires a prior importance (sensitivity) analysis. Because of its simplicity, the surrogate model involves hardly any computational cost and can be studied exhaustively, for instance by random sampling. As a consequence, the epistemic uncertainty (metauncertainty) practically disappears, and the BEPU criterion for metamodels reduces to a single probability.

In brief, the regulator will more readily accept statistical methods that need fewer assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian.

The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can be read not only as a probability of success or a degree of compliance with the RAC, but also metrically: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation motivates a definition proposed in this Thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it can be combined according to the laws of probability, and it generalizes easily to several dimensions; a basic property is that it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); from the real value of a magnitude to its calculated value (analytical margin); or from a regulatory limit to the damage threshold of a barrier (barrier margin).

This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservatism (DC) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions.

A topic never rigorously addressed to date is the validation of BEPU methodologies. Like any other calculational tool, a methodology must be validated before it can be applied in licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such comparisons can only be made for accident scenarios in which measured values of the safety magnitudes exist, which is basically the case in experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, not only by the calculated ones. The Thesis proves that a sufficient condition for this ultimate goal is the joint fulfillment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation; the validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants.

The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DC). These minimum levels are essentially complementary: the higher one of them, the lower the other. Current regulatory practice imposes a high licensing margin, which means the required DC is small. Adopting lower values of P0 would imply weaker requirements on RAC fulfillment and, in exchange, stronger requirements on the DC of the methodology. It is important to stress that the higher the minimum value of a margin (licensing or analytical), the higher the computational cost of demonstrating it, so the computational efforts are also complementary. If an intermediate value of P0 is adopted, the required DC is also intermediate, the methodology need not be very conservative, and the total computational cost (licensing plus validation) can be optimized.
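The minimum sample sizes required by Wilks-type methods follow from a short binomial computation; the sketch below reproduces the standard figures (a textbook result, not material specific to the thesis).

```python
import math

def wilks_sample_size(p0=0.95, confidence=0.95, m=1):
    """Smallest n such that the m-th largest of n code runs is a one-sided
    upper tolerance limit covering the p0-quantile of the output with the
    given confidence (Wilks' nonparametric method)."""
    n = m
    while True:
        # P[Binomial(n, p0) <= n - m]: probability that the m-th largest
        # sampled value exceeds the p0-quantile of the safety magnitude
        conf = sum(math.comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
                   for k in range(n - m + 1))
        if conf >= confidence:
            return n
        n += 1

print(wilks_sample_size())       # 59: the classic 95/95 first-order result
print(wilks_sample_size(m=2))    # 93: second-order 95/95
```

Each of those n runs is a full execution of the predictive code, which is why the sample size is a direct proxy for computational cost.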

Relevance:

90.00%

Abstract:

Background Reliable information on causes of death is a fundamental component of health development strategies, yet globally only about one-third of countries have access to such information. For countries currently without adequate mortality reporting systems there are useful models other than resource-intensive population-wide medical certification. Sample-based mortality surveillance is one such approach. This paper provides methods for addressing appropriate sample size considerations in relation to mortality surveillance, with particular reference to situations in which prior information on mortality is lacking. Methods The feasibility of model-based approaches for predicting the expected mortality structure and cause composition is demonstrated for populations in which only limited empirical data is available. An algorithm approach is then provided to derive the minimum person-years of observation needed to generate robust estimates for the rarest cause of interest in three hypothetical populations, each representing different levels of health development. Results Modelled life expectancies at birth and cause of death structures were within expected ranges based on published estimates for countries at comparable levels of health development. Total person-years of observation required in each population could be more than halved by limiting the set of age, sex, and cause groups regarded as 'of interest'. Discussion The methods proposed are consistent with the philosophy of establishing priorities across broad clusters of causes for which the public health response implications are similar. The examples provided illustrate the options available when considering the design of mortality surveillance for population health monitoring purposes.
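The core of such a calculation can be sketched with Poisson reasoning: if d deaths from a cause are expected, the relative standard error (RSE) of the estimated rate is roughly 1/√d, so the required person-years follow from the rate of the rarest cause of interest. The 0.05-per-1,000 rate and 20% RSE target below are illustrative assumptions, not figures from the paper, whose algorithm is more elaborate.

```python
import math

def person_years_needed(rate_per_1000, rse_target=0.20):
    """Person-years of observation needed so that a cause-specific
    mortality rate (per 1,000 person-years) is estimated within the
    target relative standard error, assuming Poisson death counts."""
    deaths_needed = math.ceil((1 / rse_target) ** 2)  # RSE = 1/sqrt(deaths)
    return deaths_needed / (rate_per_1000 / 1000)

# Rarest cause of interest at 0.05 deaths per 1,000 person-years:
print(person_years_needed(0.05))   # 500,000 person-years
```

Restricting the set of age, sex, and cause groups 'of interest' raises the rarest rate that must be resolved, which is why the paper finds the required person-years can be more than halved.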

Relevance:

90.00%

Abstract:

Registration of births, recording deaths by age, sex and cause, and calculating mortality levels and differentials are fundamental to evidence-based health policy, monitoring and evaluation. Yet few of the countries with the greatest need for these data have functioning systems to produce them, despite legislation providing for the establishment and maintenance of vital registration. Sample vital registration (SVR), when applied in conjunction with validated verbal autopsy procedures and implemented in a nationally representative sample of population clusters, represents an affordable, cost-effective, and sustainable short- and medium-term solution to this problem. SVR complements other information sources by producing age-, sex-, and cause-specific mortality data that are more complete and continuous than those currently available. The tools and methods employed in an SVR system, however, are imperfect and require rigorous validation and continuous quality assurance; sampling strategies for SVR are also still evolving. Nonetheless, interest in establishing SVR is rapidly growing in Africa and Asia. Better systems for reporting and recording data on vital events will be sustainable only if developed hand-in-hand with existing health information strategies at the national and district levels, governance structures, and agendas for social research and development monitoring. If the global community wishes to have mortality measurements 5 or 10 years hence, the foundation stones of SVR must be laid today.

Relevance:

90.00%

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial complexity measure for a class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the problem's sample-size requirement as a function of the PAC parameters. The exact sample-size requirement, however, is less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
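For orientation, one classical distribution-free sufficient bound on the PAC sample size (due to Blumer, Ehrenfeucht, Haussler and Warmuth, 1989) can be computed directly; as the abstract argues, such generic bounds may sit far above the exact, algorithm-dependent requirement.

```python
import math

def pac_upper_bound(vc_dim, epsilon, delta):
    """A classical sufficient number of examples for PAC learning a class
    of the given VC dimension to accuracy epsilon with confidence 1-delta
    (Blumer et al., 1989): max(4/eps*log2(2/delta), 8d/eps*log2(13/eps))."""
    m1 = (4 / epsilon) * math.log2(2 / delta)
    m2 = (8 * vc_dim / epsilon) * math.log2(13 / epsilon)
    return math.ceil(max(m1, m2))

# VC dimension 1, accuracy 0.1, confidence 0.95:
print(pac_upper_bound(1, 0.1, 0.05))   # 562 examples suffice by this bound
```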

Relevance:

90.00%

Abstract:

The coming out process has been conceptualized as a developmental imperative for those who will eventually accept their same-sex attractions. It is widely accepted that homophobia, heterosexism, and homonegativity are cultural realities that may complicate this developmental process for gay men. The current study views coming out as an extra-developmental life task that is at best a stressful event, and at worst traumatic when coming out results in the rupture of salient relationships with parents, siblings, and/or close friends. To date, the minority stress model (Meyer, 1995; 2003) has been utilized as an organizing framework for empirically examining external stressors and mental health disparities for lesbians, gay men, and bisexual individuals in the United States. The current study builds on this literature by focusing on how gay men make sense of and represent the coming out process in a semi-structured interview, more specifically by examining the legacy of the coming out process on indicators of wellness. In a two-part process, this study first employs the narrative-coherence framework well articulated in the adult attachment literature to explore both the variation in and the implications of the coming out experience for a sample of gay men (n = 60) in romantic relationships (n = 30 couples). In particular, this study employed constructs identified in the adult attachment literature, namely Preoccupied and Dismissing current state of mind, to code a Coming Out Interview (COI). In the present study, current state of mind refers to the degree of coherent discourse produced about coming out experiences as relayed during the COI. Multilevel analyses tested the extent to which these COI dimensions, as revealed through an analysis of coming out narratives in the COI, were associated with relationship quality, including self-reported satisfaction and observed emotional tone in a standard laboratory interaction task, and with self-reported symptoms of psychopathology. Multilevel analyses also assessed Acceptance by primary relationship figures at the time of disclosure, as well as the degree of Outness at the time of the study. Results revealed that participants' narratives on the COI varied with regard to Preoccupied and Dismissing current state of mind, suggesting that the AAI coding system provides a viable organizing framework for extracting meaning from coming out narratives as related to attachment-relevant constructs. Multilevel modeling revealed construct validity of the attachment dimensions assessed via the COI: attachment (i.e., Preoccupied and Dismissing current state of mind) as assessed via the Adult Attachment Interview (AAI) was significantly correlated with the corresponding COI variables. These findings suggest both methodological and conceptual convergence between the two measures. With one exception, however, COI Preoccupied and Dismissing current state of mind did not predict relationship outcomes or self-reported internalizing and externalizing symptoms. Further analyses revealed that the degree to which one is out to others moderated the relationship between COI Preoccupied and internalizing: specifically, for those who were less out to others, there was a significant positive relationship between Preoccupied current state of mind towards coming out and internalizing symptoms.
In addition, the degree of perceived acceptance of sexual orientation by salient relationship figures at the time of disclosure emerged as a predictor of mental health. In particular, Acceptance was significantly negatively related to internalizing symptoms. Overall, the results offer preliminary support that gay men's narratives do reflect variation as assessed by attachment dimensions, and they highlight the role of Acceptance by salient relationship figures at the time of disclosure. Still, for the most part, current state of mind towards coming out in this study was not associated with relationship quality or self-reported indicators of mental health. This finding may be a function of low statistical power given the modest sample size. However, the relationship between Preoccupied current state of mind and mental health (i.e., internalizing) appears to depend on the degree of Outness. In addition, the response of primary relationship figures to coming out may be a relevant factor in shaping mental health outcomes for gay men. Limitations and suggestions for future research and clinical intervention are offered.

Relevance:

90.00%

Abstract:

This report discusses the calculation of analytic second-order bias corrections for the maximum likelihood estimates (MLEs) of the unknown parameters of distributions used in quality and reliability analysis. It is well known that MLEs are widely used to estimate the unknown parameters of probability distributions due to their various desirable properties; for example, MLEs are asymptotically unbiased, consistent, and asymptotically normal. However, many of these properties depend on extremely large sample sizes, and properties such as unbiasedness may not hold for small or even moderate sample sizes, which are more common in real data applications. Therefore, bias-correction techniques for the MLEs are desired in practice, especially when the sample size is small. Two commonly used techniques to reduce the bias of MLEs are the 'preventive' and 'corrective' approaches. Both can reduce the bias of the MLEs to order O(n⁻²), but the 'preventive' approach does not have an explicit closed-form expression, so we mainly focus on the 'corrective' approach in this report. To illustrate the importance of bias correction in practice, we apply the bias-corrected method to two popular lifetime distributions: the inverse Lindley distribution and the weighted Lindley distribution. Numerical studies based on the two distributions show that the considered bias-correction technique is highly recommended over commonly used estimators without bias correction. Therefore, special attention should be paid when estimating the unknown parameters of probability distributions in scenarios where the sample size is small or moderate.
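A small Monte Carlo check conveys the 'corrective' idea. We use the exponential rate as a stand-in, since its first-order correction has a simple closed form; the Lindley-type corrections studied in the report are more involved but follow the same logic.

```python
import random
import statistics

def mle_bias_demo(rate=2.0, n=10, reps=20000, seed=1):
    """The MLE of an exponential rate, 1/mean, has O(1/n) upward bias;
    multiplying by (n - 1)/n is the closed-form 'corrective' adjustment
    and removes the bias exactly in this model."""
    rng = random.Random(seed)
    raw, corrected = [], []
    for _ in range(reps):
        sample = [rng.expovariate(rate) for _ in range(n)]
        mle = 1 / statistics.mean(sample)       # MLE of the rate
        raw.append(mle)
        corrected.append((n - 1) / n * mle)     # bias-corrected estimator
    print(f"true rate {rate}: mean MLE {statistics.mean(raw):.3f}, "
          f"mean corrected {statistics.mean(corrected):.3f}")

mle_bias_demo()   # MLE averages ~2.22 at n = 10; corrected averages ~2.00
```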

Relevance:

80.00%

Abstract:

This study aimed at evaluating whether human papillomavirus (HPV) groups and E6/E7 mRNA of HPV 16, 18, 31, 33, and 45 are prognostic of cervical intraepithelial neoplasia (CIN) 2 outcome in women with a cervical smear showing a low-grade squamous intraepithelial lesion (LSIL). This cohort study included women with biopsy-confirmed CIN 2 who were followed up for 12 months, with cervical smear and colposcopy performed every three months. Women with a negative or low-risk HPV status showed 100% CIN 2 regression. The CIN 2 regression rates at the 12-month follow-up were 69.4% for women with alpha-9 HPV versus 91.7% for other HPV species or HPV-negative status (P < 0.05). For women with HPV 16, the CIN 2 regression rate at the 12-month follow-up was 61.4% versus 89.5% for other HPV types or HPV-negative status (P < 0.05). The CIN 2 regression rate was 68.3% for women who tested positive for HPV E6/E7 mRNA versus 82.0% for the negative results, but this difference was not statistically significant. The expectant management for women with biopsy-confirmed CIN 2 and previous cytological tests showing LSIL exhibited a very high rate of spontaneous regression. HPV 16 is associated with a higher CIN 2 progression rate than other HPV infections. HPV E6/E7 mRNA is not a prognostic marker of the CIN 2 clinical outcome, although this analysis cannot be considered conclusive. Given the small sample size, this study could be considered a pilot for future larger studies on the role of predictive markers of CIN 2 evolution.

Relevance:

80.00%

Abstract:

Long-acting reversible contraceptives (LARCs) include the copper-releasing intrauterine device (IUD), the levonorgestrel-releasing intrauterine system (LNG-IUS) and implants. Despite the high contraceptive efficacy of LARCs, their prevalence of use remains low in many countries. The objective of this study was to assess the main reasons for switching from contraceptive methods requiring daily or monthly compliance to LARC methods within a Brazilian cohort. Women of 18-50 years of age using different contraceptives and wishing to switch to a LARC method answered a questionnaire regarding their motivations for switching from their current contraceptive. Continuation rates were evaluated 1 year after method initiation. Sample size was calculated at 1040 women. Clinical performance was evaluated by life table analysis. The cutoff date for analysis was May 23, 2013. Overall, 1167 women were interviewed; however, after 1 year of use, the medical records of only 1154 women were available for review. The main personal reason for switching, as reported by the women, was fear of becoming pregnant while the main medical reasons were nausea and vomiting and unscheduled bleeding. No pregnancies occurred during LARC use, and the main reasons for discontinuation were expulsion (in the case of the IUD and LNG-IUS) and a decision to undergo surgical sterilization (in the case of the etonogestrel-releasing implant). Continuation rate was ~95.0/100 women/year for the three methods. Most women chose a LARC method for its safety and for practical reasons, and after 1 year of use, most women continued with the method.

Relevance:

80.00%

Abstract:

The objective of this study was to review the growth curves for Turner syndrome, evaluate their methodological and statistical quality, and suggest potential growth curves for clinical practice guidelines. The search was carried out in the Medline and Embase databases. Of 1,006 references identified, 15 were included. The studies constructed curves for weight, height, weight/height, body mass index, head circumference, height velocity, leg length, and sitting height. Sample sizes ranged between 47 and 1,565 girls (total = 6,273) aged 0 to 24 years, born between 1950 and 2006. The number of measurements ranged from 580 to 9,011 (total = 28,915). Most studies showed strengths such as sample size, exclusion of girls treated with growth hormone or androgens, and analysis of confounding variables. However, limitations included growth curves restricted to height, lack of information about selection bias, and limited reporting of distributional properties and smoothing methods. In conclusion, we observe the need to construct an international growth reference for girls with Turner syndrome, in order to provide support for clinical practice guidelines.

Relevance:

80.00%

Abstract:

Advances in diagnostic research are moving towards methods whereby periodontal risk can be identified and quantified by objective measures using biomarkers. Patients with periodontitis may have elevated circulating levels of specific inflammatory markers that correlate with the severity of the disease. The purpose of this study was to evaluate whether serum levels of inflammatory biomarkers are differentially expressed in healthy and periodontitis patients. Twenty-five patients (8 healthy patients and 17 chronic periodontitis patients) were enrolled in the study. A 15 mL blood sample was used for identification of the inflammatory markers with a human inflammatory flow cytometry multiplex assay. Among the 24 cytokines assessed, only 3 (RANTES, MIG and Eotaxin) differed statistically between groups (p < 0.05). In conclusion, some of the selected markers of inflammation are differentially expressed in healthy and periodontitis patients. Cytokine profile analysis may be further explored to distinguish periodontitis patients from those free of disease and to serve as a measure of risk. The present data, however, are limited, and studies with larger sample sizes are required to validate the findings for the specific biomarkers.

Relevance:

80.00%

Abstract:

Approximately seven million Brazilians over 40 years of age are affected by COPD. In recent years, important advances have been made in the pharmacological treatment of this condition. We conducted a systematic review of original articles on the pharmacological treatment of COPD published between 2005 and 2009, indexed in national and international databases and written in English, Spanish, or Portuguese. Articles with a sample size smaller than 100 individuals were excluded. The outcomes investigated were symptoms, pulmonary function, quality of life, exacerbations, mortality, and adverse effects. Articles were classified according to the Global Initiative for Chronic Obstructive Lung Disease criteria for level of scientific evidence (grades of recommendation A, B, and C). Of the 84 articles selected, 40 (47.6%), 18 (21.4%), and 26 (31.0%) were classified as grades A, B, and C, respectively. Of the 420 analyses drawn from these articles, 236 involved comparisons of drugs against placebo for the various outcomes studied. In these 236 analyses, the most frequently studied drugs were long-acting anticholinergics, the combination of long-acting β2 agonists plus inhaled corticosteroids, and inhaled corticosteroids alone, in 66, 48, and 42 analyses, respectively. In the same analyses, the outcomes pulmonary function, adverse effects, and symptoms accounted for 58, 54, and 35 analyses, respectively. Most studies showed that the medications relieved symptoms, improved quality of life and pulmonary function, and prevented exacerbations. Few studies addressed mortality, and the role of pharmacological treatment in this outcome is not yet completely defined. The drugs studied are safe for the management of COPD, with few adverse effects.