939 results for Zero-inflated models, Statistical models, Poisson, Negative binomial, Statistical methods


Relevance:

100.00%

Publisher:

Abstract:

Innovations in the current interconnected world of organizations have led to a focus on business models as a fundamental statement of direction and identity. Although industry transformations generally emanate from technological changes, recent examples suggest they may also arise from the introduction of new business models. In the past, different types of airline business models could be clearly separated from each other. This has changed in recent years, however, partly because of industry consolidation and partly in reaction to competitive pressure. It can at least be concluded that, in the future, the distinction between different business models will become less clear. To advance the use of business models as a concept, it is essential to be able to compare them and to perform analyses that identify the business models with the highest potential. Such comparison can contribute substantially to understanding the synergies and incompatibilities between two airlines that are about to merge, as illustrated here by an analysis of the Swiss Air-Lufthansa merger. The idea is to develop quantitative methods and tools for comparing and analyzing airline business models. The paper identifies available methods of comparing airline business models and lays the groundwork for a quantitative comparison model, which can be a useful tool for business model analysis when two airlines merge.

Relevance:

100.00%

Publisher:

Abstract:

Public Private Partnerships (PPPs) are implemented mostly for three reasons: to circumvent budgetary constraints, to encourage efficiency, and to improve the quality of public infrastructure provision. One way of reaching the latter objective is to introduce performance-based standards tied to bonuses and penalties that reward or punish the contractor's performance. These performance-based standards often refer to different aspects such as technical, environmental and safety issues. This paper focuses on the implementation of safety-based incentives in PPPs. Its main aim is to analyze whether the incentives to improve road safety in PPPs are effective at improving safety ratios in Spain. To this end, negative binomial regression models were applied to data from the Spanish high-capacity network in 2006. The findings indicate that even though road safety is strongly influenced by variables largely outside the contractor's control, such as the average annual daily traffic and the percentage of heavy vehicles on the highway, the implementation of safety incentives in PPPs has a positive influence on the reduction of fatalities, injuries and accidents.
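
The abstract names the method but not its specification; the following is a minimal sketch of such a model in Python with statsmodels, on simulated data. The covariate names (aadt, pct_heavy, incentive) and effect sizes are illustrative assumptions, not the paper's data or code.

```python
# Hedged sketch of a negative binomial accident-count model like the one the
# abstract describes; data and variable names are simulated/illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical highway sections
df = pd.DataFrame({
    "aadt": rng.uniform(5_000, 80_000, n),   # average annual daily traffic
    "pct_heavy": rng.uniform(5, 30, n),      # % heavy vehicles
    "incentive": rng.integers(0, 2, n),      # 1 = contract has safety incentives
})
# Simulate NB2 counts with mean mu (numpy's parameterization: p = k / (k + mu))
mu = np.exp(-1 + 1e-5 * df.aadt + 0.02 * df.pct_heavy - 0.3 * df.incentive)
df["accidents"] = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

# Log-link negative binomial regression; a negative, significant coefficient
# on `incentive` is the pattern the paper reports for safety incentives.
res = smf.negativebinomial("accidents ~ aadt + pct_heavy + incentive",
                           data=df).fit(disp=False)
print(res.summary())
```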

Relevance:

100.00%

Publisher:

Abstract:

The goal of this paper is to evaluate whether the incentives incorporated in toll highway concession contracts to encourage private operators to adopt measures that reduce accidents are actually effective at improving safety. To this end, we implemented negative binomial regression models using highway characteristics and accident data from toll highway concessions in Spain from 2007 to 2009. Our results show that even though road safety is strongly influenced by variables not managed by the contractor, such as the annual average daily traffic (AADT), the percentage of heavy vehicles on the highway, the number of lanes, the number of intersections and the average speed, the implementation of these incentives has a positive influence on the reduction of accidents and injuries. This measure therefore seems to be an effective way of improving safety performance in road networks.
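
The case for negative binomial over Poisson in studies like this one is overdispersion in crash counts; a quick, hedged check on simulated data is to compare the two fits by AIC.

```python
# Why negative binomial rather than Poisson: accident counts are usually
# overdispersed (variance > mean). A sketch comparing the two fits by AIC
# on simulated data; column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"aadt": rng.uniform(5_000, 80_000, 300)})
mu = np.exp(-1 + 2e-5 * df.aadt)
df["accidents"] = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))

print(df.accidents.mean(), df.accidents.var())  # variance well above the mean

pois = smf.poisson("accidents ~ aadt", data=df).fit(disp=False)
nb = smf.negativebinomial("accidents ~ aadt", data=df).fit(disp=False)
print({"Poisson AIC": pois.aic, "NB AIC": nb.aic})  # NB should fit markedly better
```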

Relevance:

100.00%

Publisher:

Abstract:

The Transportation Research Board is a conference of recognized international prestige in the field of transport research. Although its published proceedings are digital and carry neither an ISSN nor an ISBN, we consider it important enough to be counted in the indicators. This paper focuses on the implementation of safety-based incentives in Public Private Partnerships (PPPs). The aim of the paper is twofold: first, to evaluate whether PPPs lead to an improvement in road safety when compared with other infrastructure management systems; and second, to analyze whether the incentives to improve road safety in PPP contracts in Spain have been effective at improving safety performance. To this end, negative binomial regression models were applied to data from the Spanish high-capacity network covering the years 2007-2009. The results showed that even though road safety is strongly influenced by variables that are not manageable by the private concessionaire, such as the average annual daily traffic, the implementation of safety incentives in PPPs has a positive influence on the reduction of accidents.
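
Because sections of a network differ greatly in traffic and length, count models of this kind are often given a traffic-exposure offset so that coefficients describe accident rates; the abstract does not state whether the authors did so, so the sketch below is an assumption, with illustrative names and data.

```python
# Hedged variant with a traffic-exposure offset: coefficients then describe
# accident *rates* per vehicle-km rather than raw counts. Names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 250
df = pd.DataFrame({
    "aadt": rng.uniform(5_000, 80_000, n),
    "length_km": rng.uniform(1, 30, n),
    "ppp": rng.integers(0, 2, n),            # 1 = PPP-managed section
})
# Exposure in 100 million vehicle-km per year
exposure = df.aadt * df.length_km * 365 / 1e8
mu = 1.2 * exposure * np.exp(-0.25 * df.ppp)
df["accidents"] = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

res = smf.negativebinomial("accidents ~ ppp", data=df,
                           offset=np.log(exposure)).fit(disp=False)
print(np.exp(res.params["ppp"]))  # accident-rate ratio, PPP vs non-PPP sections
```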

Relevance:

100.00%

Publisher:

Abstract:

Deterministic Safety Analysis (DSA) is the procedure used in the design of the safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes and define the regulatory acceptance criteria (RAC) that these magnitudes must fulfill. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies use markedly pessimistic models and assumptions and are therefore relatively simple; they do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions and must be supplemented with an uncertainty analysis of their main results; they are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and the uncertainty is typically represented probabilistically. For conservative methodologies, the RAC simply restrict the calculated values of the safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. This Thesis develops how uncertainty is introduced into the RAC. Confinement to the regulator-defined acceptance region is retained, but strict fulfillment is replaced by fulfillment with a high certainty level, understood here as a high probability, corresponding to the calculation uncertainty of the safety magnitudes. That uncertainty can be regarded as originating in the inputs to the calculation model and propagated through the model; the uncertain inputs include the initial and boundary conditions and the empirical model parameters used to incorporate the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not less than a value P0 close to 1 and defined by the regulator (the probability or coverage level). The calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to this ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly; it is itself an uncertain quantity, which justifies another term used here for this epistemic uncertainty: metauncertainty.
The RAC must include both types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and one for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or combined; in either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The Thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties, for two reasons: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the combined RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The Thesis classifies the statistical methods for verifying RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators, and on probability (of success or failure) estimators. The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods, but rather to relate the three categories and examine the most used methods from a regulatory standpoint. The most used method deserves special mention: that due to Wilks, together with its extension to multidimensional variables due to Wald. The P-method counterpart of Wilks' method is the Clopper-Pearson interval, typically ignored in the BEPU realm. The problem of the computational cost of an uncertainty analysis is also tackled. Wilks', Wald's and Clopper-Pearson's methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, an extended belief in the BEPU realm, that the multidimensional problem can only be tackled with the Wald extension (whose computational cost grows with the problem dimension), is proven to be false. When the components of the magnitude are calculated independently of one another, the influence of the problem dimension on the cost cannot be avoided. The early BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed an emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties at a lower computational cost. The metamodel must be built with the input parameters contributing the most to the output uncertainty, which requires a previous importance or sensitivity analysis. The surrogate model is practically inexpensive to run, so it can be exhaustively analyzed, for example through Monte Carlo sampling. The epistemic uncertainty due to sampling is thereby reduced to almost zero, and the BEPU RAC for metamodels becomes a simple probability.
The regulatory authority will tend to accept statistical methods which need a minimum of assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian methods. The BEPU RAC is based on a second-order probability. The probability that the safety magnitudes lie inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of the magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. This interpretation motivates a definition proposed in the Thesis: a probabilistic definition of safety margin (SM). The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it ranges in the interval (0,1), and it can easily be generalized to multiple dimensions. Furthermore, probabilistic SMs combine according to the laws of probability, and they have a basic property: they are not symmetric. The term safety margin can be applied to several situations: the distance from a calculated value to a regulatory limit (licensing margin); from the real value to the calculated value of a magnitude (analytical margin); or from the regulatory limit to the damage threshold of a barrier (barrier margin). These representations of distances (in the magnitudes' range) as probabilities can be applied to the quantification of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the Thesis, useful for comparing different methods of constructing tolerance limits and regions. One topic has not been rigorously tackled to date: the validation of BEPU methodologies. Before being applied in licensing, methodologies must be validated on the basis of comparisons between their predictions and real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that the real values of the safety magnitudes, and not only the calculated values, fulfill them. In the Thesis it is proved that a sufficient condition for this goal is the conjunction of 2 criteria: the BEPU licensing RAC and an analogous criterion applied to validation. This last criterion must be proved in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary: the higher one of them, the lower the other. Current regulatory practice sets a high value on the licensing margin, so that the required DG is low. The adoption of lower values for P0 would imply a weaker exigence on RAC fulfillment and, on the other hand, a higher exigence on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin requires a higher computational cost to demonstrate it; the computational efforts are therefore also complementary. If medium levels are adopted, the required DG is also medium, the methodology does not need to be very conservative, and the total computational effort (licensing plus validation) can be optimized.
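
The link between Wilks' minimum sample size and the Clopper-Pearson interval mentioned in the abstract can be made concrete in a few lines; this is a sketch of the standard first-order, one-sided forms, not code from the Thesis.

```python
# Worked check of the Wilks / Clopper-Pearson connection discussed above
# (first-order, one-sided case).
import math
from scipy import stats

p0, conf = 0.95, 0.95   # regulatory probability (coverage) and confidence levels

# Wilks: the sample maximum is a (p0, conf) upper tolerance limit once
# 1 - p0**n >= conf, giving the familiar minimum of 59 code runs.
n_wilks = math.ceil(math.log(1 - conf) / math.log(p0))
print(n_wilks)  # 59

# Clopper-Pearson (P-method counterpart): with k of n runs fulfilling the
# criterion, the one-sided lower confidence bound on the fulfillment
# probability is a Beta quantile. With k = n = 59 it just reaches p0.
n = k = n_wilks
lower = stats.beta.ppf(1 - conf, k, n - k + 1)
print(lower)  # ~0.9503 >= 0.95
```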

Relevance:

100.00%

Publisher:

Abstract:

v. 1. Multicomponent methods.--v. 2. Mathematical models.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

100.00%

Publisher:

Abstract:

We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on generating codewords as Boolean sums of the original message bits, defined by two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how a finite system size affects the error probability.
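
A toy version of the codeword construction the abstract describes, using a small systematic (Hamming-style) construction rather than a true LDPC ensemble; everything here is illustrative.

```python
# Toy illustration of codewords as Boolean (mod-2) sums of message bits,
# defined by a random sparse 0/1 matrix. Real LDPC codes use much larger,
# carefully constructed sparse matrices; this is only a sketch.
import numpy as np

rng = np.random.default_rng(0)
k, m = 8, 4                                  # message bits, parity checks
A = (rng.random((k, m)) < 0.3).astype(int)   # sparse binary matrix
G = np.hstack([np.eye(k, dtype=int), A])     # systematic generator [I | A]
H = np.hstack([A.T, np.eye(m, dtype=int)])   # parity-check matrix [A^T | I]

msg = rng.integers(0, 2, k)
codeword = msg @ G % 2                       # parity bits = Boolean sums of msg bits
print(H @ codeword % 2)                      # all-zero syndrome: valid codeword

codeword[k] ^= 1                             # flip a bit (channel noise)
print(H @ codeword % 2)                      # nonzero syndrome drives the decoder
```

The statistical-physics treatment then maps each bit x to an Ising spin (-1)^x, turning every parity check into a multi-spin coupling.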

Relevance:

100.00%

Publisher:

Abstract:

The PC12 and SH-SY5Y cell models have been proposed as potentially realistic models for investigating neuronal cell toxicity. The effects of oxidative stress (OS) caused by both H2O2 and Aβ on both cell models were assessed by several methods. Cell toxicity was quantitated by measuring cell viability using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium (MTT) viability assay, an indicator of the integrity of the electron transfer chain (ETC), and by examining cell morphology with fluorescence and video microscopy; both showed OS to decrease viability and alter morphology. Levels of intracellular peroxide production and changes in glutathione and carbonyl levels were also assessed, showing OS to increase intracellular peroxide production and glutathione and carbonyl levels. Differentiated SH-SY5Y cells were also employed and were observed to exhibit the greatest sensitivity to toxicity. The neurotrophic factor nerve growth factor (NGF) was shown to protect against OS: cells pre-treated with NGF showed higher viability after OS, generally less apoptotic morphology, fewer apoptotic nucleoids, generally lower levels of intracellular peroxides, and changes in gene expression. Brain-derived neurotrophic factor (BDNF) and ascorbic acid (AA) were also investigated. BDNF showed no specific neuroprotection, although the preliminary data warrant further investigation. AA showed a 'Janus face', exhibiting either antioxidant action and neuroprotection or pro-oxidant action depending on the situation. The results showed that the toxic effects of compounds such as Aβ and H2O2 are cell-type dependent, and that OS alters glutathione metabolism in neuronal cells. Following toxic insult, glutathione levels are depleted to low levels. It is suggested herein that this lowering triggers an adaptive response, causing alterations in glutathione metabolism, as assessed by evaluating the expression of glutathione biosynthetic enzyme mRNAs and the subsequent increase in glutathione peroxidase (GPX) levels.

Relevance:

100.00%

Publisher:

Abstract:

The Highway Safety Manual (HSM) estimates roadway safety performance using predictive models that were calibrated with national data. Calibration factors are then used to adjust these predictive models to local conditions for local applications. The HSM recommends that local calibration factors be estimated using 30 to 50 randomly selected sites that together experienced at least 100 crashes per year, and that the factors be updated every two to three years, preferably annually. However, these recommendations are based primarily on expert opinion rather than on data-driven research findings. Furthermore, most agencies do not have data for many of the input variables recommended in the HSM. This dissertation aims to address three major data needs affecting the estimation of calibration factors: (1) the required minimum sample sizes for different roadway facilities, (2) the required frequency of calibration factor updates, and (3) the influential variables affecting calibration factors. First, statewide segment and intersection data were collected for most of the HSM-recommended calibration variables using a Google Maps application. In addition, eight years (2005-2012) of traffic and crash data were retrieved from existing Florida Department of Transportation databases. With these data, the effect of the sample-size criterion on calibration factor estimates was first studied using a sensitivity analysis. The results showed that the minimum sample sizes not only vary across roadway facilities, but are also significantly higher than those recommended in the HSM. In addition, paired-sample t-tests showed that calibration factors in Florida need to be updated annually. To identify influential variables affecting the calibration factors for roadway segments, the variables were prioritized by combining the results from three different methods: negative binomial regression, random forests, and boosted regression trees. Only a few variables were found to explain most of the variation in the crash data, with traffic volume consistently the most influential. Roadside object density, major and minor commercial driveway densities, and minor residential driveway density were also identified as influential variables.
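
For reference, the local calibration factor itself is a simple ratio: total observed crashes over the total predicted by the HSM safety performance functions (SPFs) at the sampled sites. The sample-size question studied here is how many sites that ratio needs in order to be stable. The numbers below are made up.

```python
# The HSM local calibration factor: observed / SPF-predicted crashes summed
# over the calibration sites. Values are made up for illustration.
import numpy as np

observed = np.array([3, 0, 5, 2, 4, 1, 7, 2])                   # crash counts
predicted = np.array([2.1, 0.8, 3.9, 2.5, 3.2, 1.1, 5.0, 2.4])  # SPF predictions

C = observed.sum() / predicted.sum()
print(C)  # C > 1: local sites see more crashes than the national models predict
```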

Relevance:

100.00%

Publisher:

Abstract:

This research aimed to analyse the effect of different territorial divisions on the random fluctuation of socioeconomic indicators related to social determinants of health. It is an ecological study combining statistical methods for individual and aggregate data analysis, using five databases derived from the 2010 Brazilian demographic census (overall sample results by weighting area). These data were grouped into the following levels: households; weighting areas; cities; Immediate Urban Associated Regions; and Intermediate Urban Associated Regions. A theoretical model related to social determinants of health was used, with "household with death" as the dependent variable and the following independent variables: black race; income; non-attendance at childcare or school; illiteracy; and low schooling. The data were analysed with individual-level Poisson regression, multilevel Poisson regression and multiple linear regression, in light of the theoretical framework of the area. A greater proportion of households with deaths was identified among households with at least one black resident and among lower-income, illiterate and less educated households, and those whose members do not attend or did not attend school or day-care. The adjusted model showed that the highest adjusted prevalence ratio was related to income, with a risk value of 1.33 for households with at least one resident with an average personal income below R$ 655.00 (Brazilian currency). The multilevel analysis demonstrated a context effect when the variables were subjected to area effects, insofar as the random effects were significant for all models, with prevalence ratios higher in the areas of smaller dimensions (weighting areas, coefficient 0.035; cities, coefficient 0.024). The ecological analyses showed that the variables income and low schooling had explanatory potential for the outcome in all models, with income having greater power to determine household deaths, especially in the models for Immediate Urban Associated Regions (standardized coefficient -0.616) and Intermediate Urban Associated Regions (standardized coefficient -0.618). It was concluded that there was a context effect on the random fluctuation of the socioeconomic indicators related to social determinants of health, explained by the characteristics of the territorial divisions and of the individuals who live or work there. Context effects were better identified in areas of smaller dimensions, which are more favourable for explaining phenomena related to social determinants of health, especially in studies of societies marked by social inequalities. Composition effects were better identified in the Regions of Urban Articulation, shaped through mechanisms similar to the phenomenon under study.
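
A minimal sketch of the individual-level step, using Poisson regression on the binary outcome to obtain prevalence ratios directly; the data are simulated, and the abstract's 1.33 income effect is borrowed only as an illustrative "true" value.

```python
# Sketch of the individual-level analysis: Poisson regression on a binary
# "household with death" outcome yields prevalence ratios. Simulated data;
# covariate names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5_000
df = pd.DataFrame({
    "low_income": rng.integers(0, 2, n),   # income below R$ 655.00
    "illiterate": rng.integers(0, 2, n),
})
p = np.exp(-3 + np.log(1.33) * df.low_income + 0.1 * df.illiterate)
df["death"] = rng.binomial(1, p)

fit = smf.glm("death ~ low_income + illiterate", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")  # robust SEs
print(np.exp(fit.params))  # prevalence ratios; low_income should be near 1.33
```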

Relevance:

100.00%

Publisher:

Abstract:

The relationship between workplace absenteeism and adverse lifestyle factors (smoking, physical inactivity and poor dietary patterns) remains ambiguous. Reliance on self-reported absenteeism and obesity measures may contribute to this uncertainty. Using objective absenteeism and health status measures, the present study aimed to investigate which health status outcomes and lifestyle factors influence workplace absenteeism. Cross-sectional data were obtained from a complex workplace dietary intervention trial, the Food Choice at Work Study, conducted in four multinational manufacturing workplaces in Cork, Republic of Ireland. Participants included 540 randomly selected employees from the four workplaces. Annual count absenteeism data were collected. Physical assessments included objective health status measures (BMI, midway waist circumference and blood pressure). An FFQ measured diet quality, from which DASH (Dietary Approaches to Stop Hypertension) scores were constructed. A zero-inflated negative binomial (zinb) regression model examined associations between health status outcomes, lifestyle characteristics and absenteeism. The mean number of absences was 2·5 (sd 4·5) d. After controlling for sociodemographic and lifestyle characteristics, the zinb model indicated that absenteeism was positively associated with central obesity, which increased the expected absence rate by 72 %. Consuming a high-quality diet and engaging in moderate levels of physical activity were negatively associated with absenteeism, reducing the expected frequency by 50 % and 36 %, respectively. Being in a managerial/supervisory position also reduced the expected frequency by 50 %. To reduce absenteeism, workplace health promotion policies should incorporate recommendations designed to prevent and manage excess weight, improve diet quality and increase the physical activity levels of employees.
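
A hedged sketch of a zinb specification like the one described, using statsmodels on simulated data; the covariate names and the effect sizes (mirroring the reported +72 %, -50 % and -36 %) are illustrative assumptions, not the study's code.

```python
# Zero-inflated negative binomial (zinb) model of absence days: a logit part
# for structural zeros (never absent) plus an NB count part. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(4)
n = 540
df = pd.DataFrame({
    "central_obesity": rng.integers(0, 2, n),
    "high_quality_diet": rng.integers(0, 2, n),  # e.g. top DASH tertile
    "active": rng.integers(0, 2, n),             # moderate physical activity
})
# Count part: rate ratios 1.72, 0.50 and 0.64 used purely as illustration
mu = np.exp(1.0 + np.log(1.72) * df.central_obesity
            + np.log(0.50) * df.high_quality_diet + np.log(0.64) * df.active)
counts = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))
never_absent = rng.random(n) < 0.3               # structural zeros
df["absence_days"] = np.where(never_absent, 0, counts)

X = sm.add_constant(df[["central_obesity", "high_quality_diet", "active"]])
zinb = ZeroInflatedNegativeBinomialP(df["absence_days"], X,
                                     exog_infl=np.ones((n, 1)), p=2)
res = zinb.fit(method="bfgs", maxiter=1000, disp=False)
print(np.exp(res.params))  # count-part coefficients exponentiate to rate ratios
```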

Relevance:

100.00%

Publisher:

Abstract:

Trends in nocturnal birds in Portugal: methods and analysis of a volunteer-based monitoring program. An evaluation of the analysis and data-collection methodologies applied by the NOCTUA-Portugal Program is extremely important to determine whether they are the most suitable for citizen science studies. We compared the results of different analytical methodologies for estimating population trends of nocturnal bird species over the period of the NOCTUA-Portugal Program: simple graphical analysis, generalized linear models (GLM-Poisson and GLMM), generalized additive models (GAM-LOESS and GAM-mgcv) and the TRIM software. We also analyzed the field methodology, assessing the effect of point-count duration on the number of records, comparing point-count efficiency with other studies, and evaluating the variation of responses throughout the night and the effects of time of year, wind, cloud cover and moonlight. The results showed that the most suitable analytical methodology was the GLMM and that no particular adjustment to the field methodology was necessary.
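
As a stand-in for the models compared here, the sketch below fits the log-linear site-by-year Poisson model that underlies TRIM-style trend indices; the GLMM the authors preferred would instead treat the site effect as random (e.g. in R's lme4). Data and dimensions are simulated.

```python
# Stand-in sketch of a log-linear trend model on site-by-year counts (the
# structure behind TRIM-style indices). Site effects are fixed here for
# simplicity; the preferred GLMM would make them random.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
sites, years = 40, 8
df = pd.DataFrame({
    "site": np.repeat(np.arange(sites), years),
    "year": np.tile(np.arange(years), sites),
})
site_effect = rng.normal(0, 0.5, sites)[df["site"]]
df["count"] = rng.poisson(np.exp(0.5 + site_effect - 0.05 * df["year"]))  # ~5 %/yr decline

fit = smf.glm("count ~ C(site) + year", data=df,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params["year"]))  # multiplicative yearly trend, ~0.95
```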

Relevance:

100.00%

Publisher:

Abstract:

The interactions between host individual, host population, and environmental factors modulate parasite abundance in a given host population. Since adult exophilic ticks are highly aggregated on red deer (Cervus elaphus) and this ungulate exhibits significant sexual size dimorphism, life history traits and segregation, we hypothesized that tick parasitism on males and hinds would be differentially influenced by each of these factors. To test this hypothesis, ticks from 306 red deer (182 males and 124 females) were collected over 7 years in a red deer population in south-central Spain. Using generalized linear models with a negative binomial error distribution and a logarithmic link function, we modeled tick abundance on deer with 20 potential predictors. Three models were developed: one for red deer males, one for hinds, and one combining data for males and females and including "sex" as a factor. Our rationale was that if tick burdens on males and hinds relate to the explanatory factors in a differential way, it is not possible to precisely and accurately predict the tick burden on one sex using the model fitted to the other sex, or using the model that combines data from both sexes. Our results showed that deer males were the primary target for ticks, that the weight of each factor differed between sexes, and that each sex-specific model was unable to accurately predict burdens on the animals of the other sex. That is, the results support sex-biased differences. The greater weight of host individual and population factors in the model for males shows that intrinsic deer factors explain tick burden more strongly than the environmental abundance of host-seeking ticks. In contrast, environmental variables predominated in the models explaining tick burdens on hinds.
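
A minimal sketch of the model family used here: a negative binomial GLM with a log link fitted to one sex at a time. The data, covariate names and fixed dispersion are illustrative assumptions (the paper screened 20 candidate predictors).

```python
# Sketch of a sex-specific negative binomial GLM with a log link, as the
# abstract describes. Simulated data; names and effect sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 182                                    # e.g. the male subsample
df = pd.DataFrame({
    "body_mass": rng.normal(120, 20, n),   # host individual factor (kg)
    "deer_density": rng.normal(30, 8, n),  # host population factor (deer/km2)
})
mu = np.exp(-1 + 0.02 * df.body_mass + 0.03 * df.deer_density)
df["ticks"] = rng.negative_binomial(1.0, 1.0 / (1.0 + mu))

# The dispersion alpha is fixed here for simplicity; it can instead be
# estimated with the discrete NegativeBinomial model or profiled over a grid.
fit = smf.glm("ticks ~ body_mass + deer_density", data=df,
              family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(fit.summary())
```

The cross-sex test described above then amounts to scoring this fit's predictions against the observed burdens of the other sex.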